Softpanorama

May the source be with you, but remember the KISS principle ;-)
Skepticism and critical thinking is not panacea, but can help to understand the world better

Slightly Skeptical View on Enterprise Unix Administration



The KISS rule can be expanded as: Keep It Simple, Sysadmin ;-)

This page is written as a protest against overcomplexity and the bizarre data center atmosphere typical of "semi-outsourced" or fully outsourced datacenters ;-). Unix/Linux sysadmins are being killed by the overcomplexity of the environment, new "for profit" technocults like DevOps, and outsourcing. Large swaths of Linux knowledge (and many excellent books) were made obsolete by Red Hat with the introduction of systemd. Especially affected are the older, most experienced members of the team, who have a unique store of organizational knowledge and whose careers allowed them to watch the development of Linux almost from version 0.92.

System administration is still a unique area where people with the ability to program can exercise their creativity with relative ease and can still enjoy the "old style" atmosphere of software development, when you yourself write the specification, implement it, test the program, and then use it in daily work. This is a very exciting, unique opportunity that no DevOps can ever provide.

But the conditions are getting worse and worse. That's why an increasing number of sysadmins are far from excited about working in those positions, or outright want to quit the field (or, at least, work four days a week). And that includes sysadmins who have a tremendous capacity to process and learn new information. Even for them, "enough is enough". The answer is different for each individual sysadmin, but it is usually some variation on the following themes:

  1. Too rapid a pace of change, with a lot of "change for the sake of change" often serving as a smokescreen for outsourcing efforts (VMware yesterday, Azure today, Amazon cloud tomorrow, etc.)
  2. Excessive automation can be a problem. It increases the number of layers between the underlying processes and the sysadmin, and thus makes troubleshooting much harder. Moreover, it often does not produce tangible benefits in comparison with simpler tools, while dramatically increasing the complexity of the environment. See Unix Configuration Management Tools for a deeper discussion of this issue.
  3. Job insecurity due to outsourcing/offshoring -- constant pressure to cut headcount in the name of "efficiency", which in reality is more connected with the size of top brass bonuses than with anything related to how the IT datacenter functions. Sysadmins over 50 are an especially vulnerable category here; if they are laid off, they have almost no chance of getting back into the IT workforce at their previous level of salary and benefits. Often the only job they can find is at Home Depot or a similar retail outlet. See Over 50 and unemployed
  4. A back-breaking level of overcomplexity and bizarre tech decisions crippling the data center (aka crapification). A "Potemkin village culture" often prevails in the evaluation of software in large US corporations: the surface shine is more important than the substance. The marketing brochures and manuals are no different from mainstream news media stories in the level of BS they spew. IBM is especially guilty (look at how they marketed IBM Watson; as Oren Etzioni, CEO of the Allen Institute for AI, noted, "the only intelligent thing about Watson was IBM PR department [push]").
  5. Bureaucratization/fossilization of large companies' IT environments. That includes using "Performance Reviews" (the IT variant of waterboarding ;-) for the enforcement of management policies, priorities, whims, etc. See Office Space (1999) - IMDb for a humorous take on IT culture. That creates alienation from the company (as it should). One can think of the modern corporate data center as an organization where the administration has tremendously more power in the decision-making process and eats up more of the corporate budget, while the people who do the actual work are increasingly ignored and their share of the budget gradually shrinks. Purchasing of "non-standard" software or hardware is often so complicated that it is never attempted, even if the benefits are tangible.
  6. "Neoliberal austerity" (which is essentially another name for the "war on labor") -- Drastic cost cutting measures at the expense of workforce such as elimination of external vendor training, crapification of benefits, limitation of business trips and enforcing useless or outright harmful for business "new" products instead of "tried and true" old with  the same function.  They are often accompanied by the new cultural obsession with "character" (as in "he/she has a right character" -- which in "Neoliberal speak" means he/she is a toothless conformist ;-), glorification of groupthink, and the intensification of surveillance.

As Charlie Schluting noted in 2010 (Enterprise Networking Planet, April 7, 2010):

What happened to the old "sysadmin" of just a few years ago? We've split what used to be the sysadmin into application teams, server teams, storage teams, and network teams. There were often at least a few people, the holders of knowledge, who knew how everything worked, and I mean everything. Every application, every piece of network gear, and how every server was configured -- these people could save a business in times of disaster.

Now look at what we've done. Knowledge is so decentralized we must invent new roles to act as liaisons between all the IT groups.

Architects now hold much of the high-level "how it works" knowledge, but without knowing how any one piece actually does work.

In organizations with more than a few hundred IT staff and developers, it becomes nearly impossible for one person to do and know everything. This movement toward specializing in individual areas seems almost natural. That, however, does not provide a free ticket for people to turn a blind eye.

Specialization

You know the story: Company installs new application, nobody understands it yet, so an expert is hired. Often, the person with a certification in using the new application only really knows how to run that application. Perhaps they aren't interested in learning anything else, because their skill is in high demand right now. And besides, everything else in the infrastructure is run by people who specialize in those elements. Everything is taken care of.

Except, how do these teams communicate when changes need to take place? Are the storage administrators teaching the Windows administrators about storage multipathing; or worse, logging in and setting it up because it's faster for the storage gurus to do it themselves? A fundamental level of knowledge is often lacking, which makes it very difficult for teams to brainstorm about new ways to evolve IT services. The business environment has made it OK for IT staffers to specialize and only learn one thing.

If you hire someone certified in the application, operating system, or network vendor you use, that is precisely what you get. Certifications may be a nice filter to quickly identify who has direct knowledge in the area you're hiring for, but often they indicate specialization or compensation for lack of experience.

Resource Competition

Does your IT department function as a unit? Even 20-person IT shops have turf wars, so the answer is very likely, "no." As teams are split into more and more distinct operating units, grouping occurs. One IT budget gets split between all these groups. Often each group will have a manager who pitches his needs to upper management in hopes they will realize how important the team is.

The "us vs. them" mentality manifests itself at all levels, and it's reinforced by management having to define each team's worth in the form of a budget. One strategy is to illustrate a doomsday scenario. If you paint a bleak enough picture, you may get more funding. Only if you are careful enough to illustrate the failings are due to lack of capital resources, not management or people. A manager of another group may explain that they are not receiving the correct level of service, so they need to duplicate the efforts of another group and just implement something themselves. On and on, the arguments continue.

Most often, I've seen competition between server groups result in horribly inefficient uses of hardware. For example, what happens in your organization when one team needs more server hardware? Assume that another team has five unused servers sitting in a blade chassis. Does the answer change? No, it does not. Even in test environments, sharing doesn't often happen between IT groups.

With virtualization, some aspects of resource competition get better and some remain the same. When first implemented, most groups will be running their own type of virtualization for their platform. The next step, I've most often seen, is for test servers to get virtualized. If a new group is formed to manage the virtualization infrastructure, virtual machines can be allocated to various application and server teams from a central pool and everyone is now sharing. Or, they begin sharing and then demand their own physical hardware to be isolated from others' resource hungry utilization. This is nonetheless a step in the right direction. Auto migration and guaranteed resource policies can go a long way toward making shared infrastructure, even between competing groups, a viable option.

Blamestorming

The most damaging side effect of splitting into too many distinct IT groups is the reinforcement of an "us versus them" mentality. Aside from the notion that specialization creates a lack of knowledge, blamestorming is what this article is really about. When a project is delayed, it is all too easy to blame another group. The SAN people didn't allocate storage on time, so another team was delayed. That is the timeline of the project, so all work halted until that hiccup was resolved. Having someone else to blame when things get delayed makes it all too easy to simply stop working for a while.

More related to the initial points at the beginning of this article, perhaps, is the blamestorm that happens after a system outage.

Say an ERP system becomes unresponsive a few times throughout the day. The application team says it's just slowing down, and they don't know why. The network team says everything is fine. The server team says the application is "blocking on IO," which means it's a SAN issue. The SAN team says there is nothing wrong, and other applications on the same devices are fine. You've run through nearly every team, but still without an answer. The SAN people don't have access to the application servers to help diagnose the problem. The server team doesn't even know how the application runs.

See the problem? Specialized teams are distinct and by nature adversarial. Specialized staffers often relegate themselves into a niche knowing that as long as they continue working at large enough companies, "someone else" will take care of all the other pieces.

I unfortunately don't have an answer to this problem. Maybe rotating employees between departments will help. They gain knowledge and also get to know other people, which should lessen the propensity to view them as outsiders.

The tragic part of the current environment is that it is like shifting sands. And that is not only due to the "natural process of crapification of operating systems," in which the OS gradually loses its architectural integrity. The pace of change is simply too fast for mere humans to adapt to. And most of it represents "change for the sake of change," not some valuable improvement or extension of capabilities.

If you are a sysadmin who writes his own scripts, you are writing on a sand beach, spending a lot of time thinking over and debugging your scripts, which raises your productivity and diminishes the number of possible errors. But the next OS version or organizational change wipes out a considerable part of your work, and you need to revise your scripts again. The tale of Sisyphus can now be re-interpreted as a prescient warning about the thankless task of the sysadmin, forced to learn new stuff and maintain his own script library again and again ;-) Sometimes a lot of work is wiped out because the corporate brass decides to switch to a different flavor of Linux, or "yet another flavor" is added due to a large acquisition. Add to this the inevitable technological changes, and the question arises: can't you get a more respectable profession, one in which 66% of your knowledge is not replaced within the next ten years? For a talented and not too old person, staying in the sysadmin profession is probably a mistake, or at least a very questionable decision.

The Balkanization of Linux also shows in the Tower of Babel of system programming languages (C, C++, Perl, Python, Ruby, Go, Java, to name a few) and in systems that supposedly should help you but mostly do quite the opposite (Puppet, Ansible, Chef, etc.). Add to this the monitoring infrastructure (say, Nagios) and you definitely have information overload.

Inadequate training just adds to the stress. First of all, corporations no longer want to pay for it. So you are on your own and need to do it mostly in your free time, as the workload is substantial in most organizations. Of course, a summer "dead season" at least partially exists, but it is rather short. Using free or low-cost courses if they are available, or buying your own books and trying to learn new stuff from them, is of course the mark of any good sysadmin, but it should not be the only source of new knowledge. Communication with colleagues who have a high level of knowledge in selected areas is as important, or even more important. But this is very difficult, as sysadmins often work in isolation. Professional groups such as Linux user groups exist mostly in the metropolitan areas of large cities. Coronavirus made those groups even more problematic.

The days when you could travel to a vendor training center for a week and have a chance to communicate with admins from other organizations (which probably was the most valuable part of the whole exercise) are long in the past. I can tell you that training by Sun (Solaris) and IBM (AIX) in the late 1990s was of really high quality, given by highly qualified instructors from whom you could learn a lot outside the main topic of the course. Unlike "Trump University," Sun courses could probably have been called "Sun University." Most training now is via the Web, and the chances for face-to-face communication have disappeared. Also, the stress has shifted from learning "why" to learning "how"; the "why" topics are typically reserved for "advanced" courses.

Also, there is the necessity to relearn stuff again and again, even when the new technologies/daemons/versions of the OS are either the same as, or inferior to, the previous ones, or represent an open scam in which training is just a way to extract money from lemmings (Agile, most of the DevOps hoopla, etc.). This is the typical neoliberal mentality ("greed is good") implemented in education. There is also a tendency to treat virtual machines and cloud infrastructure as separate technologies, which require separate training and separate sets of certifications (AWS, Azure). This is a kind of infantilization of the profession, where a person who learned a lot of stuff in the previous ten years needs to forget it and relearn most of it again and again.

Of course, sysadmins are not the only ones who suffer. Computer scientists also now struggle with the excessive level of complexity and the too-quickly shifting sands. Look at the tragedy of Donald Knuth and his lifelong project to create a comprehensive monograph for system programmers (The Art of Computer Programming). He was flattened by the shifting sands and probably will not be able to finish even volume 4 (out of the seven that were planned) in his lifetime.

Of course, much depends on the evolution of hardware and the changes it brings, such as the mass introduction of large SSDs, multi-core CPUs, and large amounts of RAM.

Nobody is now surprised to see a server with 128GB of RAM, a laptop with 16GB of RAM, or a cellphone with 4GB of RAM and a 2GHz CPU. (Note that the IBM PC started with a 1 MB address space, of which only 640KB was available for programs, and a 4.77 MHz -- not GHz -- single-core CPU without a floating-point unit.) Hardware evolution, while painful, is inevitable, and it changes the software landscape. Thank God hardware progress has slowed down recently, as it has reached the physical limits of the technology (we probably will not see 2-nanometer lithography CPUs or 8GHz clock speeds in our lifetimes), and progress is now mostly measured by the number of cores packed into the same die.

Then there is another set of significant changes, caused not by the progress of hardware (or software) but mainly by fashion and by the desire of certain (and powerful) large corporations to entrench their market position. Such changes are more difficult to accept. It is difficult or even impossible to predict which technology will become fashionable tomorrow -- for example, how long DevOps will remain in fashion.

Typically such a techno-fashion lasts around a decade. After that it fades into oblivion, or is even debunked and its former idols shattered (the verification craze is a nice example here). For example, this strange re-invention of the ideas of the "glass-walls datacenter" under the banner of DevOps (and old-timers still remember that IBM datacenters were hated with a passion, and this hate created an additional non-technological incentive for mini-computers and later for the IBM PC) is characterized by a level of hype usually reserved for women's fashion. Moreover, sometimes it looks to me as if the movie The Devil Wears Prada is a subtle parable about sysadmin work.

Add to this the horrible job market, especially for university graduates and older sysadmins (see Over 50 and unemployed), and one starts to suspect that the life of the modern sysadmin is far from paradise. When you read some job descriptions on sites like Monster, Dice, or Indeed, you ask yourself whether those people really want to hire anybody, or whether the posting is just a smokescreen for H1B labor certification. The level of detail is often so precise that it is almost impossible to fit the specification. They do not care about the level of talent, and they do not want to train a suitable candidate: they want a person who fits 100% from day one. Also, positions are often available mostly in places like New York or San Francisco, where both rents and property prices are high and growing while income growth has been stagnant.

The vandalism of Unix performed by Red Hat with RHEL 7 makes the current environment somewhat unhealthy. It is clear that this was done to enhance Red Hat's marketing position, in the interests of the Red Hat and IBM brass, not in the interest of the community. This is a typical Microsoft-style trick which made dozens of high-quality books written by very talented authors instantly semi-obsolete. And the question arises whether it makes sense to write any book about RHEL administration other than for a solid advance. Of course, systemd generated some backlash, but Red Hat's position as the Microsoft of Linux allows them to shove their inferior technical decisions down users' throats. In a way it reminds me of the way Microsoft dealt with Windows 7, replacing it with Windows 10 -- essentially destroying the previous Windows interface ecosystem and putting keyboard users at a disadvantage (while preserving binary compatibility). Red Hat essentially did the same to server sysadmins.

Dr. Nikolai Bezroukov

P.S. See also

P.P.S. Here are my notes/reflections on sysadmin problems that often arise in the rather strange (and sometimes pretty toxic) IT departments of large corporations:



NEWS CONTENTS

Old News ;-)


For the list of top articles see Recommended Links section


A highly relevant quote about the life of a sysadmin: "I appreciate Woody Allen's humor because one of my safety valves is an appreciation for life's absurdities. His message is that life isn't a funeral march to the grave. It's a polka."

-- Dennis Kucinich

If you are frustrated read Admin Humor

[Feb 20, 2021] Improve your productivity with this Linux keyboard tool - Opensource.com

Feb 20, 2021 | opensource.com

AutoKey is an open source Linux desktop automation tool that, once it's part of your workflow, you'll wonder how you ever managed without it. It can be a transformative tool to improve your productivity or simply a way to reduce the physical stress associated with typing.

This article will look at how to install and start using AutoKey, cover some simple recipes you can immediately use in your workflow, and explore some of the advanced features that AutoKey power users may find attractive.

Install and set up AutoKey

AutoKey is available as a software package on many Linux distributions. The project's installation guide contains directions for many platforms, including building from source. This article uses Fedora as the operating platform.

AutoKey comes in two variants: autokey-gtk, designed for GTK-based environments such as GNOME, and autokey-qt, which is Qt-based.

You can install either variant from the command line:

sudo dnf install autokey-gtk

Once it's installed, run it by using autokey-gtk (or autokey-qt).

Explore the interface

Before you set AutoKey to run in the background and automatically perform actions, you will first want to configure it. Bring up the configuration user interface (UI):

autokey-gtk -c

AutoKey comes preconfigured with some examples. You may wish to leave them while you're getting familiar with the UI, but you can delete them if you wish.

[Screenshot: AutoKey's default configuration (Matt Bargenquast, CC BY-SA 4.0)]

The left pane contains a folder-based hierarchy of phrases and scripts. Phrases are text that you want AutoKey to enter on your behalf. Scripts are dynamic, programmatic equivalents that can be written using Python and achieve basically the same result of making the keyboard send keystrokes to an active window.

The right pane is where the phrases and scripts are built and configured.

Once you're happy with your configuration, you'll probably want to run AutoKey automatically when you log in so that you don't have to start it up every time. You can configure this in the Preferences menu ( Edit -> Preferences ) by selecting Automatically start AutoKey at login .

[Screenshot: setting AutoKey to start automatically at login (Matt Bargenquast, CC BY-SA 4.0)]

Correct common typos with AutoKey

Fixing common typos is an easy problem for AutoKey to solve. For example, I consistently type "gerp" instead of "grep." Here's how to configure AutoKey to fix these types of problems for you.

Create a new subfolder where you can group all your "typo correction" configurations. Select My Phrases in the left pane, then File -> New -> Subfolder . Name the subfolder Typos .

Create a new phrase in File -> New -> Phrase , and call it "grep."

Configure AutoKey to insert the correct word by highlighting the phrase "grep" then entering "grep" in the Enter phrase contents section (replacing the default "Enter phrase contents" text).

Next, set up how AutoKey triggers this phrase by defining an Abbreviation. Click the Set button next to Abbreviations at the bottom of the UI.

In the dialog box that pops up, click the Add button and add "gerp" as a new abbreviation. Leave Remove typed abbreviation checked; this is what instructs AutoKey to replace any typed occurrence of the word "gerp" with "grep." Leave Trigger when typed as part of a word unchecked so that if you type a word containing "gerp" (such as "fingerprint"), it won't attempt to turn that into "fingreprint." It will work only when "gerp" is typed as an isolated word.

[Feb 19, 2021] Installing only security updates via yum

The simplest way is: # yum -y update --security
For RHEL 7, the yum-plugin-security functionality is already part of yum itself; there is no need to install anything.
Jan 16, 2020 | access.redhat.com

It is now possible to limit yum to install only security updates (as opposed to bug fixes or enhancements) on Red Hat Enterprise Linux 5, 6, and 7. To do so, simply install the yum-security plugin:

For Red Hat Enterprise Linux 7 and 8

The plugin is already a part of yum itself, no need to install anything.

For Red Hat Enterprise Linux 5 and 6

# yum install yum-security
# yum list-sec
# yum list-security --security

For Red Hat Enterprise Linux 5, 6, 7 and 8

# yum updateinfo info security
# yum -y update --security

NOTE: This will install the latest available version of any package with at least one security erratum, and thus can also install non-security errata if they provide a more recent version of the package.

# yum update-minimal --security -y
# yum update --cve <CVE>

e.g.

# yum update --cve CVE-2008-0947
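A practical sequence built from the commands above is to preview the pending security errata first and then apply only the minimal fixes:

# yum updateinfo list security
# yum updateinfo info security
# yum update-minimal --security -y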

11 September 2014 5:30 PM R. Hinton Community Leader

For those seeking to discover what CVEs are addressed in a given existing RPM, try this method that Marc Milgram from Red Hat kindly provided at this discussion .

1) First download the specific rpm you are interested in.
2) Use the below command...

$ rpm -qp --changelog openssl-0.9.8e-27.el5_10.4.x86_64.rpm | grep CVE
- fix CVE-2014-0221 - recursion in DTLS code leading to DoS
- fix CVE-2014-3505 - doublefree in DTLS packet processing
- fix CVE-2014-3506 - avoid memory exhaustion in DTLS
- fix CVE-2014-3508 - fix OID handling to avoid information leak
- fix CVE-2014-3510 - fix DoS in anonymous (EC)DH handling in DTLS
- fix for CVE-2014-0224 - SSL/TLS MITM vulnerability
- fix for CVE-2013-0169 - SSL/TLS CBC timing attack (#907589)
- fix for CVE-2013-0166 - DoS in OCSP signatures checking (#908052)
  environment variable is set (fixes CVE-2012-4929 #857051)
- fix for CVE-2012-2333 - improper checking for record length in DTLS (#820686)
- fix for CVE-2012-2110 - memory corruption in asn1_d2i_read_bio() (#814185)
- fix for CVE-2012-0884 - MMA weakness in CMS and PKCS#7 code (#802725)
- fix for CVE-2012-1165 - NULL read dereference on bad MIME headers (#802489)
- fix for CVE-2011-4108 & CVE-2012-0050 - DTLS plaintext recovery
- fix for CVE-2011-4109 - double free in policy checks (#771771)
- fix for CVE-2011-4576 - uninitialized SSL 3.0 padding (#771775)
- fix for CVE-2011-4619 - SGC restart DoS attack (#771780)
- fix CVE-2010-4180 - completely disable code for
- fix CVE-2009-3245 - add missing bn_wexpand return checks (#570924)
- fix CVE-2010-0433 - do not pass NULL princ to krb5_kt_get_entry which
- fix CVE-2009-3555 - support the safe renegotiation extension and
- fix CVE-2009-2409 - drop MD2 algorithm from EVP tables (#510197)
- fix CVE-2009-4355 - do not leak memory when CRYPTO_cleanup_all_ex_data()
- fix CVE-2009-1386 CVE-2009-1387 (DTLS DoS problems)
- fix CVE-2009-1377 CVE-2009-1378 CVE-2009-1379
- fix CVE-2009-0590 - reject incorrectly encoded ASN.1 strings (#492304)
- fix CVE-2008-5077 - incorrect checks for malformed signatures (#476671)
- fix CVE-2007-3108 - side channel attack on private keys (#250581)
- fix CVE-2007-5135 - off-by-one in SSL_get_shared_ciphers (#309881)
- fix CVE-2007-4995 - out of order DTLS fragments buffer overflow (#321221)
- CVE-2006-2940 fix was incorrect (#208744)
- fix CVE-2006-2937 - mishandled error on ASN.1 parsing (#207276)
- fix CVE-2006-2940 - parasitic public keys DoS (#207274)
- fix CVE-2006-3738 - buffer overflow in SSL_get_shared_ciphers (#206940)
- fix CVE-2006-4343 - sslv2 client DoS (#206940)
- fix CVE-2006-4339 - prevent attack on PKCS#1 v1.5 signatures (#205180)
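For a package that is already installed, a similar check can be run without downloading the RPM (a small variation on the method above):

$ rpm -q --changelog openssl | grep CVE-2014-0224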
11 September 2014 5:34 PM R. Hinton Community Leader

Additionally,

If you are interested in seeing whether a given CVE, or a list of CVEs, is applicable, you can use this method:

1) Get the list of applicable CVEs from Red Hat that you wish to check.
- If you want to limit the search to a specific rpm such as "openssl", then at the above Red Hat link you can enter "openssl" and filter out only openssl items, or filter against any other search term.
- Place these into a file, one per line, such as in this limited example.
NOTE: The CVEs below come from limiting the list to "openssl" in the manner I described above; the list is not complete -- there are plenty more for your date range.

CVE-2014-0016
CVE-2014-0017
CVE-2014-0036
CVE-2014-0041
...

2) Keep in mind the information in the article on this page, and run something like the following as root (a "for" loop will work in a bash shell):

[root@yoursystem]# for i in $(cat listofcves.txt); do yum update --cve "$i"; done

If the CVE applies, it will prompt you to take the update; if it does not apply, it will tell you so.

Alternatively, I put "echo n |" before the "yum update" so that the yum command exits with "n" if it finds a hit:

[root@yoursystem]# for i in $(cat listyoumade.txt); do echo n | yum update --cve "$i"; done

Then redirect the output to another file to make your determinations.
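Putting this together, a sketch of the whole check with corrected loop syntax (listyoumade.txt is the file from the comment above; cve-check.log is just an illustrative name for the output file):

[root@yoursystem]# for i in $(cat listyoumade.txt); do echo n | yum update --cve "$i"; done > cve-check.log 2>&1
[root@yoursystem]# less cve-check.log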

7 January 2015 9:54 AM f3792625

'yum info-sec' actually lists all patches, you need to use 'yum info-sec --security'

10 February 2016 1:00 PM Rackspace Customer

How is the Severity information of RHSA updates populated?

Specifically the article shows the following output:

# yum updateinfo list
This system is receiving updates from RHN Classic or RHN Satellite.
RHSA-2014:0159 Important/Sec. kernel-headers-2.6.32-431.5.1.el6.x86_64
RHSA-2014:0164 Moderate/Sec.  mysql-5.1.73-3.el6_5.x86_64
RHSA-2014:0164 Moderate/Sec.  mysql-devel-5.1.73-3.el6_5.x86_64
RHSA-2014:0164 Moderate/Sec.  mysql-libs-5.1.73-3.el6_5.x86_64
RHSA-2014:0164 Moderate/Sec.  mysql-server-5.1.73-3.el6_5.x86_64
RHBA-2014:0158 bugfix         nss-sysinit-3.15.3-6.el6_5.x86_64
RHBA-2014:0158 bugfix         nss-tools-3.15.3-6.el6_5.x86_64

On all of my systems, the output seems to be missing the severity information:

# yum updateinfo list
This system is receiving updates from RHN Classic or RHN Satellite.
RHSA-2014:0159 security       kernel-headers-2.6.32-431.5.1.el6.x86_64
RHSA-2014:0164 security       mysql-5.1.73-3.el6_5.x86_64
RHSA-2014:0164 security       mysql-devel-5.1.73-3.el6_5.x86_64
RHSA-2014:0164 security       mysql-libs-5.1.73-3.el6_5.x86_64
RHSA-2014:0164 security       mysql-server-5.1.73-3.el6_5.x86_64
RHBA-2014:0158 bugfix         nss-sysinit-3.15.3-6.el6_5.x86_64
RHBA-2014:0158 bugfix         nss-tools-3.15.3-6.el6_5.x86_64

I can't see how to configure it to transform "security" to "Severity/Sec."

20 September 2016 8:27 AM Walid Shaari

Same here. What I did was use info-sec with filters, like below:

test-node# yum info-sec|grep  'Critical:'
  Critical: glibc security and bug fix update
  Critical: samba and samba4 security, bug fix, and enhancement update
  Critical: samba security update
  Critical: samba security update
  Critical: nss and nspr security, bug fix, and enhancement update
  Critical: nss, nss-util, and nspr security update
  Critical: nss-util security update
  Critical: samba4 security update
20 June 2017 1:49 PM b.scalio

What's annoying is that "yum update --security" shows 20 packages to update for security, but when listing the installable errata in Satellite it shows 102 errata available, and yum does not seem to be aware of many of those errata.

20 June 2017 2:05 PM Pavel Moravec

You might hit https://bugzilla.redhat.com/show_bug.cgi?id=1408508 where the generated metadata has an empty package list for some errata in some circumstances, causing yum to think such an erratum is not applicable (as no package would be updated by applying that erratum).

I recommend finding one of the errata that the Sat WebUI offers but yum isn't aware of, and (z)grep'ing that errata id within the yum cache - if there is something like:

<pkglist>
  <collection short="">
    <name>rhel-7-server-rpms__7Server__x86_64</name>
  </collection>
</pkglist>

with no package in it, you hit that bug.
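A rough sketch of that check (the cache location below is an assumption and varies with release, repo, and Satellite setup; adjust the path and the errata id to your system):

# find /var/cache/yum -name '*updateinfo*' | xargs zgrep -l 'RHSA-2014:0164'

Any matching file that contains an empty <pkglist> for that errata indicates the bug described above.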

14 August 2017 1:25 AM PixelDrift.NET Support Community Leader

I've got an interesting requirement in that a customer wants to only allow updates of packages with attached security errata (to limit unnecessary drift/updates of the OS platform), i.e., restrict, warn, or block the use of a generic 'yum update' by an admin, as it will update all packages.

There are other approaches which I have currently implemented, including limiting what is made available to the servers through Satellite so yum update doesn't 'see' non-security errata.. but I guess what I'm really interested in is limiting (through client config) the inadvertent use of "yum update" by an administrator, or redirecting/mapping 'yum update' to 'yum update --security'. I appreciate an admin can work around any restriction, but it's really to limit accidental use of a full 'yum update' by well-intentioned admins.

Current approaches are to alias yum, move yum and write a shim in its place (to warn/redirect if yum update is called), or patch the yum package itself (which I'd like to avoid). Any other suggestions appreciated.

16 January 2018 5:00 PM DSI POMONA

Why not create a specific content view for security patching purposes?

In that content view, you create a filter that selects only security updates.

In your patch management process, you can create a script that changes the content view of a host (or host group) on the fly, then applies security patches, and finally switches back to the original content view (if you allow the admin to install additional programs when necessary).

Hope this helps

15 August 2019 12:12 AM IT Accounts NCVER

Hi,

Is it necessary to reboot the system after applying security updates?

15 August 2019 1:17 AM Marcus West

If it's a kernel update, you will have to. For other packages, it's recommended, to ensure that you are not still running the old libraries in memory. If you are just patching one particular independent service (e.g., httpd), you can probably get away without a full system reboot.

More information can be found in the solution "Which packages require a system reboot after the update?".
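A quick supplementary check, assuming a reasonably recent yum-utils package (which provides the needs-restarting helper) is installed:

# needs-restarting -r
# needs-restarting

The first form (available on RHEL 7) reports whether a full reboot is recommended; the second lists processes that are still running with old versions of updated libraries.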

[Feb 03, 2021] A new useful buzzword -- Hyper-converged infrastructure

Feb 03, 2021 | en.wikipedia.org

From Wikipedia, the free encyclopedia

Hyper-converged infrastructure (HCI) is a software-defined IT infrastructure that virtualizes all of the elements of conventional "hardware-defined" systems. HCI includes, at a minimum, virtualized computing (a hypervisor), software-defined storage, and virtualized networking (software-defined networking). HCI typically runs on commercial off-the-shelf (COTS) servers.

The primary difference between converged infrastructure (CI) and hyper-converged infrastructure is that in HCI, both the storage area network and the underlying storage abstractions are implemented virtually in software (at or via the hypervisor) rather than physically, in hardware. Because all of the software-defined elements are implemented within the context of the hypervisor, management of all resources can be federated (shared) across all instances of a hyper-converged infrastructure.

Expected benefits

Hyperconvergence evolves away from discrete, hardware-defined systems that are connected and packaged together toward a purely software-defined environment where all functional elements run on commercial, off-the-shelf (COTS) servers, with the convergence of elements enabled by a hypervisor. [1] [2] HCI infrastructures are usually made up of server systems equipped with Direct-Attached Storage (DAS). [3] HCI includes the ability to plug and play into a data-center pool of like systems. [4] [5] All physical data-center resources reside on a single administrative platform for both hardware and software layers. [6] Consolidation of all functional elements at the hypervisor level, together with federated management, eliminates traditional data-center inefficiencies and reduces the total cost of ownership (TCO) for data centers. [7] [8] [9]

Potential impact

The potential impact of the hyper-converged infrastructure is that companies will no longer need to rely on different compute and storage systems, though it is still too early to prove that it can replace storage arrays in all market segments. [10] It is likely to further simplify management and increase resource-utilization rates where it does apply. [11] [12] [13]

[Feb 02, 2021] A guide to the systemd journal cleanup process

Images removed. See the original for full version.
Jan 29, 2021 | www.debugpoint.com

... ... ...

The systemd journal Maintenance

Using the journalctl utility of systemd, you can query these logs, perform various operations on them. For example, viewing the log files from different boots, check for last warnings, errors from a specific process or applications. If you are unaware of these, I would suggest you quickly go through this tutorial – "use journalctl to View and Analyze Systemd Logs [With Examples] " before you follow this guide.

Where are the physical journal log files?

The systemd's journald daemon collects logs from every boot. That means, it classifies the log files as per the boot.

The logs are stored in binary form under /var/log/journal, in a folder named after the machine id.


Also, remember that, depending on the system configuration, runtime journal files are stored at /run/log/journal/. These are removed on each boot.

Can I manually delete the log files?

You can, but don't do it. Instead, follow the below instructions to clear the log files to free up disk space using journalctl utilities.

How much disk space is used by systemd log files?

Open up a terminal and run the below command.

journalctl --disk-usage

This shows how much disk space is actually used by the log files on your system.

If you have a graphical desktop environment, you can open the file manager and browse to the path /var/log/journal and check the properties.

systemd journal clean process

The effective way of limiting the log files is through the journald.conf configuration file. Ideally, you should not manually delete the log files, even though journalctl provides utilities to do that.

Let's take a look at how you can delete the files manually; then I will explain the configuration changes in journald.conf so that you do not need to delete files manually from time to time. Instead, systemd takes care of it automatically based on your configuration.

Manual delete

First, you have to flush and rotate the log files. Rotating is a way of marking the current active log files as archives and creating fresh log files from this moment on. The flush switch asks the journal daemon to flush any log data stored in /run/log/journal/ into /var/log/journal/, if persistent storage is enabled.


Then, after flush and rotate, you need to run journalctl with the vacuum-size, vacuum-time, and vacuum-files switches to force systemd to clear the logs.

Example 1:

sudo journalctl --flush --rotate
sudo journalctl --vacuum-time=1s

The above set of commands removes all archived journal log files older than one second, which effectively clears everything. So be careful when running these commands.


You can also append a suffix to the number to specify the units, as shown in the sketch below.
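A quick sketch (assuming the usual journalctl size and time suffixes, such as M and G for sizes, and h, days, weeks, or months for time spans):

sudo journalctl --vacuum-time=2weeks
sudo journalctl --vacuum-size=1G

The first command keeps only the last two weeks of archived journals; the second keeps roughly the last 1G of archived journals.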

Example 2:

sudo journalctl --flush --rotate
sudo journalctl --vacuum-size=400M

This clears all archived journal log files and retains only the last 400MB of files. Remember, this switch applies to archived log files only, not to active journal files. You can use suffixes here as well.

Example 3:

sudo journalctl --flush --rotate
sudo journalctl --vacuum-files=2

The vacuum-files switch clears all the journal files below the number specified. So, in the above example, only the last 2 journal files are kept and everything else is removed. Again, this only works on the archived files.

You can combine the switches if you want, but I would recommend not to. However, make sure to run with --rotate switch first.

Automatic delete using config files

While the above methods are good and easy to use, it is recommended that you control the journal log file cleanup process using the journald configuration file, which is located at /etc/systemd/journald.conf.

systemd provides many parameters for you to effectively manage the log files. By combining these parameters you can effectively limit the disk space used by the journal files. Let's take a look.

The main journald.conf parameters (each shown with an example value):

SystemMaxUse -- the maximum disk space the journal may use in persistent storage (e.g., SystemMaxUse=500M)
SystemKeepFree -- the amount of space the journal should leave free when adding entries to persistent storage (e.g., SystemKeepFree=100M)
SystemMaxFileSize -- how large an individual journal file may grow in persistent storage before being rotated (e.g., SystemMaxFileSize=100M)
RuntimeMaxUse -- the maximum disk space that can be used in volatile storage, within the /run filesystem (e.g., RuntimeMaxUse=100M)
RuntimeKeepFree -- the amount of space set aside for other uses when writing to volatile storage, within the /run filesystem (e.g., RuntimeKeepFree=100M)
RuntimeMaxFileSize -- the amount of space an individual journal file may take up in volatile storage, within the /run filesystem, before being rotated (e.g., RuntimeMaxFileSize=200M)
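As a minimal sketch (the values below are illustrative, not recommendations), the relevant section of /etc/systemd/journald.conf could look like this:

[Journal]
SystemMaxUse=500M
SystemMaxFileSize=100M
RuntimeMaxUse=100M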

If you add these values to /etc/systemd/journald.conf on a running system, then you have to restart journald after updating the file. To restart it, use the following command.

sudo systemctl restart systemd-journald
Verification of log files

It is wiser to check the integrity of the log files after you clean them up. To do that, run the command below. It shows PASS or FAIL for each journal file.

journalctl --verify

... ... ...

[Feb 02, 2021] 5 Most Notable Open Source Centralized Log Management Tools, by James Kiarie

Feb 01, 2021 | www.tecmint.com

... ... ...

1. Elastic Stack ( Elasticsearch Logstash & Kibana)

Elastic Stack, commonly abbreviated as ELK, is a popular three-in-one log centralization, parsing, and visualization tool that centralizes large sets of data and logs from multiple servers into one server.

ELK stack comprises 3 different products:

Logstash

Logstash is a free and open-source data pipeline that collects logs and events data and even processes and transforms the data to the desired output. Data is sent to Logstash from remote servers using agents called 'beats'. The 'beats' ship a huge volume of system metrics and logs to Logstash, whereupon they are processed. It then feeds the data to Elasticsearch.

Elasticsearch

Built on Apache Lucene , Elasticsearch is an open-source and distributed search and analytics engine for nearly all types of data – both structured and unstructured. This includes textual, numerical, and geospatial data.

It was first released in 2010. Elasticsearch is the central component of the ELK stack and is renowned for its speed, scalability, and REST APIs. It stores, indexes, and analyzes huge volumes of data passed on from Logstash .

Kibana

Data is finally passed on to Kibana, which is a WebUI visualization platform that runs alongside Elasticsearch. Kibana allows you to explore and visualize time-series data and logs from Elasticsearch. It visualizes data and logs on intuitive dashboards which take various forms such as bar graphs, pie charts, histograms, etc.

Related Read : How To Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS/RHEL 8/7

2. Graylog

Graylog is yet another popular and powerful centralized log management tool that comes with both open-source and enterprise plans. It accepts data from clients installed on multiple nodes and, just like Kibana , visualizes the data on dashboards on a web interface.

Graylog plays a monumental role in making business decisions touching on user interaction with a web application. It collects vital analytics on the app's behavior and visualizes the data on various graphs such as bar graphs, pie charts, and histograms, to mention a few. The data collected informs key business decisions.

For example, you can determine peak hours when customers place orders using your web application. With such insights in hand, the management can make informed business decisions to scale up revenue.

Unlike Elasticsearch, Graylog offers a single-application solution for data collection, parsing, and visualization. It removes the need to install multiple components, unlike the ELK stack where you have to install individual components separately. Graylog collects and stores data in MongoDB, which is then visualized on user-friendly and intuitive dashboards.

Graylog is widely used by developers in different phases of app deployment in tracking the state of web applications and obtaining information such as request times, errors, etc. This helps them to modify the code and boost performance.

3. Fluentd

Written in C, Fluentd is a cross-platform and opensource log monitoring tool that unifies log and data collection from multiple data sources. It's completely opensource and licensed under the Apache 2.0 license. In addition, there's a subscription model for enterprise use.

Fluentd processes both structured and semi-structured sets of data. It analyzes application logs, events logs, clickstreams and aims to be a unifying layer between log inputs and outputs of varying types.

It structures data in a JSON format allowing it to seamlessly unify all facets of data logging including the collection, filtering, parsing, and outputting logs across multiple nodes.

Fluentd comes with a small footprint and is resource-friendly, so you won't have to worry about running out of memory or your CPU being overutilized. Additionally, it boasts of a flexible plugin architecture where users can take advantage of over 500 community-developed plugins to extend its functionality.

4. LOGalyze

LOGalyze is a powerful network monitoring and log management tool that collects and parses logs from network devices, Linux, and Windows hosts. It was initially commercial but is now completely free to download and install without any limitations.

LOGalyze is ideal for analyzing server and application logs and presents them in various report formats such as PDF, CSV, and HTML. It also provides extensive search capabilities and real-time event detection of services across multiple nodes.

Like the aforementioned log monitoring tools, LOGalyze also provides a neat and simple web interface that allows users to log in and monitor various data sources and analyze log files .

5. NXlog

NXlog is yet another powerful and versatile tool for log collection and centralization. It's a multi-platform log management utility that is tailored to pick up policy breaches, identify security risks and analyze issues in system, application, and server logs.

NXlog has the capability of collating event logs from numerous endpoints in varying formats, including Syslog and Windows event logs. It can perform a range of log-related tasks such as log rotation, log rewrites, and log compression, and can also be configured to send alerts.

You can download NXlog in two editions: The community edition, which is free to download, and use, and the enterprise edition which is subscription-based.


[Jan 27, 2021] Make Bash history more useful with these tips by Seth Kenlon

Notable quotes:
"... Manipulating history is usually less dangerous than it sounds, especially when you're curating it with a purpose in mind. For instance, if you're documenting a complex problem, it's often best to use your session history to record your commands because, by slotting them into your history, you're running them and thereby testing the process. Very often, documenting without doing leads to overlooking small steps or writing minor details wrong. ..."
Jun 25, 2020 | opensource.com

To block a command from being added to the history, you can place a space before the command, as long as you have ignorespace in your HISTCONTROL environment variable (a short sketch of this follows the example below). You can also delete a specific history entry by its number with the -d option:

$ history | tail
535 echo "foo"
536 echo "bar"
$ history -d 536
$ history | tail
535 echo "foo"
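For the ignorespace behavior mentioned above, a minimal sketch (plain bash; note the leading space in the second command):

$ export HISTCONTROL=ignorespace
$  echo "this will not be saved"

Because of the leading space, the echo command is never added to the history.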

You can clear your entire session history with the -c option:

$ history -c
$ history
$

History lessons

Manipulating history is usually less dangerous than it sounds, especially when you're curating it with a purpose in mind. For instance, if you're documenting a complex problem, it's often best to use your session history to record your commands because, by slotting them into your history, you're running them and thereby testing the process. Very often, documenting without doing leads to overlooking small steps or writing minor details wrong.

Use your history sessions as needed, and exercise your power over history wisely. Happy history hacking!

[Jan 03, 2021] 9 things to do in your first 10 minutes on a new-to-you server

Jan 03, 2021 | opensource.com

1. First contact

As soon as I log into a server, the first thing I do is check whether it has the operating system, kernel, and hardware architecture needed for the tests I will be running. I often check how long a server has been up and running. While this does not matter very much for a test system because it will be rebooted multiple times, I still find this information helpful.

Use the following commands to get this information. I mostly use Red Hat Linux for testing, so if you are using another Linux distro, use *-release in the filename instead of redhat-release :

cat /etc/redhat-release
uname -a
hostnamectl
uptime

2. Is anyone else on board?

Once I know that the machine meets my test needs, I need to ensure no one else is logged into the system at the same time running their own tests. Although it is highly unlikely, given that the provisioning system takes care of this for me, it's still good to check once in a while -- especially if it's my first time logging into a server. I also check whether there are other users (other than root) who can access the system.

Use the following commands to find this information. The last command looks for users in the /etc/passwd file who have shell access; it skips other services in the file that do not have shell access or have a shell set to nologin :

who
who -Hu
grep 'sh$' /etc/passwd

3. Physical or virtual machine

Now that I know I have the machine to myself, I need to identify whether it's a physical machine or a virtual machine (VM). If I provisioned the machine myself, I could be sure that I have what I asked for. However, if you are using a machine that you did not provision, you should check whether the machine is physical or virtual.

Use the following commands to identify this information. If it's a physical system, you will see the vendor's name (e.g., HP, IBM, etc.) and the make and model of the server; whereas, in a virtual machine, you should see KVM, VirtualBox, etc., depending on what virtualization software was used to create the VM:

dmidecode -s system-manufacturer
dmidecode -s system-product-name
lshw -c system | grep product | head -1
cat /sys/class/dmi/id/product_name
cat /sys/class/dmi/id/sys_vendor

4. Hardware

Because I often test hardware connected to the Linux machine, I usually work with physical servers, not VMs. On a physical machine, my next step is to identify the server's hardware capabilities -- for example, what kind of CPU is running, how many cores does it have, which flags are enabled, and how much memory is available for running tests. If I am running network tests, I check the type and capacity of the Ethernet or other network devices connected to the server.

Use the following commands to display the hardware connected to a Linux server. Some of the commands might be deprecated in newer operating system versions, but you can still install them from yum repos or switch to their equivalent new commands:

lscpu or cat /proc/cpuinfo
lsmem or cat /proc/meminfo
ifconfig -a
ethtool <devname>
lshw
lspci
dmidecode

5. Installed software

Testing software always requires installing additional dependent packages, libraries, etc. However, before I install anything, I check what is already installed (including what version it is), as well as which repos are configured, so I know where the software comes from, and I can debug any package installation issues.

Use the following commands to identify what software is installed:

rpm -qa
rpm -qa | grep <pkgname>
rpm -qi <pkgname>
yum repolist
yum repoinfo
yum install <pkgname>
ls -l /etc/yum.repos.d/

6. Running processes and services

Once I check the installed software, it's natural to check what processes are running on the system. This is crucial when running a performance test on a system -- if a running process, daemon, test software, etc. is eating up most of the CPU/RAM, it makes sense to stop that process before running the tests. This also checks that the processes or daemons the test requires are up and running. For example, if the tests require httpd to be running, the service to start the daemon might not have run even if the package is installed.

Use the following commands to identify running processes and enabled services on your system:

pstree -pa 1
ps -ef
ps auxf
systemctl

7. Network connections

Today's machines are heavily networked, and they need to communicate with other machines or services on the network. I identify which ports are open on the server, whether there are any connections from the network to the test machine, whether a firewall is enabled and, if so, whether it is blocking any ports, and which DNS servers the machine talks to.

Use the following commands to identify network services-related information. If a deprecated command is not available, install it from a yum repo or use the equivalent newer command:

netstat -tulpn
netstat -anp
lsof -i
ss
iptables -L -n
cat /etc/resolv.conf

8. Kernel

When doing systems testing, I find it helpful to know kernel-related information, such as the kernel version and which kernel modules are loaded. I also list any tunable kernel parameters and what they are set to and check the options used when booting the running kernel.

Use the following commands to identify this information:

uname -r
cat /proc/cmdline
lsmod
modinfo <module>
sysctl -a
cat /boot/grub2/grub.cfg
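
Taken together, the checks above lend themselves to a small wrapper script. The sketch below is not part of the original article; it is a minimal, hypothetical helper that assumes the commands listed above are installed and that it is run as root, and it simply collects the same information into a single report file:

#!/bin/bash
# new-system-survey.sh -- minimal sketch; run as root so dmidecode and ss -p have full visibility

out=/tmp/system-survey-$(hostname -s).txt

{
    echo "=== Platform (physical or virtual) ==="
    dmidecode -s system-manufacturer
    dmidecode -s system-product-name

    echo "=== Hardware ==="
    lscpu
    lsmem 2>/dev/null || cat /proc/meminfo
    lspci

    echo "=== Installed software and repos ==="
    yum repolist
    rpm -qa | sort

    echo "=== Processes and services ==="
    ps auxf
    systemctl list-units --type=service --state=running

    echo "=== Network ==="
    ss -tulpn
    cat /etc/resolv.conf

    echo "=== Kernel ==="
    uname -r
    cat /proc/cmdline
    sysctl -a
} > "$out" 2>&1

echo "Report written to $out"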

[Jan 02, 2021] 10 shortcuts to master bash by Guest Contributor

TechRepublic

If you've ever typed a command at the Linux shell prompt, you've probably already used bash -- after all, it's the default command shell on most modern GNU/Linux distributions.

The bash shell is the primary interface to the Linux operating system -- it accepts, interprets and executes your commands, and provides you with the building blocks for shell scripting and automated task execution.

Bash's unassuming exterior hides some very powerful tools and shortcuts. If you're a heavy user of the command line, these can save you a fair bit of typing. This document outlines 10 of the most useful tools:

  1. Easily recall previous commands

    Bash keeps track of the commands you execute in a history buffer, and allows you to recall previous commands by cycling through them with the Up and Down cursor keys. For even faster recall, "speed search" previously-executed commands by typing the first few letters of the command followed by the key combination Ctrl-R; bash will then scan the command history for matching commands and display them on the console. Type Ctrl-R repeatedly to cycle through the entire list of matching commands.

  2. Use command aliases

    If you always run a command with the same set of options, you can have bash create an alias for it. This alias will incorporate the required options, so that you don't need to remember them or manually type them every time. For example, if you always run ls with the -l option to obtain a detailed directory listing, you can use this command:

    bash> alias ls='ls -l' 

    This creates an alias that automatically includes the -l option. Once the alias has been created, typing ls at the bash prompt will invoke it and produce the ls -l output. (Several of these settings can be made permanent; see the combined ~/.bashrc example after this list.)

    You can obtain a list of available aliases by invoking alias without any arguments, and you can delete an alias with unalias.

  3. Use filename auto-completion

    Bash supports filename auto-completion at the command prompt. To use this feature, type the first few letters of the file name, followed by Tab. Bash will scan the current directory (or, for the first word on the line, the directories in your search path) for matches to that name. If a single match is found, bash will automatically complete the filename for you. If multiple matches are found, bash completes as much as is unambiguous, and pressing Tab again lists the possible completions.

  4. Use key shortcuts to efficiently edit the command line

    Bash supports a number of keyboard shortcuts for command-line navigation and editing. The Ctrl-A key shortcut moves the cursor to the beginning of the command line, while the Ctrl-E shortcut moves the cursor to the end of the command line. The Ctrl-W shortcut deletes the word immediately before the cursor, while the Ctrl-K shortcut deletes everything immediately after the cursor. You can undo a deletion with Ctrl-Y.

  5. Get automatic notification of new mail

    You can configure bash to automatically notify you of new mail, by setting the $MAILPATH variable to point to your local mail spool. For example, the command:

    bash> MAILPATH='/var/spool/mail/john'
    bash> export MAILPATH 

    This causes bash to print a notification on john's console every time a new message is appended to john's mail spool.

  6. Run tasks in the background

    Bash lets you run one or more tasks in the background, and selectively suspend or resume any of the current tasks (or "jobs"). To run a task in the background, add an ampersand (&) to the end of its command line. Here's an example:

    bash> tail -f /var/log/messages &
    [1] 614

    Each task backgrounded in this manner is assigned a job ID, which is printed to the console. A task can be brought back to the foreground with the command fg jobnumber, where jobnumber is the job ID of the task you wish to bring to the foreground. Here's an example:

    bash> fg 1

    A list of active jobs can be obtained at any time by typing jobs at the bash prompt.

  7. Quickly jump to frequently-used directories

    You probably already know that the $PATH variable lists bash's "search path" -- the directories it will search when it can't find the requested file in the current directory. However, bash also supports the $CDPATH variable, which lists the directories the cd command will look in when attempting to change directories. To use this feature, assign a directory list to the $CDPATH variable, as shown in the example below:

    bash> CDPATH='.:~:/usr/local/apache/htdocs:/disk1/backups'
    bash> export CDPATH

    Now, whenever you use the cd command, bash will check all the directories in the $CDPATH list for matches to the directory name.

  8. Perform calculations

    Bash can perform simple arithmetic operations at the command prompt. To use this feature, simply type in the arithmetic expression you wish to evaluate at the prompt within double parentheses, as illustrated below. Bash will attempt to perform the calculation and return the answer.

    bash> echo $((16/2))
    8
  9. Customise the shell prompt

    You can customise the bash shell prompt to display -- among other things -- the current username and host name, the current time, the load average and/or the current working directory. To do this, alter the $PS1 variable, as below:

    bash> PS1='\u@\h:\w \@> '
    
    bash> export PS1
    root@medusa:/tmp 03:01 PM>

    This will display the name of the currently logged-in user, the host name, the current working directory and the current time at the shell prompt. You can obtain a list of symbols understood by bash from its manual page.

  10. Get context-specific help

    Bash comes with help for all built-in commands. To see a list of all built-in commands, type help. To obtain help on a specific command, type help command, where command is the command you need help on. Here's an example:

    bash> help alias
    ...some help text...

    Obviously, you can obtain detailed help on the bash shell by typing man bash at your command prompt at any time.
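
Several of these shortcuts (aliases, mail notification, CDPATH, and the custom prompt) are usually made permanent by adding them to ~/.bashrc. The snippet below is only an illustration of that idea, not part of the original article; adjust the paths and the prompt to your own environment:

# ~/.bashrc additions -- a minimal sketch combining tips 2, 5, 7 and 9

# Tip 2: aliases for commands you always run with the same options
alias ls='ls -l'

# Tip 5: mail notification (the spool path is an example; use your own)
MAILPATH="/var/spool/mail/$USER"
export MAILPATH

# Tip 7: directories the cd command should search
CDPATH='.:~:/usr/local/apache/htdocs:/disk1/backups'
export CDPATH

# Tip 9: show user@host:cwd and the time in the prompt
PS1='\u@\h:\w \@> '
export PS1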

[Jan 02, 2021] How to convert from CentOS or Oracle Linux to RHEL

convert2rhel is an RPM package which contains a Python 2.x script written in a completely incomprehensible, over-modularized manner. Python obscurantism in action ;-)
It looks like a "black box" tool unless you know Python well. As such, it is dangerous to rely upon.
Jan 02, 2021 | access.redhat.com

[Jan 02, 2021] Linux sysadmin basics- Start NIC at boot

Nov 14, 2019 | www.redhat.com

If you've ever booted a Red Hat-based system and have no network connectivity, you'll appreciate this quick fix.


Image: "Fast Ethernet PCI Network Interface Card SN5100TX.jpg" by Jana.Wiki, licensed under CC BY-SA 3.0

It might surprise you to know that if you forget to flip the network interface card (NIC) switch to the ON position (shown in the image below) during installation, your Red Hat-based system will boot with the NIC disconnected:

Image: Setting the NIC to the ON position during installation.

But, don't worry, in this article I'll show you how to set the NIC to connect on every boot and I'll show you how to disable/enable your NIC on demand.

If your NIC isn't enabled at startup, you have to edit the /etc/sysconfig/network-scripts/ifcfg-NIC_name file, where NIC_name is your system's NIC device name. In my case, it's enp0s3. Yours might be eth0, eth1, em1, etc. List your network devices and their IP addresses with the ip addr command:

$ ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:81:d0:2d brd ff:ff:ff:ff:ff:ff
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff

Note that my primary NIC (enp0s3) has no assigned IP address. I have virtual NICs because my Red Hat Enterprise Linux 8 system is a VirtualBox virtual machine. After you've figured out what your physical NIC's name is, you can now edit its interface configuration file:

$ sudo vi /etc/sysconfig/network-scripts/ifcfg-enp0s3

and change the ONBOOT="no" entry to ONBOOT="yes" as shown below:

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="dhcp"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="enp0s3"
UUID="77cb083f-2ad3-42e2-9070-697cb24edf94"
DEVICE="enp0s3"
ONBOOT="yes"

Save and exit the file.

You don't need to reboot to start the NIC, but after you make this change, the primary NIC will be on and connected upon all subsequent boots.
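
If you would rather not open an editor, the same change can be scripted. The following is just a sketch: the sed line assumes ONBOOT is currently set to "no" exactly as shown above, and the nmcli line assumes NetworkManager manages a connection named enp0s3:

$ sudo sed -i.bak 's/^ONBOOT="no"/ONBOOT="yes"/' /etc/sysconfig/network-scripts/ifcfg-enp0s3

$ sudo nmcli connection modify enp0s3 connection.autoconnect yes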

To enable the NIC, use the ifup command:

ifup enp0s3

Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)

Now the ip addr command displays the enp0s3 device with an IP address:

$ ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:81:d0:2d brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.64/24 brd 192.168.1.255 scope global dynamic noprefixroute enp0s3
       valid_lft 86266sec preferred_lft 86266sec
    inet6 2600:1702:a40:88b0:c30:ce7e:9319:9fe0/64 scope global dynamic noprefixroute 
       valid_lft 3467sec preferred_lft 3467sec
    inet6 fe80::9b21:3498:b83c:f3d4/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff

To disable a NIC, use the ifdown command. Please note that issuing this command from a remote system will terminate your session:

ifdown enp0s3

Connection 'enp0s3' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)

That's a wrap

It's frustrating to encounter a Linux system that has no network connection. It's even more frustrating to have to connect to a virtual KVM or walk up to the console to fix it. It's easy to miss the switch during installation; I've missed it myself. Now you know how to fix the problem and have your system network-connected on every boot, so before you drive yourself crazy with troubleshooting steps, try the ifup command to see if that's your easy fix.

Takeaways: ifup, ifdown, /etc/sysconfig/network-scripts/ifcfg-NIC_name

[Jan 02, 2021] Looking forward to Linux network configuration in the initial ramdisk (initrd)

Nov 24, 2020 | www.redhat.com
The need for an initrd

When you press a machine's power button, the boot process starts with a hardware-dependent mechanism that loads a bootloader . The bootloader software finds the kernel on the disk and boots it. Next, the kernel mounts the root filesystem and executes an init process.

This process sounds simple, and it might be what actually happens on some Linux systems. However, modern Linux distributions have to support a vast set of use cases for which this procedure is not adequate.

First, the root filesystem could be on a device that requires a specific driver. Before trying to mount the filesystem, the right kernel module must be inserted into the running kernel. In some cases, the root filesystem is on an encrypted partition and therefore needs a userspace helper that asks the user for the passphrase and feeds it to the kernel. Or, the root filesystem could be shared over the network via NFS or iSCSI, and mounting it may first require configured IP addresses and routes on a network interface.

[ You might also like: Linux networking: 13 uses for netstat ]

To overcome these issues, the bootloader can pass to the kernel a small filesystem image (the initrd) that contains scripts and tools to find and mount the real root filesystem. Once this is done, the initrd switches to the real root, and the boot continues as usual.

The dracut infrastructure

On Fedora and RHEL, the initrd is built through dracut . From its home page , dracut is "an event-driven initramfs infrastructure. dracut (the tool) is used to create an initramfs image by copying tools and files from an installed system and combining it with the dracut framework, usually found in /usr/lib/dracut/modules.d ."

A note on terminology: Sometimes, the names initrd and initramfs are used interchangeably. They actually refer to different ways of building the image. An initrd is an image containing a real filesystem (for example, ext2) that gets mounted by the kernel. An initramfs is a cpio archive containing a directory tree that gets unpacked as a tmpfs. Nowadays, the initrd images are deprecated in favor of the initramfs scheme. However, the initrd name is still used to indicate the boot process involving a temporary filesystem.

Kernel command-line

Let's revisit the NFS-root scenario that was mentioned before. One possible way to boot via NFS is to use a kernel command-line containing the root=dhcp argument.

The kernel command-line is a list of options passed to the kernel from the bootloader, accessible to the kernel and applications. If you use GRUB, it can be changed by pressing the e key on a boot entry and editing the line starting with linux .

The dracut code inside the initramfs parses the kernel command-line and starts DHCP on all interfaces if the command-line contains root=dhcp . After obtaining a DHCP lease, dracut configures the interface with the parameters received (IP address and routes); it also extracts the value of the root-path DHCP option from the lease. The option carries an NFS server's address and path (which could be, for example, 192.168.50.1:/nfs/client ). Dracut then mounts the NFS share at this location and proceeds with the boot.

If there is no DHCP server providing the address and the NFS root path, the values can be configured explicitly in the command line:

root=nfs:192.168.50.1:/nfs/client ip=192.168.50.101:::24::ens2:none

Here, the first argument specifies the NFS server's address, and the second configures the ens2 interface with a static IP address.

There are two syntaxes to specify network configuration for an interface:

ip=<interface>:{dhcp|on|any|dhcp6|auto6}[:[<mtu>][:<macaddr>]]

ip=<client-IP>:[<peer>]:<gateway-IP>:<netmask>:<client_hostname>:<interface>:{none|off|dhcp|on|any|dhcp6|auto6|ibft}[:[<mtu>][:<macaddr>]]

The first can be used for automatic configuration (DHCP or IPv6 SLAAC), and the second for static configuration or a combination of automatic and static. Here are some examples:

ip=enp1s0:dhcp
ip=192.168.10.30::192.168.10.1:24::enp1s0:none
ip=[2001:0db8::02]::[2001:0db8::01]:64::enp1s0:none

Note that if you pass an ip= option, but dracut doesn't need networking to mount the root filesystem, the option is ignored. To force network configuration without a network root, add rd.neednet=1 to the command line.
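
For example, the following (an illustration based on the options just described, not a line taken from the article) forces dracut to configure enp1s0 via DHCP even though the root filesystem is local:

ip=enp1s0:dhcp rd.neednet=1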

You probably noticed that among automatic configuration methods, there is also ibft . iBFT stands for iSCSI Boot Firmware Table and is a mechanism to pass parameters about iSCSI devices from the firmware to the operating system. iSCSI (Internet Small Computer Systems Interface) is a protocol to access network storage devices. Describing iBFT and iSCSI is outside the scope of this article. What is important is that by passing ip=ibft to the kernel, the network configuration is retrieved from the firmware.

Dracut also supports adding custom routes, specifying the machine name and DNS servers, creating bonds, bridges, VLANs, and much more. See the dracut.cmdline man page for more details.

Network modules

The dracut framework included in the initramfs has a modular architecture. It comprises a series of modules, each containing scripts and binaries to provide specific functionality. You can see which modules are available to be included in the initramfs with the command dracut --list-modules .

At the moment, there are two modules to configure the network: network-legacy and network-manager . You might wonder why different modules provide the same functionality.

network-legacy is older and uses shell scripts calling utilities like iproute2 , dhclient , and arping to configure interfaces. After the switch to the real root, a different network configuration service runs. This service is not aware of what the network-legacy module intended to do and the current state of each interface. This can lead to problems maintaining the state across the root switch boundary.

A prominent example of state to be kept is the DHCP lease. If an interface's address changed during the boot, the connection to an NFS share would break, causing a boot failure.

To ensure a seamless transition, there is a need for a mechanism to pass the state between the two environments. However, passing the state between services having different configuration models can be a problem.

The network-manager dracut module was created to improve this situation. The module runs NetworkManager in the initrd to configure connection profiles generated from the kernel command-line. Once done, NetworkManager serializes its state, which is later read by the NetworkManager instance in the real root.

Fedora 31 was the first distribution to switch to network-manager in initrd by default. On RHEL 8.2, network-legacy is still the default, but network-manager is available. On RHEL 8.3, dracut will use network-manager by default.

Enabling a different network module

While the two modules should be largely compatible, there are some differences in behavior. Some of those are documented in the nm-initrd-generator man page. In general, it is suggested to use the network-manager module when NetworkManager is enabled.

To rebuild the initrd using a specific network module, use one of the following commands:

# dracut --add network-legacy  --force --verbose
# dracut --add network-manager --force --verbose

Since this change will be reverted the next time the initrd is rebuilt, you may want to make the change permanent in the following way:

# echo 'add_dracutmodules+=" network-manager "' > /etc/dracut.conf.d/network-module.conf
# dracut --regenerate-all --force --verbose

The --regenerate-all option also rebuilds all the initramfs images for the kernel versions found on the system.
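
After regenerating, one way to double-check which network module actually ended up in an image is to grep the lsinitrd listing. This is just a quick sanity check, assuming the dracut module names appear in the lsinitrd output:

# lsinitrd | grep -E 'network-manager|network-legacy'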

The network-manager dracut module

As with all dracut modules, the network-manager module is split into stages that are called at different times during the boot (see the dracut.modules man page for more details).

The first stage parses the kernel command-line by calling /usr/libexec/nm-initrd-generator to produce a list of connection profiles in /run/NetworkManager/system-connections . The second part of the module runs after udev has settled, i.e., after userspace has finished handling the kernel events for devices (including network interfaces) found in the system.

When NM is started in the real root environment, it registers on D-Bus, configures the network, and remains active to react to events or D-Bus requests. In the initrd, NetworkManager is run in the configure-and-quit=initrd mode, which doesn't register on D-Bus (since it's not available in the initrd, at least for now) and exits after reaching the startup-complete event.

The startup-complete event is triggered after all devices with a matching connection profile have tried to activate, successfully or not. Once all interfaces are configured, NM exits and calls dracut hooks to notify other modules that the network is available.

Note that the /run/NetworkManager directory containing generated connection profiles and other runtime state is copied over to the real root so that the new NetworkManager process running there knows exactly what to do.

Troubleshooting

If you have network issues in dracut, this section contains some suggestions for investigating the problem.

The first thing to do is add rd.debug to the kernel command-line, enabling debug logging in dracut. Logs are saved to /run/initramfs/rdsosreport.txt and are also available in the journal.

If the system doesn't boot, it is useful to get a shell inside the initrd environment to manually check why things aren't working. For this, there is an rd.break command-line argument. Note that the argument spawns a shell when the initrd has finished its job and is about to give control to the init process in the real root filesystem. To stop at a different stage of dracut (for example, after command-line parsing), use the following argument:

rd.break={cmdline|pre-udev|pre-trigger|initqueue|pre-mount|mount|pre-pivot|cleanup}

The initrd image contains a minimal set of binaries; if you need a specific tool at the dracut shell, you can rebuild the image, adding what is missing. For example, to add the ping and tcpdump binaries (including all their dependent libraries), run:

# dracut -f  --install "ping tcpdump"

and then optionally verify that they were included successfully:

# lsinitrd | grep "ping\|tcpdump"
Arguments: -f --install 'ping tcpdump'
-rwxr-xr-x   1 root     root        82960 May 18 10:26 usr/bin/ping
lrwxrwxrwx   1 root     root           11 May 29 20:35 usr/sbin/ping -> ../bin/ping
-rwxr-xr-x   1 root     root      1065224 May 29 20:35 usr/sbin/tcpdump
The generator

If you are familiar with NetworkManager configuration, you might want to know how a given kernel command-line is translated into NetworkManager connection profiles. This can be useful to better understand the configuration mechanism and find syntax errors in the command-line without having to boot the machine.

The generator is installed in /usr/libexec/nm-initrd-generator and must be called with the list of kernel arguments after a double dash. The --stdout option prints the generated connections on standard output. Let's try to call the generator with a sample command line:

$ /usr/libexec/nm-initrd-generator --stdout -- \
          ip=enp1s0:dhcp:00:99:88:77:66:55 rd.peerdns=0

802-3-ethernet.cloned-mac-address: '99:88:77:66:55' is not a valid MAC address

In this example, the generator reports an error because there is a missing field for the MTU after enp1s0 . Once the error is corrected, the parsing succeeds and the tool prints out the connection profile generated:

$ /usr/libexec/nm-initrd-generator --stdout -- \
        ip=enp1s0:dhcp::00:99:88:77:66:55 rd.peerdns=0

*** Connection 'enp1s0' ***

[connection]
id=enp1s0
uuid=e1fac965-4319-4354-8ed2-39f7f6931966
type=ethernet
interface-name=enp1s0
multi-connect=1
permissions=

[ethernet]
cloned-mac-address=00:99:88:77:66:55
mac-address-blacklist=

[ipv4]
dns-search=
ignore-auto-dns=true
may-fail=false
method=auto

[ipv6]
addr-gen-mode=eui64
dns-search=
ignore-auto-dns=true
method=auto

[proxy]

Note how the rd.peerdns=0 argument translates into the ignore-auto-dns=true property, which makes NetworkManager ignore DNS servers received via DHCP. An explanation of NetworkManager properties can be found on the nm-settings man page.

[ Network getting out of control? Check out Network automation for everyone, a free book from Red Hat . ]

Conclusion

The NetworkManager dracut module is enabled by default in Fedora and will also soon be enabled on RHEL. It brings better integration between networking in the initrd and NetworkManager running in the real root filesystem.

While the current implementation is working well, there are some ideas for possible improvements. One is to abandon the configure-and-quit=initrd mode and run NetworkManager as a daemon started by a systemd service. In this way, NetworkManager will be run in the same way as when it's run in the real root, reducing the code to be maintained and tested.

To completely drop the configure-and-quit=initrd mode, NetworkManager should also be able to register on D-Bus in the initrd. Currently, dracut doesn't have any module providing a D-Bus daemon because the image should be minimal. However, there are already proposals to include it as it is needed to implement some new features.

With D-Bus running in the initrd, NetworkManager's powerful API will be available to other tools to query and change the network state, unlocking a wide range of applications. One of those is to run nm-cloud-setup in the initrd. The service, shipped in the NetworkManager-cloud-setup Fedora package, fetches metadata from cloud providers' infrastructure (EC2, Azure, GCP) to automatically configure the network.

[Jan 02, 2021] 11 Linux command line guides you shouldn't be without - Enable Sysadmin

Jan 02, 2021 | www.redhat.com

Here are some brief comments about each topic:

  1. How to use the Linux mtr command - The mtr (My Traceroute) command is a major improvement over the old traceroute and is one of my first go-to tools when troubleshooting network problems.
  2. Linux for beginners: 10 commands to get you started at the terminal - Everyone who works on the Linux CLI needs to know some basic commands for moving around the directory structure and exploring files and directories. This article covers those commands in a simple way that places them into a usable context for those of us new to the command line.
  3. Linux for beginners: 10 more commands for manipulating files - One of the most common tasks we all do, whether as a Sysadmin or a regular user, is to manage and manipulate files.
  4. More stupid Bash tricks: Variables, find, file descriptors, and remote operations - These tricks are actually quite smart, and if you want to learn the basics of Bash along with standard IO streams (STDIO), this is a good place to start.
  5. Getting started with systemctl - Do you need to enable, disable, start, and stop systemd services? Learn the basics of systemctl – a powerful tool for managing systemd services and more.
  6. How to use the uniq command to process lists in Linux - Ever had a list in which items can appear multiple times where you only need to know which items appear in the list but not how many times?
  7. A beginner's guide to gawk - gawk is a command line tool that can be used for simple text processing in Bash and other scripts. It is also a powerful language in its own right.
  8. An introduction to the diff command - Sometimes it is important to know the difference.
  9. Looking forward to Linux network configuration in the initial ramdisk (initrd) - The initrd is a critical part of the very early boot process for Linux. Here is a look at what it is and how it works.
  10. Linux troubleshooting: Setting up a TCP listener with ncat - Network troubleshooting sometimes requires tracking specific network packets based on complex filter criteria or just determining whether a connection can be made.
  11. Hard links and soft links in Linux explained - The use cases for hard and soft links can overlap but it is how they differ that makes them both important – and cool.

[Jan 02, 2021] Reference file descriptors

Jan 02, 2021 | www.redhat.com

In the Bash shell, file descriptors (FDs) are important in managing the input and output of commands. Many people have issues understanding file descriptors correctly. Each process has three default file descriptors, namely:

Code  Meaning          Location      Description
0     Standard input   /dev/stdin    Keyboard, file, or some stream
1     Standard output  /dev/stdout   Monitor, terminal, display
2     Standard error   /dev/stderr   Error messages (commands that fail with a non-zero exit code usually write here); displayed on the terminal

Now that you know what the default FDs do, let's see them in action. I start by creating a directory named foo , which contains file1 .

$> ls foo/ bar/
ls: cannot access 'bar/': No such file or directory
foo/:
file1

The output No such file or directory goes to Standard Error (stderr) and is also displayed on the screen. I will run the same command, but this time redirect stderr to /dev/null to discard it:

$> ls foo/ bar/ 2>/dev/null
foo/:
file1

It is possible to send the output of foo to Standard Output (stdout) and to a file simultaneously, and ignore stderr. For example:

$> { ls foo bar | tee -a ls_out_file ;} 2>/dev/null
foo:
file1

Then:

$> cat ls_out_file
foo:
file1

The following command sends stdout to a file and stderr to /dev/null so that the error won't display on the screen:

$> ls foo/ bar/ >to_stdout 2>/dev/null
$> cat to_stdout
foo/:
file1

The following command sends stdout and stderr to the same file:

$> ls foo/ bar/ >mixed_output 2>&1
$> cat mixed_output
ls: cannot access 'bar/': No such file or directory
foo/:
file1

This is what happened in the last example, where stdout and stderr were redirected to the same file:

    ls foo/ bar/ >mixed_output 2>&1
             |          |
             |          Redirect stderr to where stdout is sent
             |                                                        
             stdout is sent to mixed_output

Another short trick (> Bash 4.4) to send both stdout and stderr to the same file uses the ampersand sign. For example:

$> ls foo/ bar/ &>mixed_output

Here is a more complex redirection:

exec 3>&1 >write_to_file; echo "Hello World"; exec 1>&3 3>&-

This is what occurs:

    exec 3>&1 duplicates the current stdout (the terminal) as file descriptor 3.
    >write_to_file then redirects stdout to the file, so the echo output lands in write_to_file instead of on the screen.
    exec 1>&3 restores stdout from the copy saved in FD 3.
    3>&- closes FD 3, which is no longer needed.

Often it is handy to group commands, and then send the Standard Output to a single file. For example:

$> { ls non_existing_dir; non_existing_command; echo "Hello world"; } 2> to_stderr
Hello world

As you can see, only "Hello world" is printed on the screen, but the output of the failed commands is written to the to_stderr file.
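
Building on the grouping example above, a common variation (shown here purely as an illustration) is to keep stdout and stderr of the whole group in separate log files instead of discarding the errors:

$> { ls non_existing_dir; echo "Hello world"; } >group_out.log 2>group_err.log

Afterwards, group_out.log contains "Hello world", while group_err.log holds the "cannot access" error message.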

[Jan 01, 2021] Looks like Oracle can potentially pick up as much as 65% of CentOS users

Jan 01, 2021 | forums.centos.org

What do you think of the recent Red Hat announcement about CentOS Linux/Stream?

    I can use either CentOS Linux or Stream and it makes no difference to me -- 6 votes (11%)
    I will switch reluctantly to CentOS Stream but I'd rather not -- 2 votes (4%)
    I depend on CentOS Linux 8 and its stability and now I need a new alternative -- 10 votes (19%)
    I love the idea of CentOS Stream and can't wait to use it -- 1 vote (2%)
    I'm off to a different distribution before CentOS 8 sunsets at the end of 2021 -- 13 votes (24%)
    I feel completely betrayed by this decision and will avoid Red Hat solutions in future -- 22 votes (41%)

Total votes: 54

[Jan 01, 2021] Oracle Linux DTrace

Jan 01, 2021 | www.oracle.com

... DTrace gives the operational insights that have long been missing in the data center, such as memory consumption, CPU time or what specific function calls are being made.

Developers can learn about and experiment with DTrace on Oracle Linux by installing the appropriate RPMs:

[Jan 01, 2021] Oracle Linux vs. Red Hat Enterprise Linux by Jim Brull

Jan 05, 2019 | www.centroid.com

... ... ...

Here's what we found.

[Jan 01, 2021] Consider looking at openSUSE (still run out of Germany)

Jan 01, 2021 | www.reddit.com

If you are on CentOS-7 then you will probably be okay until RedHat pulls the plug on 2024-06-30, so don't do anything rash. If you are on CentOS-8 then your days are numbered (to ~365), because this OS will shift from major-minor point updates to a streaming model at the end of 2021.

Let's look at two early founders: SUSE started in Germany in 1991, whilst RedHat started in America a year later. SUSE sells support for SLE (SUSE Linux Enterprise), which means you need a license to install-run-update-upgrade it. Likewise, RedHat sells support for RHEL (Red Hat Enterprise Linux). SUSE also offers "openSUSE Leap" (released once a year as a major-minor point release of SLE) and "openSUSE Tumbleweed" (which is a streaming thingy).

A couple of days ago I installed "openSUSE Leap" onto an old HP-Compaq 6000 desktop just to try it out (the installer actually had a few features I liked better than the CentOS-7 installer). When I get back to the office in two weeks, I'm going to try installing "openSUSE Leap" onto an HP-DL385p_gen8. I'll work with this for a few months and, if I am comfortable, I will migrate my employer's solution over to "openSUSE Leap".

Parting thoughts:

  1. openSUSE is run out of Germany. IMHO switching over to a European distro is similar to those database people who preferred MariaDB to MySQL when Oracle was still hoping that MySQL would die from neglect.

  2. Someone cracked off to me the other day that now that IBM is pulling strings at "Red Hat", that the company should be renamed "Blue Hat"


general-noob 4 points · 3 days ago

I downloaded and tried it last week and was actually pretty impressed. I have only ever tested SUSE in the past. Honestly, I'll stick with Red Hat/CentOS whatever, but I was still impressed. I'd recommend people take a look.

servingwater 2 points · 3 days ago

I have been playing with OpenSUSE a bit, too. Very solid this time around. In the past I never had any luck with it. But Leap 15.2 is doing fine for me. Just testing it virtually. TW also is pretty sweet and if I were to use a rolling release, it would be among the top contenders.

One thing I don't like with openSUSE is that you can't really disable the root account -- or at least you are not supposed to, I guess. You can't do it at install: if you leave the root account blank, SUSE will just assign the password of the user you created to it.
Of course, afterwards you can disable it with the proper commands, but it becomes a pain with YaST, as YaST seems to insist on being opened by root.

neilrieck 2 points · 2 days ago

Thanks for that heads-up about root

gdhhorn 1 point · 2 days ago

One thing I don't like with OpenSUSE is that you can't really, or are not supposed to I guess, disable the root account. You can't do it at install, if you leave the root account blank suse, will just assign the password for the user you created to it.

I'm running Leap 15.2 on the laptops my kids run for school. During installation, I simply deselected the option for the account used to be an administrator; this required me to set a different password for administrative purposes.

Perhaps I'm misunderstanding your comment.

servingwater 1 point · 2 days ago

I think you might.
My point is/was that if I choose my regular user to be the admin, I don't expect the system to create and activate a root account anyway and then just assign it my password.
I expect the root account to be disabled.

gdhhorn 2 points · 2 days ago

I didn't realize it made a user, 'root,' and auto generated a password. I'd always assumed if I said to make the user account admin, that was it.

TIL, thanks.

servingwater 1 point · 2 days ago

I was surprised, too. I was a bit "shocked" when I realized, after the install, that I could log in as root with my user password.
At the very least, IMHO, it should then still have you set the root password, even if you choose to make your user the admin.
For one, it lets you know that openSUSE is not disabling root, and two, it gives you a chance to give root a different password.
But other than that subjective issue I found openSUSE Leap a very solid distro.

[Jan 01, 2021] What about the big academic labs? (Fermilab, CERN, DESY, etc)

Jan 01, 2021 | www.reddit.com

The big academic labs (Fermilab, CERN and DESY, to name only three of many) used to run something called Scientific Linux, which was also maintained by Red Hat. See: https://scientificlinux.org/ and https://en.wikipedia.org/wiki/Scientific_Linux Shortly after Red Hat acquired CentOS in 2014, Red Hat convinced the big academic labs to begin migrating over to CentOS (no one at that time thought that Red Hat would become Blue Hat).

phil_g 14 points · 2 days ago

To clarify, as a user of Scientific Linux:

Scientific Linux is not and was not maintained by Red Hat. Like CentOS, when it was truly a community distribution, Scientific Linux was an independent rebuild of the RHEL source code published by Red Hat. It is maintained primarily by people at Fermilab. (It's slightly different from CentOS in that CentOS aimed for binary compatibility with RHEL, while that is not a goal of Scientific Linux. In practice, SL often achieves binary compatibility, but if you have issues with that, it's more up to you to fix them than the SL maintainers.)

I don't know anything about Red Hat convincing institutions to stop using Scientific Linux; the first I heard about the topic was in April 2019 when Fermilab announced there would be no Scientific Linux 8. (They may reverse that decision. At the moment, they're "investigating the best path forward", with a decision to be announced in the first few months of 2021.)

neilrieck 4 points · 2 days ago

I fear you are correct. I just stumbled onto this article: https://www.linux.com/training-tutorials/scientific-linux-great-distro-wrong-name/ Even the Wikipedia article states "This product is derived from the free and open-source software made available by Red Hat, but is not produced, maintained or supported by them." But it does seem that Scientific Linux was created as a replacement for Fermilab Linux. I've also seen references to CC7 meaning "CERN CentOS 7". CERN is keeping their Linux page up to date, because what I am seeing here ( https://linux.web.cern.ch/ ) today is not what I saw two weeks ago.

There are

Niarbeht 16 points · 2 days ago

There are

Uh oh, guys, they got him!

deja_geek 9 points · 2 days ago

RedHat didn't convince them to stop using Scientific Linux, Fermilab no longer needed to have their own rebuild of RHEL sources. They switched to CentOS and modified CentOS if they needed to (though I don't really think they needed to)

meat_bunny 10 points · 2 days ago

Maintaining your own distro is a pain in the ass.

My crystal ball says they'll just use whatever RHEL rebuild floats to the top in a few months like the rest of us.

carlwgeorge 2 points · 2 days ago

SL has always been an independent rebuild. It has never been maintained, sponsored, or owned by Red Hat. They decided on their own to not build 8 and instead collaborate on CentOS. They even gained representation on the CentOS board (one from Fermi, one from CERN).

I'm not affiliated with any of those organizations, but my guess is they will switch to some combination of CentOS Stream and RHEL (under the upcoming no/low cost program).

VestoMSlipher 1 point · 11 hours ago

https://linux.web.cern.ch/#information-on-change-of-end-of-life-for-centos-8

[Jan 01, 2021] CentOS HAS BEEN CANCELLED !!!

Jan 01, 2021 | forums.centos.org

Re: CentOS HAS BEEN CANCELLED !!!

Post by whoop " 2020/12/08 20:00:36

Is anybody considering switching to RHEL's free non-production developer subscription? As I understand it, it is free and receives updates.
The only downside as I understand it is that you have to renew your license every year (and that you can't use it in commercial production).

[Jan 01, 2021] package management - yum distro-sync

Jan 01, 2021 | askubuntu.com

In redhat-based distros, the yum tool has a distro-sync command which will synchronize packages to the current repositories. This command is useful for returning to a base state if base packages have been modified from an outside source. The docs for the command is:

distribution-synchronization or distro-sync: Synchronizes the installed package set with the latest packages available. This is done by either obsoleting, upgrading or downgrading as appropriate. This will "normally" do the same thing as the upgrade command; however, if you have the package FOO installed at version 4, and the latest available is only version 3, then this command will downgrade FOO to version 3.
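
As a usage note (not part of the quoted documentation), the command can be run for the whole system or limited to specific packages:

# yum distro-sync
# yum distro-sync FOO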

[Dec 30, 2020] Switching from CentOS to Oracle Linux: a hands-on example

In view of such effective and free promotion of Oracle Linux by IBM/Red Hat brass as the top replacement for CentOS, the script can probably be slightly enhanced.
The script works well for simple systems, but still has some sharp edges. Checks for common bottlenecks should be added. For example, free space in /boot should be checked if this is a separate filesystem; that is not done. Also, if the script is invoked a second time after a failure of the step "Installing base packages for Oracle Linux...", it can remove hundreds of system RPMs (including sshd, cron, and several other vital packages ;-).
Failures on this step are probably the most common type of failure in conversion. Inexperienced sysadmins, or even experienced sysadmins in a hurry, often make this blunder by running the script a second time.
It probably happens due to the presence of the line 'yum remove -y "${new_releases[@]}"' in the function remove_repos: in their excessive zeal to restore the system after an error, the programmers did not understand that in certain situations the packages they want to delete via yum have dependencies, and a lot of them (line 65 in the current version of the script). Yum blindly deletes over 300 packages, including such vital ones as sshd, cron, etc. Because of this, execution of the script should probably be blocked if Oracle repositories are already present. This check is absent.
After this "mass extinction of RPM packages" event, you need to be pretty well versed in yum to recover. The names of the deleted packages are in the yum log, so you can reinstall them, and sometimes that helps. In other cases the system remains unbootable and a restore from backup is the only option.
Due to the sudden surge of popularity of Oracle Linux caused by the Red Hat CentOS 8 fiasco, the script definitely can benefit from better diagnostics. The current diagnostics are very rudimentary. It might also make sense to make the steps modular in the classic /etc/init.d fashion and make initial steps skippable, so that the script can be resumed after an error. Most of the steps have few dependencies, which can be resolved by saving variables during the first run and sourcing them if the first step is not step 1.
Also, it makes sense to check the amount of free space in the /boot filesystem if /boot is a separate filesystem. The script requires approximately 100MB of free space in this filesystem. Failure to write a new kernel to it due to lack of free space leads to a "half-baked" installation, which is difficult to recover without senior sysadmin skills.
See additional considerations about how to enhance the script at http://www.softpanorama.org/Commercial_linuxes/Oracle_linux/conversion_of_centos_to_oracle_linux.shtml
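
Along the lines of the checks suggested above, a pre-flight wrapper could refuse to run when the obvious preconditions are not met. The following is only a sketch of that idea and is not part of the official script; the repo file name pattern and the 100MB threshold are assumptions taken from the comments above:

#!/bin/bash
# preflight.sh -- hypothetical sanity checks to run before centos2ol.sh

# Refuse to run if Oracle Linux repos are already configured (i.e., a second run
# after a partial conversion); the file name pattern is an assumption.
if ls /etc/yum.repos.d/oracle*.repo >/dev/null 2>&1; then
    echo "Oracle Linux repos already present; aborting to avoid mass package removal." >&2
    exit 1
fi

# If /boot is a separate filesystem, require roughly 100MB free for the new kernel.
if mountpoint -q /boot; then
    free_kb=$(df -Pk /boot | awk 'NR==2 {print $4}')
    if [ "$free_kb" -lt 102400 ]; then
        echo "Less than ~100MB free in /boot; free some space before converting." >&2
        exit 1
    fi
fi

echo "Pre-flight checks passed."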
Dec 15, 2020 Simon Coter Blog

... ... ...

We published a blog post earlier this week that explains why , but here is the TL;DR version:

For these reasons, we created a simple script to allow users to switch from CentOS to Oracle Linux about five years ago. This week, we moved the script to GitHub to allow members of the CentOS community to help us improve and extend the script to cover more CentOS respins and use cases.

The script can switch CentOS Linux 6, 7 or 8 to the equivalent version of Oracle Linux. Let's take a look at just how simple the process is.

Download the centos2ol.sh script from GitHub

The simplest way to get the script is to use curl :

$ curl -O https://raw.githubusercontent.com/oracle/centos2ol/main/centos2ol.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 10747 100 10747 0 0 31241 0 --:--:-- --:--:-- --:--:-- 31241

If you have git installed, you could clone the git repository from GitHub instead.
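
For reference, the clone would look something like this (the repository path is taken from the raw URL above):

$ git clone https://github.com/oracle/centos2ol.git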

Run the centos2ol.sh script to switch to Oracle Linux

To switch to Oracle Linux, just run the script as root using sudo :

$ sudo bash centos2ol.sh

Sample output of script run .

As part of the process, the default kernel is switched to the latest release of Oracle's Unbreakable Enterprise Kernel (UEK) to enable extensive performance and scalability improvements to the process scheduler, memory management, file systems, and the networking stack. We also replace the existing CentOS kernel with the equivalent Red Hat Compatible Kernel (RHCK) which may be required by any specific hardware or application that has imposed strict kernel version restrictions.

Switching the default kernel (optional)

Once the switch is complete, but before rebooting, the default kernel can be changed back to the RHCK. First, use grubby to list all installed kernels:

[demo@c8switch ~]$ sudo grubby --info=ALL | grep ^kernel
[sudo] password for demo:
kernel="/boot/vmlinuz-5.4.17-2036.101.2.el8uek.x86_64"
kernel="/boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64"
kernel="/boot/vmlinuz-4.18.0-193.el8.x86_64"
kernel="/boot/vmlinuz-0-rescue-0dbb9b2f3c2744779c72a28071755366"

In the output above, the first entry (index 0) is UEK R6, based on the mainline kernel version 5.4. The second kernel is the updated RHCK (Red Hat Compatible Kernel) installed by the switch process, the third one is the kernel that was installed by CentOS, and the final entry is the rescue kernel.

Next, use grubby to verify that UEK is currently the default boot option:

[demo@c8switch ~]$ sudo grubby --default-kernel
/boot/vmlinuz-5.4.17-2036.101.2.el8uek.x86_64

To replace the default kernel, you need to specify either the path to its vmlinuz file or its index. Use grubby to get that information for the replacement:

[demo@c8switch ~]$ sudo grubby --info /boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64
index=1
kernel="/boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64"
args="ro crashkernel=auto resume=/dev/mapper/cl-swap rd.lvm.lv=cl/root rd.lvm.lv=cl/swap rhgb quiet $tuned_params"
root="/dev/mapper/cl-root"
initrd="/boot/initramfs-4.18.0-240.1.1.el8_3.x86_64.img $tuned_initrd"
title="Oracle Linux Server (4.18.0-240.1.1.el8_3.x86_64) 8.3"
id="0dbb9b2f3c2744779c72a28071755366-4.18.0-240.1.1.el8_3.x86_64"

Finally, use grubby to change the default kernel, either by providing the vmlinuz path:

[demo@c8switch ~]$ sudo grubby --set-default /boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64
The default is /boot/loader/entries/0dbb9b2f3c2744779c72a28071755366-4.18.0-240.1.1.el8_3.x86_64.conf with index 1 and kernel /boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64

Or its index:

[demo@c8switch ~]$ sudo grubby --set-default-index 1
The default is /boot/loader/entries/0dbb9b2f3c2744779c72a28071755366-4.18.0-240.1.1.el8_3.x86_64.conf with index 1 and kernel /boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64

Changing the default kernel can be done at any time, so we encourage you to take UEK for a spin before switching back.

It's easy to access, try it out.

For more information visit oracle.com/linux .

[Dec 30, 2020] Lazy Linux: 10 essential tricks for admins by Vallard Benincosa

The original link to the article by Vallard Benincosa, published on 20 Jul 2008 in IBM developerWorks, disappeared due to yet another reorganization of the IBM website that killed old content. Money-greedy incompetents are what the current upper IBM managers really are...
Jul 20, 2008 | benincosa.com

How to be a more productive Linux systems administrator

Learn these 10 tricks and you'll be the most powerful Linux® systems administrator in the universe...well, maybe not the universe, but you will need these tips to play in the big leagues. Learn about SSH tunnels, VNC, password recovery, console spying, and more. Examples accompany each trick, so you can duplicate them on your own systems.

The best systems administrators are set apart by their efficiency. And if an efficient systems administrator can do a task in 10 minutes that would take another mortal two hours to complete, then the efficient systems administrator should be rewarded (paid more) because the company is saving time, and time is money, right?

The trick is to prove your efficiency to management. While I won't attempt to cover that trick in this article, I will give you 10 essential gems from the lazy admin's bag of tricks. These tips will save you time -- and even if you don't get paid more money to be more efficient, you'll at least have more time to play Halo.

Trick 1: Unmounting the unresponsive DVD drive

The newbie states that when he pushes the Eject button on the DVD drive of a server running a certain Redmond-based operating system, it will eject immediately. He then complains that, in most enterprise Linux servers, if a process is running in that directory, then the ejection won't happen. For too long as a Linux administrator, I would reboot the machine and get my disk on the bounce if I couldn't figure out what was running and why it wouldn't release the DVD drive. But this is ineffective.

Here's how you find the process that holds your DVD drive and eject it to your heart's content: First, simulate it. Stick a disk in your DVD drive, open up a terminal, and mount the DVD drive:

# mount /media/cdrom
# cd /media/cdrom
# while [ 1 ]; do echo "All your drives are belong to us!"; sleep 30; done

Now open up a second terminal and try to eject the DVD drive:

# eject

You'll get a message like:

umount: /media/cdrom: device is busy

Before you free it, let's find out who is using it.

# fuser /media/cdrom

You see the process that is holding it and, indeed, it is our fault we cannot eject the disk.

Now, if you are root, you can exercise your godlike powers and kill processes:

# fuser -k /media/cdrom

Boom! Just like that, freedom. Now solemnly unmount the drive:

# eject

fuser is good.

Trick 2: Getting your screen back when it's hosed

Try this:

# cat /bin/cat

Behold! Your terminal looks like garbage. Everything you type looks like you're looking into the Matrix. What do you do?

You type reset . But wait you say, typing reset is too close to typing reboot or shutdown . Your palms start to sweat -- especially if you are doing this on a production machine.

Rest assured: You can do it with the confidence that no machine will be rebooted. Go ahead, do it:

# reset

Now your screen is back to normal. This is much better than closing the window and then logging in again, especially if you just went through five machines to SSH to this machine.

Trick 3: Collaboration with screen

David, the high-maintenance user from product engineering, calls: "I need you to help me understand why I can't compile supercode.c on these new machines you deployed."

"Fine," you say. "What machine are you on?"

David responds: " Posh." (Yes, this fictional company has named its five production servers in honor of the Spice Girls.) OK, you say. You exercise your godlike root powers and on another machine become David:

# su - david

Then you go over to posh:

# ssh posh

Once you are there, you run:

# screen -S foo

Then you holler at David:

"Hey David, run the following command on your terminal: # screen -x foo ."

This will cause your and David's sessions to be joined together in the holy Linux shell. You can type or he can type, but you'll both see what the other is doing. This saves you from walking to the other floor and lets you both have equal control. The benefit is that David can watch your troubleshooting skills and see exactly how you solve problems.

At last you both see what the problem is: David's compile script hard-coded an old directory that does not exist on this new server. You mount it, recompile, solve the problem, and David goes back to work. You then go back to whatever lazy activity you were doing before.

The one caveat to this trick is that you both need to be logged in as the same user. Other cool things you can do with the screen command include having multiple windows and split screens. Read the man pages for more on that.

But I'll give you one last tip while you're in your screen session. To detach from it and leave it open, type: Ctrl-A D . (I mean, hold down the Ctrl key and strike the A key. Then push the D key.)

You can then reattach by running the screen -x foo command again.

Trick 4: Getting back the root password

You forgot your root password. Nice work. Now you'll just have to reinstall the entire machine. Sadly enough, I've seen more than a few people do this. But it's surprisingly easy to get on the machine and change the password. This doesn't work in all cases (like if you made a GRUB password and forgot that too), but here's how you do it in a normal case with a CentOS Linux example.

First reboot the system. When it reboots you'll come to the GRUB screen as shown in Figure 1. Move the arrow key so that you stay on this screen instead of proceeding all the way to a normal boot.


Figure 1. GRUB screen after reboot

Next, select the kernel that will boot with the arrow keys, and type E to edit the kernel line. You'll then see something like Figure 2:


Figure 2. Ready to edit the kernel line

Use the arrow key again to highlight the line that begins with kernel , and press E to edit the kernel parameters. When you get to the screen shown in Figure 3, simply append the number 1 to the arguments as shown in Figure 3:


Figure 3. Append the argument with the number 1

Then press Enter , B , and the kernel will boot up to single-user mode. Once here you can run the passwd command, changing password for user root:

sh-3.00# passwd
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully

Now you can reboot, and the machine will boot up with your new password.

Trick 5: SSH back door

Many times I'll be at a site where I need remote support from someone who is blocked on the outside by a company firewall. Few people realize that if you can get out to the world through a firewall, then it is relatively easy to open a hole so that the world can come into you.

In its crudest form, this is called "poking a hole in the firewall." I'll call it an SSH back door . To use it, you'll need a machine on the Internet that you can use as an intermediary.

In our example, we'll call our machine blackbox.example.com. The machine behind the company firewall is called ginger. Finally, the machine that technical support is on will be called tech. Figure 4 explains how this is set up.


Figure 4. Poking a hole in the firewall

Here's how to proceed:

  1. Check that what you're doing is allowed, but make sure you ask the right people. Most people will cringe that you're opening the firewall, but what they don't understand is that it is completely encrypted. Furthermore, someone would need to hack your outside machine before getting into your company. Instead, you may belong to the school of "ask-for-forgiveness-instead-of-permission." Either way, use your judgment and don't blame me if this doesn't go your way.
  2. SSH from ginger to blackbox.example.com with the -R flag. I'll assume that you're the root user on ginger and that tech will need the root user ID to help you with the system. With the -R flag, you'll forward instructions of port 2222 on blackbox to port 22 on ginger. This is how you set up an SSH tunnel. Note that only SSH traffic can come into ginger: You're not putting ginger out on the Internet naked.

    You can do this with the following syntax:

    ~# ssh -R 2222:localhost:22 thedude@blackbox.example.com

    Once you are into blackbox, you just need to stay logged in. I usually enter a command like:

    thedude@blackbox:~$ while [ 1 ]; do date; sleep 300; done

    to keep the machine busy. And minimize the window.

  3. Now instruct your friends at tech to SSH as thedude into blackbox without using any special SSH flags. You'll have to give them your password:

    root@tech:~# ssh thedude@blackbox.example.com .

  4. Once tech is on the blackbox, they can SSH to ginger using the following command:

    thedude@blackbox:~$: ssh -p 2222 root@localhost

  5. Tech will then be prompted for a password. They should enter the root password of ginger.
  6. Now you and support from tech can work together and solve the problem. You may even want to use screen together! (See Trick 3.) A reusable ~/.ssh/config sketch for the reverse tunnel follows these steps.
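
For a tunnel you set up regularly, the same -R forwarding can live in ~/.ssh/config on ginger, so that a plain ssh blackbox re-creates it. This is only a sketch using standard ssh_config options, not something from the original article:

# ~/.ssh/config on ginger (sketch)
Host blackbox
    HostName blackbox.example.com
    User thedude
    RemoteForward 2222 localhost:22
    ServerAliveInterval 60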
Trick 6: Remote VNC session through an SSH tunnel

VNC, or Virtual Network Computing, has been around a long time. I typically find myself needing to use it when the remote server has some type of graphical program that is only available on that server.

For example, suppose in Trick 5 , ginger is a storage server. Many storage devices come with a GUI program to manage the storage controllers. Often these GUI management tools need a direct connection to the storage through a network that is at times kept in a private subnet. Therefore, the only way to access this GUI is to do it from ginger.

You can try SSH'ing to ginger with the -X option and launch it that way, but many times the bandwidth required is too much and you'll get frustrated waiting. VNC is a much more network-friendly tool and is readily available for nearly all operating systems.

Let's assume that the setup is the same as in Trick 5, but you want tech to be able to get VNC access instead of SSH. In this case, you'll do something similar but forward VNC ports instead. Here's what you do:

  1. Start a VNC server session on ginger. This is done by running something like:

    root@ginger:~# vncserver -geometry 1024x768 -depth 24 :99

    The options tell the VNC server to start up with a resolution of 1024x768 and a pixel depth of 24 bits per pixel. If you are using a really slow connection, setting the depth to 8 may be a better option. Using :99 specifies the display the VNC server runs on. The VNC protocol starts at port 5900, so display :99 means the server is accessible on port 5999.

    When you start the session, you'll be asked to specify a password. The user ID will be the same user that you launched the VNC server from. (In our case, this is root.)

  2. SSH from ginger to blackbox.example.com forwarding the port 5999 on blackbox to ginger. This is done from ginger by running the command:

    root@ginger:~# ssh -R 5999:localhost:5999 thedude@blackbox.example.com

    Once you run this command, you'll need to keep this SSH session open in order to keep the port forwarded to ginger. At this point if you were on blackbox, you could now access the VNC session on ginger by just running:

    thedude@blackbox:~$ vncviewer localhost:99

    That would forward the port through SSH to ginger. But we're interested in letting tech get VNC access to ginger. To accomplish this, you'll need another tunnel.

  3. From tech, you open a tunnel via SSH to forward your port 5999 to port 5999 on blackbox. This would be done by running:

    root@tech:~# ssh -L 5999:localhost:5999 thedude@blackbox.example.com

    This time the SSH flag we used was -L , which instead of pushing 5999 to blackbox, pulled from it. Once you are in on blackbox, you'll need to leave this session open. Now you're ready to VNC from tech!

  4. From tech, VNC to ginger by running the command:

    root@tech:~# vncviewer localhost:99

    Tech will now have a VNC session directly to ginger.

While the effort might seem like a bit much to set up, it beats flying across the country to fix the storage arrays. Also, if you practice this a few times, it becomes quite easy.
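
If you prefer not to keep an interactive shell open on blackbox for steps 3 and 4, a minimal variation (using only standard OpenSSH flags, and assuming the reverse tunnel from step 2 is still up) is to background the forwarding session with -f -N and then start the viewer:

root@tech:~# ssh -f -N -L 5999:localhost:5999 thedude@blackbox.example.com
root@tech:~# vncviewer localhost:99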

Let me add a trick to this trick: If tech were running the Windows® operating system and didn't have a command-line SSH client, then tech could run PuTTY. PuTTY can be set to forward SSH ports under the tunnel options in its sidebar. If the port were 5902 instead of our example of 5999, then you would enter something like what is shown in Figure 5.


Figure 5. Putty can forward SSH ports for tunneling

If this were set up, then tech could VNC to localhost:2 just as if tech were running the Linux operating system.

Trick 7: Checking your bandwidth

Imagine this: Company A has a storage server named ginger and it is being NFS-mounted by a client node named beckham. Company A has decided they really want to get more bandwidth out of ginger because they have lots of nodes they want to have NFS mount ginger's shared filesystem.

The most common and cheapest way to do this is to bond two Gigabit ethernet NICs together. This is cheapest because usually you have an extra on-board NIC and an extra port on your switch somewhere.
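
For reference, one common way to set this up on RHEL-family systems that still use the legacy network-scripts is with ifcfg files along these lines (the device names, IP address, and bonding mode are placeholders, and your switch must support the mode you choose):

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=802.3ad miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0  (and similarly for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none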

So they do this. But now the question is: How much bandwidth do they really have?

Gigabit Ethernet has a theoretical limit of roughly 125MBps. Where does that number come from? Well,

1Gb/s = 1000Mb/s ; 1000Mb/8 = 125MB ; "b" = "bits," "B" = "bytes"

(The 128MBps figure you sometimes see comes from using binary units instead: 1024Mb/8 = 128MB.)

But what is it that we actually see, and what is a good way to measure it? One tool I suggest is iperf. You can grab iperf like this:

# wget http://dast.nlanr.net/Projects/Iperf2.0/iperf-2.0.2.tar.gz

You'll need to install it on a shared filesystem that both ginger and beckham can see, or compile and install it on both nodes. I'll compile it in the home directory of the bob user, which is visible on both nodes:

tar zxvf iperf*gz
cd iperf-2.0.2
./configure --prefix=/home/bob/perf
make
make install

On ginger, run:

# /home/bob/perf/bin/iperf -s -f M

This machine will act as the server and print out performance speeds in MBps.

On the beckham node, run:

# /home/bob/perf/bin/iperf -c ginger -P 4 -f M -w 256k -t 60

You'll see output in both screens telling you what the speed is. On a normal server with a Gigabit Ethernet adapter, you will probably see about 112MBps. This is normal as bandwidth is lost in the TCP stack and physical cables. By connecting two servers back-to-back, each with two bonded Ethernet cards, I got about 220MBps.

In reality, what you see with NFS on bonded networks is around 150-160MBps. Still, this gives you a good indication that your bandwidth is going to be about what you'd expect. If you see something much less, then you should check for a problem.
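
On newer distributions the original iperf has largely been replaced by iperf3, which is packaged in most repositories. Assuming iperf3 is installed on both nodes, a roughly equivalent test (the flags differ slightly between the two tools) looks like this. On ginger:

# iperf3 -s

And on beckham:

# iperf3 -c ginger -P 4 -f M -t 60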

I recently ran into a case in which the bonding driver was used to bond two NICs that used different drivers. The performance was extremely poor, leading to about 20MBps in bandwidth, less than they would have gotten had they not bonded the Ethernet cards together!

Trick 8: Command-line scripting and utilities

A Linux systems administrator becomes more efficient by using command-line scripting with authority. This includes crafting loops and knowing how to parse data using utilities like awk, grep, and sed. There are many cases where doing so takes fewer keystrokes and lessens the likelihood of user errors.

For example, suppose you need to generate a new /etc/hosts file for a Linux cluster that you are about to install. The long way would be to add IP addresses in vi or your favorite text editor. However, it can be done by taking the already existing /etc/hosts file and appending the following to it by running this on the command line:

# P=1; for i in $(seq -w 200); do echo "192.168.99.$P n$i"; P=$(expr $P + 1);
done >>/etc/hosts

Two hundred host names, n001 through n200, will then be created with IP addresses 192.168.99.1 through 192.168.99.200. Populating a file like this by hand runs the risk of inadvertently creating duplicate IP addresses or host names, so this is a good example of using the built-in command line to eliminate user errors. Please note that this is done in the bash shell, the default in most Linux distributions.

As another example, let's suppose you want to check that the memory size is the same in each of the compute nodes in the Linux cluster. In most cases of this sort, having a distributed or parallel shell would be the best practice, but for the sake of illustration, here's a way to do this using SSH.

Assume SSH is set up to authenticate without a password. Then run:

# for num in $(seq -w 200); do ssh n$num free -tm | grep Mem | awk '{print $2}';
done | sort | uniq

A command line like this looks pretty terse. (It can be worse if you put regular expressions in it.) Let's pick it apart and uncover the mystery.

First you're doing a loop through 001-200. This padding with 0s in the front is done with the -w option to the seq command. Then you substitute the num variable to create the host you're going to SSH to. Once you have the target host, give the command to it. In this case, it's:

free -tm | grep Mem | awk '{print $2}'

That command says to run free -tm to display the memory usage in megabytes, grep for the line beginning with Mem, and use awk to print the second field -- the total memory in the node.

This operation is performed on every node.

Once you have performed the command on every node, the entire output from all 200 nodes is piped to the sort command so that all the memory values are sorted.

Finally, you eliminate duplicates with the uniq command. If every node reports the same amount of memory, you will see a single value; if two or more different values show up, at least one node does not match the rest.

This command isn't perfect. If you find that a memory value is different from what you expect, you won't know which node it was on or how many such nodes there were. Another command may need to be issued for that.
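
For example, here is a minimal tweak (assuming the same passwordless SSH setup) that tags each value with its node name and sorts by the memory column, so the odd nodes stand out immediately:

# for num in $(seq -w 200); do echo -n "n$num "; ssh n$num free -tm | grep Mem | awk '{print $2}';
done | sort -k2 -n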

What this trick does give you, though, is a fast way to check for something and quickly learn if something is wrong. This is its real value: speed to do a quick-and-dirty check.

Trick 9: Spying on the console

Some software prints error messages to the console that may not necessarily show up in your SSH session. Using the vcs devices lets you examine these. From within an SSH session, run the following command on the remote server: # cat /dev/vcs1. This will show you what is on the first console. You can also look at the other virtual terminals using /dev/vcs2, /dev/vcs3, and so on. If a user is typing on the remote system, you'll be able to see what they typed.
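
A quick way to sweep several consoles at once is a small loop along these lines (how many vcs devices exist depends on how many virtual terminals the system allocates):

# for n in 1 2 3 4; do echo "=== console $n ==="; cat /dev/vcs$n; done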

In most data farms, using a remote terminal server, KVM, or even Serial Over LAN is the best way to view this information; it also provides the additional benefit of out-of-band viewing capabilities. Using the vcs device provides a fast in-band method that may be able to save you some time from going to the machine room and looking at the console.

Trick 10: Random system information collection

In Trick 8 , you saw an example of using the command line to get information about the total memory in the system. In this trick, I'll offer up a few other methods to collect important information from the system you may need to verify, troubleshoot, or give to remote support.

First, let's gather information about the processor. This is easily done as follows:

# cat /proc/cpuinfo

This command gives you information on the processor speed, quantity, and model. Using grep in many cases can give you the desired value.

A check that I do quite often is to ascertain the quantity of processors on the system. So, if I have purchased a dual processor quad-core server, I can run:

# cat /proc/cpuinfo | grep processor | wc -l

I would then expect to see 8 as the value. If I don't, I call up the vendor and tell them to send me another processor.
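
The same count can be had with fewer keystrokes; both of the following are minimal alternatives (nproc is part of GNU coreutils, so it should be present on most modern distributions, though it reports only the CPUs available to the current process):

# grep -c ^processor /proc/cpuinfo
# nproc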

Another piece of information I may require is disk information. This can be obtained with the df command. I usually add the -h flag so that I can see the output in gigabytes or megabytes. Running # df -h also shows how the disk was partitioned.

And to end the list, here's a way to look at the firmware of your system -- a method to get the BIOS level and the firmware on the NIC.

To check the BIOS version, you can run the dmidecode command. Unfortunately, you can't easily grep for the information, so piping it through less is the simplest way to look at it. On my Lenovo T61 laptop, the output looks like this:

# dmidecode | less
...
BIOS Information
Vendor: LENOVO
Version: 7LET52WW (1.22 )
Release Date: 08/27/2007
...

This is much more efficient than rebooting your machine and looking at the POST output.
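
If you only need one field, recent versions of dmidecode can also pull a single string directly with the -s keyword option (check the dmidecode man page for the list of keywords your build supports); for example:

# dmidecode -s bios-version
# dmidecode -s bios-release-date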

To examine the driver and firmware versions of your Ethernet adapter, run ethtool :

# ethtool -i eth0
driver: e1000
version: 7.3.20-k2-NAPI
firmware-version: 0.3-0

Conclusion

There are thousands of tricks you can learn from someone who's an expert at the command line. The best way to learn them is to watch how such people work and to practice the tricks yourself.

I hope at least one of these tricks helped you learn something you didn't know. Essential tricks like these make you more efficient and add to your experience, but most importantly, tricks give you more free time to do more interesting things, like playing video games. And the best administrators are lazy because they don't like to work. They find the fastest way to do a task and finish it quickly so they can continue in their lazy pursuits.

About the author

Vallard Benincosa is a lazy Linux Certified IT professional working for the IBM Linux Clusters team. He lives in Portland, OR, with his wife and two kids.

[Dec 30, 2020] HPE ClearOS

Dec 30, 2020 | arstechnica.com

The last of the RHEL downstreams up for discussion today is Hewlett-Packard Enterprise's in-house distro, ClearOS . Hewlett-Packard makes ClearOS available as a pre-installed option on its ProLiant server line, and the company offers a free Community version to all comers.

ClearOS is an open source software platform that leverages the open source model to deliver a simplified, low cost hybrid IT experience for SMBs. The value of ClearOS is the integration of free open source technologies making it easier to use. By not charging for open source, ClearOS focuses on the value SMBs gain from the integration so SMBs only pay for the products and services they need and value.

ClearOS is mostly notable here for its association with industry giant HPE and its availability as an OEM distro on ProLiant servers. It seems to be a bit behind the times -- the most recent version is ClearOS 7.x, which is in turn based on RHEL 7. In addition to being a bit outdated compared with other options, it also appears to be a rolling release -- more comparable to CentOS Stream than to the CentOS Linux that came before it.

ClearOS is probably most interesting to small business types who might consider buying ProLiant servers with a RHEL-compatible OEM Linux pre-installed down the road.

[Dec 30, 2020] Where do I go now that CentOS Linux is gone- Check our list - Ars Technica

Dec 30, 2020 | arstechnica.com

Springdale Linux

I've seen a lot of folks mistakenly recommending the deceased Scientific Linux distro as a CentOS replacement -- that won't work, because Scientific Linux itself was deprecated in favor of CentOS. However, Springdale Linux is very similar -- like Scientific Linux, it's a RHEL rebuild distro made by and for the academic scientific community. Unlike Scientific Linux, it's still actively maintained!

Springdale Linux is maintained and made available by Princeton and Rutgers universities, who use it for their HPC projects. It has been around for quite a long time. One Springdale Linux user from Carnegie Mellon describes their own experience with Springdale (formerly PUIAS -- Princeton University Institute for Advanced Study) as a 10-year ride.

Theresa Arzadon-Labajo, one of Springdale Linux's maintainers, gave a pretty good seat-of-the-pants overview in a recent mailing list discussion :

The School of Mathematics at the Institute for Advanced Study has been using Springdale (formerly PUIAS, then PU_IAS) since its inception. All of our *nix servers and workstations (yes, workstations) are running Springdale. On the server side, everything "just works", as is expected from a RHEL clone. On the workstation side, most of the issues we run into have to do with NVIDIA drivers, and glibc compatibility issues (e.g Chrome, Dropbox, Skype, etc), but most issues have been resolved or have a workaround in place.

... Springdale is a community project, and [it] mostly comes down to the hours (mostly Josko) that we can volunteer to the project. The way people utilize Springdale varies. Some are like us and use the whole thing. Others use a different OS and use Springdale just for its computational repositories.

Springdale Linux should be a natural fit for universities and scientists looking for a CentOS replacement. It will likely work for most anyone who needs it -- but its relatively small community and firm roots in academia will probably make it the most comfortable for those with similar needs and environments.

[Dec 30, 2020] GhostBSD and a few others are spearheading a charge into the face of The Enemy, making BSD palatable for those of us steeped in Linux as the only alternative to we know who.

Dec 30, 2020 | distrowatch.com

64"best idea" ... (by Otis on 2020-12-25 19:38:01 GMT from United States)
@62 dang it BSD takes care of all that anxiety about systemd and the other bloaty-with-time worries as far as I can tell. GhostBSD and a few others are spearheading a charge into the face of The Enemy, making BSD palatable for those of us steeped in Linux as the only alternative to we know who.

[Dec 30, 2020] Scientific Linux website states that they are going to reconsider (in 1st quarter of 2021) whether they will produce a clone of rhel version 8. Previously, they stated that they would not.

Dec 30, 2020 | distrowatch.com

Centos (by David on 2020-12-22 04:29:46 GMT from United States)
I was using CentOS 8.2 on an older desktop home computer. When CentOS dropped long-term support for version 8, I was a little peeved, but not a whole lot, since it is free anyway. Out of curiosity I installed Scientific Linux 7.9 on the same computer, and it works better than CentOS 8. Then I tried installing SL 7.9 on my old laptop -- it even worked on that!

Previously, when I had tried to install CentOS 8 on the laptop, an old Dell Inspiron 1501, the graphics were garbage -- the screen displayed kind of a color mosaic -- and the keyboard/everything else was locked up. I also tried CentOS 7.9 on it, and installation from the minimal DVD produced a bunch of errors and then froze part way through.

I will stick with Scientific Linux 7 for now. In 2024 I will worry about which distro to migrate to. Note: the Scientific Linux website states that they are going to reconsider (in the 1st quarter of 2021) whether they will produce a clone of RHEL version 8. Previously, they stated that they would not.

[Dec 30, 2020] Springdale vs. CentOS

Dec 30, 2020 | distrowatch.com

52: Springdale vs. CentOS (by whoKnows on 2020-12-23 05:39:01 GMT from Switzerland)

@51 • Personal opinion only. (by R. Cain)

"Personal opinion only. [...] After all the years of using Linux, and experiencing first-hand the hobby mentality that has taken over [...], I prefer to use a distribution which has all the earmarks of [...] being developed AND MAINTAINED by a professional organization."

Yeah, your answer is exactly what I expected it to be.

The thing with Springdale is as follows: it's maintained by the very professional team of IT specialists at the Institute for Advanced Study (Princeton University) for their own needs. That's why there's no fancy website, RHEL wiki, live ISOs and such.

They also maintain several other repositories for add-on packages (computing, unsupported [with audio/video codecs] ...).

In other words, if you're a professional who needs an RHEL clone, you'll be fine with it; if you're a hobbyist who needs a how-to on everything and anything, you can still use the knowledge base of RHEL/CentOS/Oracle ...

If you're a 'small business' that needs professional support, you'd get RHEL -- unlike CentOS, Springdale is not a commercial distribution selling you support and schooling. Springdale is made by professionals, for professionals.

https://www.ias.edu/math/computing/Springdale-Linux
https://researchcomputing.princeton.edu/faq/what-is-a-cluster

[Dec 29, 2020] Migrer de CentOS à Oracle Linux : petit retour d'expérience - Le blog technique de Microlinux

Highly recommended!
Google translation
Notable quotes:
"... Free to use, free to download, free to update. Always ..."
"... Unbreakable Enterprise Kernel ..."
"... (What You Get Is What You Get ..."
Dec 30, 2020 | blog.microlinux.fr

In 2010 I had the opportunity to get my hands dirty with Oracle Linux during an installation and training engagement carried out on behalf of ASF (Highways of the South of France), which is now called Vinci Autoroutes. I had just published my Linux book ("Linux aux petits oignons") with Eyrolles, and since the CentOS 5.3 distribution on which it was based looked 99% like Oracle Linux 5.3 under the hood, I had been chosen by ASF to train their future Linux administrators.

All these years, I knew that Oracle Linux existed, as did a whole series of other Red Hat clones like CentOS, Scientific Linux, White Box Enterprise Linux, Princeton University's PUIAS project, etc. I didn't pay it much attention, since CentOS perfectly met all my server needs.

Following the disastrous announcement of the CentOS project, I had a discussion with my compatriot Michael Kofler, a Linux guru who has published a series of excellent books on our favorite operating system, and who has migrated from CentOS to Oracle Linux for the Linux administration courses he teaches at the University of Graz. This was not our first discussion on the subject, as the CentOS project had already accumulated a series of rather worrying delays in version 8 updates. In comparison, Oracle Linux does not suffer from these structural problems, so I kept this option in the back of my mind.

A problematic reputation

Oracle suffers from a problematic reputation within the free software community, for a variety of reasons. It is the company that ruined OpenOffice and Java, got its hooks into MySQL and let Solaris sink. Oracle CEO Larry Ellison has drawn attention to himself with his unabashed support for Donald Trump. As for the company's commercial policy, it has been marked by notorious aggressiveness in the hunt for patents.

On the other hand, we have free and gratis applications like VirtualBox, which run perfectly on millions of developer workstations all over the world. And then there is the very discreet Oracle Linux, which has been working perfectly and without making any noise since 2006, and which is also a free and gratis operating system.

Install Oracle Linux

For a first test, I installed Oracle Linux 7.9 and 8.3 in two virtual machines on my workstation. Since it is a Red Hat Enterprise Linux-compliant clone, the installation procedure is identical to that of RHEL and CentOS, with a few small details.

Oracle Linux Installation

Info: Normally I never pay attention to the banner ads that scroll through graphical installers. This time, though, the slogan "Free to use, free to download, free to update. Always" caught my attention.

An indestructible kernel?

Oracle Linux provides its own Linux kernel, newer than the one provided by Red Hat and named the Unbreakable Enterprise Kernel (UEK). This kernel is installed by default and replaces the older kernels provided upstream for versions 7 and 8. Here's what it looks like on Oracle Linux 7.9:

$ uname -a
Linux oracle-el7 5.4.17-2036.100.6.1.el7uek.x86_64 #2 SMP Thu Oct 29 17:04:48 
PDT 2020 x86_64 x86_64 x86_64 GNU/Linux
Well-organized package repositories

At first glance, the organization of the official and semi-official package repositories seems much clearer and better structured than under CentOS. For details, I refer you to the respective explanatory pages for the 7.x and 8.x versions.

Well-structured documentation

Like the repositories, Oracle Linux's documentation is worth mentioning here, because it is simply exemplary. The main index refers to the different versions of Oracle Linux, and from there you can access a whole series of documents in HTML and PDF formats that explain in detail the peculiarities of the system and its day-to-day management. As I work through this documentation, I keep discovering a multitude of pleasant little details, such as the fact that Oracle packages carry metadata for security updates, which is not the case for CentOS packages.

Migrating from CentOS to Oracle Linux

The "Switch your CentOS systems to Oracle Linux" web page identifies a number of reasons why Oracle Linux is a better choice than CentOS when you want a company-grade, free-as-in-beer operating system that provides low-risk updates for each version over a decade. This page also features a script, centos2ol.sh, that transforms an existing CentOS system into an Oracle Linux system on the fly with two commands.

So I tested this script on a CentOS 7 server from Online/Scaleway.

# curl -O https://linux.oracle.com/switch/centos2ol.sh
# chmod +x centos2ol.sh
# ./centos2ol.sh

The script grinds away for about twenty minutes; we reboot the machine and end up with a clean Oracle Linux system. To tidy up, just remove the repository files that the script deactivated:

# rm -f /etc/yum.repos.d/*.repo.deactivated
Migrating a CentOS 8.x server?

At first glance, the centos2ol.sh script only handled the migration from CentOS 7.9 to Oracle Linux 7.9. On a whim, I sent an email to the address at the bottom of the page, asking whether support for CentOS 8.x was expected in the near future.

A very nice exchange of emails ensued with a guy from Oracle, who patiently answered all the questions I asked him. And just twenty-four hours later, he sent me a link to an Oracle Github repository with an updated version of the script that supports the on-the-fly migration of CentOS 8.x to Oracle Linux 8.x.

So I tested it on a fresh installation of a CentOS 8 server at Online/Scaleway.

# yum install git
# git clone https://github.com/oracle/centos2ol.git
# cd centos2ol/
# chmod +x centos2ol.sh
# ./centos2ol.sh

Again, it grinds away for a good twenty minutes, and after the reboot we end up with a public machine running Oracle Linux 8.

Conclusion

I will probably have a lot more to say about this. For my part, I find this first experience with Oracle Linux rather conclusive, and if I decided to share it here, it is because it will probably solve a common problem for many admins of production servers who cannot accept their system becoming a moving target overnight.

Post scriptum for the cautious purists

Finally, for all of you who want to use a free and gratis clone of Red Hat Enterprise Linux without selling your soul to the devil, know that Springdale Linux is a solid alternative. It is maintained by Princeton University in the United States according to the WYGIWYG principle (What You Get Is What You Get): it is provided bare-bones and without any documentation, but it works just as well.



[Dec 29, 2020] Oracle Linux is "CentOS done right"

Notable quotes:
"... If you want a free-as-in-beer RHEL clone, you have two options: Oracle Linux or Springdale/PUIAS. My company's currently moving its servers to OL, which is "CentOS done right". Here's a blog article about the subject: ..."
"... Each version of OL is supported for a 10-year cycle. Ubuntu has five years of support. And Debian's support cycle (one year after subsequent release) is unusable for production servers. ..."
"... [Red Hat looks like ]... of a cartoon character sawing off the tree branch they are sitting on." ..."
Dec 21, 2020 | distrowatch.com

Microlinux

And what about Oracle Linux? (by Microlinux on 2020-12-21 08:11:33 GMT from France)

If you want a free-as-in-beer RHEL clone, you have two options: Oracle Linux or Springdale/PUIAS. My company's currently moving its servers to OL, which is "CentOS done right". Here's a blog article about the subject:

https://blog.microlinux.fr/migration-centos-oracle-linux/

Currently Rocky Linux is not much more than a README file on Github and a handful of Slack (ew!) discussion channels.

Each version of OL is supported for a 10-year cycle. Ubuntu has five years of support. And Debian's support cycle (one year after subsequent release) is unusable for production servers.

dragonmouth

9: @Jesse on CentOS: (by dragonmouth on 2020-12-21 13:11:04 GMT from United States)

"There is no rush and I recommend waiting a bit for the dust to settle on the situation before leaping to an alternative. "

For private users there may be plenty of time to find an alternative. However, corporate IT departments are not like jet skis able to turn on a dime. They are more like supertankers or aircraft carriers that take miles to make a turn. By the time all the committees meet and come to some decision, by the time all the upper managers who don't know what the heck they are talking about expound their opinions and by the time the CentOS replacement is deployed, a year will be gone. For corporations, maybe it is not a time to PANIC, yet, but it is high time to start looking for the O/S that will replace CentOS.

Ricardo

"This looks like the vendor equivalent..." (by Ricardo on 2020-12-21 18:06:49 GMT from Argentina)

[Red Hat looks like ]... of a cartoon character sawing off the tree branch they are sitting on."

Jesse, I couldn't have articulated it better. I'm stealing that phrase :)

Cheers and happy holidays to everyone!

[Dec 28, 2020] Time to move to Oracle Linux

Dec 28, 2020 | www.cyberciti.biz
Kyle Dec 9, 2020 @ 2:13

It's an IBM money grab. It's a shame; I use CentOS to develop and host web applications on my Linode. Obviously, small-time like that, I can't afford Red Hat, but I use it at work. CentOS allowed me to come home, build skills and develop in my free time, and apply it at work.

I also use Ubuntu, but it looks like the shift will be more toward Ubuntu.

Noname Dec 9, 2020 @ 4:20

As others said here, this is a money grab. Methinks IBM was the worst thing that happened to Linux since systemd...

Yui Dec 9, 2020 @ 4:49

Hello CentOS users,

I also work for a non-profit (cancer and other research) and use CentOS for HPC. We chose CentOS over Debian due to the 10-year support cycle, and CentOS goes well with an HPC cluster. We also wanted every single penny to go to research purposes and not waste our donations and grants on software costs. What are my CentOS alternatives for HPC? Thanks in advance for any help you are able to provide.

Holmes Dec 9, 2020 @ 5:06

Folks who rely on CentOS saw this coming when Red Hat bought them 6 years ago. Last year IBM bought Red Hat. Now, IBM+Red Hat have found a way to kill the stable releases in order to get people signing up for RHEL subscriptions. Doesn't that sound exactly like the "EEE" (embrace, extend, and exterminate) model?

Petr Dec 9, 2020 @ 5:08

For me it's simple.
I will keep my openSUSE Leap and expand its footprint until another RHEL-compatible distro is out. If I need a RHEL-compatible distro for testing until then, I will use Oracle with the RHEL kernel.
openSUSE is the closest to RHEL in terms of stability (if not better) and I am very used to it. Time to get some SLES certifications as well.

Someone Dec 9, 2020 @ 5:23

While I like Debian, and better still Devuan (no systemd), some RHEL/CentOS features like kickstart and delta RPMs don't seem to be there (or as good). Debian preseeding is much more convoluted than kickstart, for example.

Vonskippy Dec 10, 2020 @ 1:24

That's ok. For us, we left RHEL (and the CentOS testing cluster) when the satan spawn known as SystemD became the standard. We're now a happy and successful FreeBSD shop.

[Dec 28, 2020] This quick and dirty hack worked fine to convert centos 8 to oracle linux 8

Notable quotes:
"... this quick n'dirty hack worked fine to convert centos 8 to oracle linux 8, ymmv: ..."
Dec 28, 2020 | blog.centos.org

Phil says: December 9, 2020 at 2:10 pm

this quick n'dirty hack worked fine to convert centos 8 to oracle linux 8, ymmv:

# Oracle's public repository holding the OL8 release packages
repobase=http://yum.oracle.com/repo/OracleLinux/OL8/baseos/latest/x86_64/getPackage
# download the Oracle release and repo RPMs
wget \
${repobase}/redhat-release-8.3-1.0.0.1.el8.x86_64.rpm \
${repobase}/oracle-release-el8-1.0-1.el8.x86_64.rpm \
${repobase}/oraclelinux-release-8.3-1.0.4.el8.x86_64.rpm \
${repobase}/oraclelinux-release-el8-1.0-9.el8.x86_64.rpm
# remove the CentOS release package, then install the downloaded Oracle ones
rpm -e centos-linux-release --nodeps
dnf --disablerepo='*' localinstall ./*rpm
# blank the ociregion dnf variable so the default public Oracle mirrors are used
:> /etc/dnf/vars/ociregion
# drop the CentOS repo definitions and resync all packages against Oracle's repos
dnf remove centos-linux-repos
dnf --refresh distro-sync
# since I wanted to try out the unbreakable enterprise kernel:
dnf install kernel-uek
reboot
dnf remove kernel

[Dec 28, 2020] Red Hat interpretation of CenOS8 fiasco

Highly recommended!
" People are complaining because you are suddenly killing CentOS 8 which has been released last year with the promise of binary compatibility to RHEL 8 and security updates until 2029."
One of the inherent features of the GPL is that it allows clones to exist. Which means that Oracle Linux, or Rocky Linux, or Lenin Linux will simply take CentOS's place, and Red Hat will be at a disadvantage, now unable to control the clone to the extent they managed to co-opt and control CentOS. The "embrace and extinguish" charge will now hang on Red Hat and will probably continue to hang on it for years. That may not be what Red Hat brass wanted: reputational damage with zero positive effect on the revenue stream. I suppose the majority of the CentOS community will eventually migrate to the emerging RHEL clones. If that was the Red Hat / IBM goal -- well, they will reach it.
Notable quotes:
"... availability gap ..."
"... Another long-winded post that doesn't address the single, core issue that no one will speak to directly: why can't CentOS Stream and CentOS _both_ exist? Because in absence of any official response from Red Hat, the assumption is obvious: to drive RHEL sales. If that's the reason, then say it. Stop being cowards about it. ..."
"... We might be better off if Red Hat hadn't gotten involved in CentOS in the first place and left it an independent project. THEY choose to pursue this path and THEY chose to renege on assurances made around the non-stream distro. Now they're going to choose to deal with whatever consequences come from the loss of goodwill in the community. ..."
"... If the problem was in money, all RH needed to do was to ask the community. You would have been amazed at the output. ..."
"... You've alienated a few hunderd thousand sysadmins that started upgrading to 8 this year and you've thrown the scientific Linux community under a bus. You do realize Scientific Linux was discontinued because CERN and FermiLab decided to standardize on CentOS 8? This trickled down to a load of labs and research institutions. ..."
"... Nobody forced you to buy out CentOS or offer a gratis distribution. But everybody expected you to stick to the EOL dates you committed to. You boast about being the "Enterprise" Linux distributor. Then, don't act like a freaking start-up that announces stuff today and vanishes a year later. ..."
"... They should have announced this at the START of CentOS 8.0. Instead they started CentOS 8 with the belief it was going to be like CentOS7 have a long supported life cycle. ..."
"... IBM/RH/CentOS keeps replaying the same talking points over and over and ignoring the actual issues people have ..."
"... What a piece of stinking BS. What is this "gap" you're talking about? Nobody in the CentOS community cares about this pre-RHEL gap. You're trying to fix something that isn't broken. And doing that the most horrible and bizzarre way imaginable. ..."
"... As I understand it, Fedora - RHEL - CENTOS just becomes Fedora - Centos Stream - RHEL. Why just call them RH-Alpha, RH-Beta, RH? ..."
Dec 28, 2020 | blog.centos.org

Let's go back to 2003 where Red Hat saw the opportunity to make a fundamental change to become an enterprise software company with an open source development methodology.

To do so Red Hat made a hard decision and in 2003 split Red Hat Linux into Red Hat Enterprise Linux (RHEL) and Fedora Linux. RHEL was the occasional snapshot of Fedora Linux that was a product -- slowed, stabilized, and paced for production. Fedora Linux and the Project around it were the open source community for innovating -- speedier, prone to change, and paced for exploration. This solved the problem of trying to hold to two, incompatible core values (fast/slow) in a single project. After that, each distribution flourished within its intended audiences.

But that split left two important gaps. On the project/community side, people still wanted an OS that strived to be slower-moving, stable-enough, and free of cost -- an availability gap . On the product/customer side, there was an openness gap -- RHEL users (and consequently all rebuild users) couldn't contribute easily to RHEL. The rebuilds arose and addressed the availability gap, but they were closed to contributions to the core Linux distro itself.

In 2012, Red Hat's move toward offering products beyond the operating system resulted in a need for an easy-to-access platform for open source development of the upstream projects -- such as Gluster, oVirt, and RDO -- that these products are derived from. At that time, the pace of innovation in Fedora made it not an easy platform to work with; for example, the pace of kernel updates in Fedora led to breakage in these layered projects.

We formed a team I led at Red Hat to go about solving this problem, and, after approaching and discussing it with the CentOS Project core team, Red Hat and the CentOS Project agreed to " join forces ." We said joining forces because there was no company to acquire, so we hired members of the core team and began expanding CentOS beyond being just a rebuild project. That included investing in the infrastructure and protecting the brand. The goal was to evolve into a project that also enabled things to be built on top of it, and a project that would be exponentially more open to contribution than ever before -- a partial solution to the openness gap.

Bringing home the CentOS Linux users, folks who were stuck in that availability gap, closer into the Red Hat family was a wonderful side effect of this plan. My experience going from participant to active open source contributor began in 2003, after the birth of the Fedora Project. At that time, as a highly empathetic person I found it challenging to handle the ongoing emotional waves from the Red Hat Linux split. Many of my long time community friends themselves were affected. As a company, we didn't know if RHEL or Fedora Linux were going to work out. We had made a hard decision and were navigating the waters from the aftershock. Since then we've all learned a lot, including the more difficult dynamics of an open source development methodology. So to me, bringing the CentOS and other rebuild communities into an actual relationship with Red Hat again was wonderful to see, experience, and help bring about.

Over the past six years since finally joining forces, we made good progress on those goals. We started Special Interest Groups (SIGs) to manage the layered project experience, such as the Storage SIG, Virt Sig, and Cloud SIG. We created a governance structure where there hadn't been one before. We brought RHEL source code to be housed at git.centos.org . We designed and built out a significant public build infrastructure and CI/CD system in a project that had previously been sealed-boxes all the way down.


cmdrlinux says: December 19, 2020 at 2:36 pm

"This brings us to today and the current chapter we are living in right now. The move to shift focus of the project to CentOS Stream is about filling that openness gap in some key ways. Essentially, Red Hat is filling the development and contribution gap that exists between Fedora and RHEL by shifting the place of CentOS from just downstream of RHEL to just upstream of RHEL."

Another long-winded post that doesn't address the single, core issue that no one will speak to directly: why can't CentOS Stream and CentOS _both_ exist? Because in absence of any official response from Red Hat, the assumption is obvious: to drive RHEL sales. If that's the reason, then say it. Stop being cowards about it.

Mark Danon says: December 19, 2020 at 4:14 pm

Redhat has no obligation to maintain both CentOS 8 and CentOS stream. Heck, they have no obligation to maintain CentOS either. Maintaining both will only increase the workload of CentOS maintainers. I don't suppose you are volunteering to help them do the work? Be thankful for a distribution that you have been using so far, and move on.

Dave says: December 20, 2020 at 7:16 am

We might be better off if Red Hat hadn't gotten involved in CentOS in the first place and left it an independent project. THEY choose to pursue this path and THEY chose to renege on assurances made around the non-stream distro. Now they're going to choose to deal with whatever consequences come from the loss of goodwill in the community.

If they were going to pull this stunt they shouldn't have gone ahead with CentOS 8 at all and fulfilled any lifecycle expectations for CentOS 7.

Konstantin says: December 21, 2020 at 12:24 am

Sorry, but that's BS. CentOS Stream and CentOS Linux are not mutually replaceable. You cannot sell that BS to people who actually know the internals of how CentOS Linux was being developed.

If the problem was in money, all RH needed to do was to ask the community. You would have been amazed at the output.

No, it is just a primitive, direct and lame way to force "free users" to either pay or become your free-to-use beta testers (CentOS Stream *is* beta, whatever you say).

I predict you will be somewhat amazed at the actual results.

Not talking about the breach of trust. Now how much would cost all your (RH's) further promises and assurances?

Chris Mair says: December 20, 2020 at 3:21 pm

To: centos-devel@centos.org
To: centos-questions@redhat.com

Hi,

Re: https://blog.centos.org/2020/12/balancing-the-needs-around-the-centos-platform/

you can spin this to the moon and back. The fact remains you just killed CentOS Linux and your users' trust by moving the EOL of CentOS Linux 8 from 2029 to 2021.

You've alienated a few hundred thousand sysadmins that started upgrading to 8 this year, and you've thrown the Scientific Linux community under a bus. You do realize Scientific Linux was discontinued because CERN and Fermilab decided to standardize on CentOS 8? This trickled down to a load of labs and research institutions.

Nobody forced you to buy out CentOS or offer a gratis distribution. But everybody expected you to stick to the EOL dates you committed to. You boast about being the "Enterprise" Linux distributor. Then, don't act like a freaking start-up that announces stuff today and vanishes a year later.

The correct way to handle this would have been to kill the future CentOS 9, giving everybody the time to cope with the changes.

I've earned my RHCE in 2003 (yes that's seventeen years ago). Since then, many times, I've recommended RHEL or CentOS to the clients I do free lance work for. Just a few weeks ago I was asked to give an opinion on six CentOS 7 boxes about to be deployed into a research system to be upgraded to 8. I gave my go. Well, that didn't last long.

What do you expect me to recommend now? Buying RHEL licenses? That may or may be not have a certain cost per year and may or may be not supported until a given date? Once you grant yourself the freedom to retract whatever published information, how can I trust you? What added values do I get over any of the community supported distributions (given I can support myself)?

And no, CentOS Stream cannot "cover 95% (or so) of current user workloads". Stream was introduced as "a rolling preview of what's next in RHEL".

I'm not interested at all in a "a rolling preview of what's next in RHEL". I'm interested in a stable distribution I can trust to get updates until the given EOL date.

You've made me look elsewhere for that.

-- Chris

Chip says: December 20, 2020 at 6:16 pm

I guess my biggest issue is that they should have announced this at the START of CentOS 8.0. Instead they started CentOS 8 with the belief that it was going to be like CentOS 7 and have a long supported life cycle. What they did was basically bait and switch. Not cool. Especially not cool for those running multiple nodes in high-performance computing clusters.

Alex says: December 21, 2020 at 12:51 am

I have over 300,000 Centos nodes that require Long term support as it's impossible to turn them over rapidly. I also have 154,000 RHEL nodes. I now have to migrate 454,000 nodes over to Ubuntu because Redhat just made the dumbest decision short of letting IBM acquire them I've seen. Whitehurst how could you let this happen? Nothing like millions in lost revenue from a single customer.

Nika jous says: December 21, 2020 at 1:43 pm

Just migrated to openSUSE. Rather than crying over a dead OS, it's better to act yourself. Red Hat is a sinking ship; it probably won't last the next decade. A legendary failure like IBM never has the upper hand in the Linux world. It's too competitive now. Customers have more options to choose from. I think the person who made this decision is probably ignorant about the current market, or a top-grade fool.

Ang says: December 22, 2020 at 2:36 am

IBM/RH/CentOS keeps replaying the same talking points over and over and ignoring the actual issues people have. You say you are reading them, but choose to ignore it and that is even worse!

People still don't understand why CentOS stream and CentOS can't co-exist. If your goal was not to support CentOS 8, why did you put 2029 date or why did you even release CentOS 8 in the first place?

Hell, you could have at least had the goodwill with the community to make CentOS 8 last until end of CentOS 7! But no, you discontinued CentOS 8 giving people only 1 year to respond, and timed it right after EOL of CentOS6.

Why didn't you even bother asking the community first and come to a compromise or something?

Again, not a single person had a problem with CentOS stream, the problem was having the rug pulled under their feet! So stop pretending and address it properly!

Even worse, you knew this was an issue, it's like literally #1 on your issue list "Shift Board to be more transparent in support of becoming a contributor-focused open source project"

And you FAILED! Where was the transparency?!

Ang says: December 22, 2020 at 2:36 am

A link to the issue: https://git.centos.org/centos/board/issue/1

AP says: December 22, 2020 at 6:55 am

What a piece of stinking BS. What is this "gap" you're talking about? Nobody in the CentOS community cares about this pre-RHEL gap. You're trying to fix something that isn't broken. And doing that the most horrible and bizzarre way imaginable.

Len Inkster says: December 22, 2020 at 4:13 pm

As I understand it, Fedora - RHEL - CentOS just becomes Fedora - CentOS Stream - RHEL. Why not just call them RH-Alpha, RH-Beta, RH?

Anyone who wants to continue with CentOS? Fork the project and maintain it yourselves. That's how we got to CentOS from Linus Torvalds' original Linux.

Peter says: December 22, 2020 at 5:36 pm

I can only describe this as a disappointment, if not a betrayal, of the whole CentOS user base. This decision was clearly made without considering its impact on the majority of CentOS community use cases.

If you need an upstream contribution channel for RHEL, create it; do not destroy the stable downstream. Clear and simple. All other 'explanations' are cover-ups for the real purpose of this action.

This stinks of politics within IBM/RH meddling with CentOS. I hope, Rocky will bring the desired stability, that community was relying on with CentOS.

Goodbye CentOS, it was nice 15 years.

Ken Sanderson says: December 23, 2020 at 1:57 pm

We've just agreed to cancel our RHEL subscriptions and will be moving them and our CentOS boxes away as well. It was a nice run, but while it will be painful, it is a chance to move far, far away from the terrible decisions made here.

[Dec 28, 2020] Red Hat Goes Full IBM and Says Farewell to CentOS - ServeTheHome

Dec 28, 2020 | www.servethehome.com

The intellectually easy answer to what is happening is that IBM is putting pressure on Red Hat to hit bigger numbers in the future. Red Hat sees a captive audience in its CentOS user base and is looking to convert a percentage of them into paying customers. Everyone else can go to Ubuntu or elsewhere if they do not want to pay...

[Dec 28, 2020] Call our sales people and open your wallet if you use CentOS in prod

Dec 28, 2020 | freedomben.medium.com

It seemed obvious (via Occam's Razor) that CentOS had cannibalized RHEL sales for the last time and was being put out to die. Statements like:

If you are using CentOS Linux 8 in a production environment, and are
concerned that CentOS Stream will not meet your needs, we encourage you
to contact Red Hat about options.

That line sure seemed like horrific marketing speak for "call our sales people and open your wallet if you use CentOS in prod." ( cue evil mustache-stroking capitalist villain ).

... CentOS will no longer be downstream of RHEL as it was previously. CentOS will now be upstream of the next RHEL minor release .

... ... ...

I'm watching Rocky Linux closely myself. While I plan to use CentOS for the vast majority of my needs, Rocky Linux may have a place in my life as well, as an example powering my home router. Generally speaking, I want my router to be as boring as absolute possible. That said even that may not stay true forever, if for example CentOS gets good WireGuard support.

Lastly, but certainly not least, Red Hat has talked about upcoming low/no-cost RHEL options. Keep an eye out for those! I have no idea the details, but if you currently use CentOS for personal use, I am optimistic that there may be a way to get RHEL for free coming soon. Again, this is just my speculation (I have zero knowledge of this beyond what has been shared publicly), but I'm personally excited.

[Dec 27, 2020] Why Red Hat dumped CentOS for CentOS Stream by Steven J. Vaughan-Nichols

Red Hat has always had an uneasy relationship with CentOS. Red Hat brass always viewed it as something that steals Red Hat licenses. So the "stop the steal" move might not be IBM-inspired, but it is firmly in the IBM tradition. And like many similar IBM moves, it will backfire.
The hiring of CentOS developers in 2014 gave Red Hat unprecedented control over the project. Why on Earth would they now want independent projects like Rocky Linux to re-emerge and fill the vacuum? They cannot avoid the side effect of using the GPL -- it allows clones. Why it is better to face projects that are hostile to Red Hat than to keep an "in-house", domesticated project is unclear to me. As many large enterprises deploy a mix of Red Hat and CentOS, the initial reaction may go in the opposite direction from what Red Hat brass expected: they will get fewer licenses, not more, by adopting the "One IBM" way.
Dec 21, 2020 | www.zdnet.com

On Hacker News , the leading comment was: "Imagine if you were running a business, and deployed CentOS 8 based on the 10-year lifespan promise . You're totally screwed now, and Red Hat knows it. Why on earth didn't they make this switch starting with CentOS 9???? Let's not sugar coat this. They've betrayed us."

Over at Reddit/Linux , another person snarled, "We based our Open Source project on the latest CentOS releases since CentOS 4. Our flagship product is running on CentOS 8 and we *sure* did bet the farm on the promised EOL of 31st May 2029."

A popular tweet from The Best Linux Blog In the Unixverse, nixcraft , an account with over 200-thousand subscribers, went: Oracle buys Sun: Solaris Unix, Sun servers/workstation, and MySQL went to /dev/null. IBM buys Red Hat: CentOS is going to >/dev/null . Note to self: If a big vendor such as Oracle, IBM, MS, and others buys your fav software, start the migration procedure ASAP."

Many others joined in this choir of annoyed CentOS users that it was IBM's fault that their favorite Linux was being taken away from them. Still, others screamed Red Hat was betraying open-source itself.

... ... ...

Still another ex-Red Hat official said: "If it wasn't for CentOS, Red Hat would have been a 10-billion dollar company before Red Hat became a billion-dollar business."

... ... ...

[Dec 27, 2020] There are now countless Internet servers out there that run CentOS. This is why the Debian project is so important.

Dec 27, 2020 | freedomben.medium.com

There are companies that sell appliances based on CentOS. Websense/Forcepoint is one of them. The Websense appliance runs the base OS of CentOS, on top of which runs their Web-filtering application. Same with RSA. Their NetWitness SIEM runs on top of CentOS.

Likewise, there are now countless Internet servers out there that run CentOS. There's now a huge user base of CentOS out there.

This is why the Debian project is so important. I will be converting everything that is currently CentOS to Debian. For those who want to use Ubuntu, the Debian derivative, that is also probably a good idea.

[Dec 23, 2020] Red Hat and GPL: the uneasy romance ended long ago, but Red Hat still depends on the GPL, as it does not develop many components itself and gets them for free from the community and other vendors

It is all about money and executive bonuses: shortsighted executives want more and more money, as if the current huge revenue were not enough...
Dec 23, 2020 | www.zdnet.com

A former Red Hat executive confided, "CentOS was gutting sales. The customer perception was 'it's from Red Hat and it's a clone of RHEL, so it's good to go!' It's not. It's a second-rate copy." From where this person sits, "This is 100% defensive to stave off more losses to CentOS."

Still another ex-Red Hat official said: "If it wasn't for CentOS, Red Hat would have been a 10-billion dollar company before Red Hat became a billion-dollar business."

Yet another Red Hat staffer snapped, "Look at the CentOS FAQ . It says right there:

CentOS Linux is NOT supported in any way by Red Hat, Inc.

CentOS Linux is NOT Red Hat Linux, it is NOT Fedora Linux. It is NOT Red Hat Enterprise Linux. It is NOT RHEL. CentOS Linux does NOT contain Red Hat® Linux, Fedora, or Red Hat® Enterprise Linux.

CentOS Linux is NOT a clone of Red Hat® Enterprise Linux.

CentOS Linux is built from publicly available source code provided by Red Hat, Inc for Red Hat Enterprise Linux in a completely different (CentOS Project maintained) build system.

We don't owe you anything."

[Dec 23, 2020] Patch Command Tutorial With Examples For Linux by İsmail Baydan

Sep 03, 2017 | www.poftut.com

patch is a command used to apply patch files to files such as source code and configuration files. A patch file holds the differences between the original file and the new file. To generate those differences, or the patch, we use the diff tool.

Software consists of a lot of source code, which is written by developers and changes over time. Shipping a whole new copy of every file for each change is neither practical nor fast, so distributing only the changes is the better way. The changes are applied to the old files, and the resulting patched files are then compiled to build the new version of the software.

Syntax
patch [options] [originalfile [patchfile]]
patch -pnum < patchfile
Help
$ patch --help
Create Patch File

Now we will create a patch file, but first we need some simple source code in two different versions. We will call the source file myapp.c.

myapp_old.c
#include <stdio.h>  
  
void main(){  
  
printf("Hi poftut");  
  
}
myapp.c
#include <stdio.h>  
  
void main(){  
  
printf("Hi poftut");  
 
printf("This is new line as a patch"); 
  
}

Now we will create a patch file named myapp.patch .

$ diff -u myapp_old.c myapp.c > myapp.patch

We can print the myapp.patch file with the following command:

$ cat myapp.patch
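
The output will look roughly like the following (the timestamps, and possibly the exact hunk header, will differ on your system):

--- myapp_old.c 2017-09-03 10:00:00.000000000 +0000
+++ myapp.c     2017-09-03 10:05:00.000000000 +0000
@@ -3,5 +3,7 @@
 void main(){
 
 printf("Hi poftut");
+
+printf("This is new line as a patch");
 
 }
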
Apply Patch File

Now we have a patch file, and we assume it has been transferred to the system that holds the old source code. We simply apply the patch file:

$ patch < myapp.patch
Take Backup Before Applying Patch

One useful feature is taking a backup before applying a patch. We will use the -b option to take the backup. In our example we will patch our source code file with myapp.patch.

$ patch -b < myapp.patch

The backup gets the same name as the source code file with the .orig extension appended, so the backup file will be named myapp.c.orig.

Set Backup File Version

When taking a backup there may already be an existing backup file, so we need a way to save multiple backups without overwriting them. The -V option sets the versioning mechanism used for backups of the original file. In this example we will use numbered versioning.

$ patch -b -V numbered < myapp.patch

As we can see, the new backup file gets a numbered name such as myapp.c.~1~

Validate a Patch File Without Applying It (Dry Run)

We may want to only validate a patch or preview the result of patching without changing anything. The --dry-run option emulates the patching process but does not actually modify any file.

$ patch --dry-run < myapp.patch
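
A handy pattern (not covered in the tutorial itself) is to chain the dry run with the real invocation, so the patch is only applied when it validates cleanly:

$ patch --dry-run < myapp.patch && patch -b < myapp.patch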
Reverse Patch

Sometimes we may need to apply a patch in reverse, undoing the changes it describes. We can use the -R option for this operation. In this example we will patch myapp_old.c rather than myapp.c:

$ patch -R myapp_old.c < myapp.patch

As we can see, the new changes are reverted.

2 thoughts on "Patch Command Tutorial With Examples For Linux"
  1. David K Hill 07/11/2019 at 4:15 am

    Thanks for the writetup to help me to demystify the patching process. The hands on tutorial definitely helped me.

    The ability to reverse the patch was most helpful!

  2. Javed 28/12/2019 at 7:16 pm

    very well and detailed explanation of the patch utility. Was able to simulate and practice it for better understanding, thanks for your efforts !

[Dec 23, 2020] HowTo Apply a Patch File To My Linux

Dec 23, 2020 | www.cyberciti.biz

A note about working on an entire source tree

First, make a copy of the source tree:
## Original source code is in lighttpd-1.4.35/ directory ##
$ cp -R lighttpd-1.4.35/ lighttpd-1.4.35-new/

cd to the lighttpd-1.4.35-new directory and make changes as per your requirements:
$ cd lighttpd-1.4.35-new/
$ vi geoip-mod.c
$ vi Makefile

Finally, create a patch with the following command:
$ cd ..
$ diff -rupN lighttpd-1.4.35/ lighttpd-1.4.35-new/ > my.patch

You can use the my.patch file to patch the lighttpd-1.4.35 source code on a different computer/server using the patch command as discussed above:
$ patch -p1 < my.patch
See the man pages of patch and related commands for more information and usage: patch(1), diff(1), bash(1)
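
As a concrete sketch based on the example above (assuming my.patch sits in the directory that contains the source tree), on the target machine you would typically test the patch first and then apply it from inside the original tree:

$ cd lighttpd-1.4.35/
$ patch -p1 --dry-run < ../my.patch    # verify that the patch applies cleanly
$ patch -p1 < ../my.patch              # apply it for real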

[Dec 10, 2020] Here's a hot tip for the IBM geniuses that came up with this. Rebrand CentOS as New Coke, and you've got yourself a real winner.

Dec 10, 2020 | blog.centos.org

Ward Mundy says: December 9, 2020 at 3:12 am

Happy to report that we've invested exactly one day in CentOS 7 to CentOS 8 migration. Thanks, IBM. Now we can turn our full attention to Debian and never look back.

Here's a hot tip for the IBM geniuses that came up with this. Rebrand CentOS as New Coke, and you've got yourself a real winner.

[Dec 10, 2020] Does Oracle Linux have staying power against Red Hat

Notable quotes:
"... If you need official support, Oracle support is generally cheaper than RedHat. ..."
"... You can legally run OL free and have access to patches/repositories. ..."
"... Full binary compatibility with RedHat so if anything is certified to run on RedHat, it automatically certified for Oracle Linux as well. ..."
"... Premium OL subscription includes a few nice bonuses like DTrace and Ksplice. ..."
"... Forgot to mention that converting RedHat Linux to Oracle is very straightforward - just matter of updating yum/dnf config to point it to Oracle repositories. Not sure if you can do it with CentOS (maybe possible, just never needed to convert CentOS to Oracle). ..."
Dec 10, 2020 | blog.centos.org

Matthew Stier says: December 8, 2020 at 8:11 pm

My office switched the bulk of our RHEL to OL years ago, and find it a great product, and great support, and only needing to get support for systems we actually want support on.

Oracle provided scripts to convert EL5, EL6, and EL7 systems, and was able to convert some EL4 systems I still have running. (Its a matter of going through the list of installed packages, use 'rpm -e --justdb' to remove the package from the rpmdb, and re-installing the package (without dependencies) from the OL ISO.)

art_ok 1 point· 5 minutes ago

We have been using Oracle Linux exclusively last 5-6 years for everything - thousands of servers both for internal use and hundred or so customers.

Not a single time regretted, had any issues or were tempted to move to RedHat let alone CentOS.

I found Oracle Linux has several advantages over RedHat/CentOS:

If you need official support, Oracle support is generally cheaper than RedHat.
You can legally run OL free and have access to patches/repositories.
Full binary compatibility with RedHat so if anything is certified to run on RedHat, it automatically certified for Oracle Linux as well.
It is very easy to switch between supported and free setup (say, you have proof-of-concept setup running free OL, but then it is being promoted to production status - just matter of registering box with Oracle, no need to reinstall/reconfigure anything).
You can easily move licensed/support from one box to another so you always run the same OS and do not have to think and decide (RedHat for production / CentOS for Dec/test).
You have a choice to run good old RedHat kernel or use newer Oracle kernel (which is pretty much vanilla kernel with minimal modification - just newer). We generally run Oracle kernels on all boxes unless we have to support particularly pedantic customer who insist on using old RedHat kernel.
Premium OL subscription includes a few nice bonuses like DTrace and Ksplice.

Overall, it is pleasure to work and support OL.

Negatives:

I found RedHat knowledge base / documentation is much better than Oracle's.
Oracle does not offer extensive support for "advanced" products like JBoss, Directory Server, etc. Obviously Oracle has its own equivalent commercial offerings (Weblogic, etc) and prefers customers to use them.
Some complain about quality of Oracle's support. Can't really comment on that. Had no much exposure to RedHat support, maybe used it couple of times and it was good. Oracle support can be slower, but in most cases it is good/sufficient. Actually over the last few years support quality for Linux has improved noticeably - guess Oracle pushes their cloud very aggressively and as a result invests in Linux support (as Oracle cloud aka OCI runs on Oracle Linux).
art_ok 1 point· just now

Forgot to mention that converting RedHat Linux to Oracle is very straightforward - just matter of updating yum/dnf config to point it to Oracle repositories. Not sure if you can do it with CentOS (maybe possible, just never needed to convert CentOS to Oracle).

[Dec 10, 2020] Backlash against Red Hat management started

In the end IBM/Red Hat might even lose money, as powerful organizations such as universities might abandon Red Hat as their platform. Or maybe not: Red Hat managed to push systemd down everyone's throat without any major hit to revenue, so why not repeat the trick with CentOS? In any case IBM now owns enterprise Linux, and the bitter complaints and threats of retribution in this forum are just a symptom that development is now completely driven by corporate brass, and all key decisions belong to them.
Community-wise, this is plainly bad news for Open Source and all Open Source communities. IBM explained to them very clearly: you do not matter. And an organized minority always beats a disorganized majority. Most large organizations will probably stick with a Red Hat compatible OS, moving to Oracle Linux or Rocky Linux, if it materializes, rather than to Debian.
What is interesting is that most people here believe that when security patches stop, that is the end of life for a particular Linux version. It is an interesting superstition, and it shows how conditioned by corporations Linux folk are and how far from BSD folk they actually are. Security is an architectural matter first and foremost. Patches are just a band-aid and cannot change the general security situation in Linux no matter how hard anyone tries, but they now serve as a powerful tool of corporate mind control over the user population. Fear is a powerful instrument of mind control.
In reality, the security of most systems on an internal network does not change one bit with patches. And on an external network only applications with open ports matter (that is why ssh should be restricted to the subnets actually used, not opened to the whole world).
Notable quotes:
"... Bad idea. The whole point of using CentOS is it's an exact binary-compatible rebuild of RHEL. With this decision RH is killing CentOS and inviting to create a new *fork* or use another distribution ..."
"... We all knew from the moment IBM bought Redhat that we were on borrowed time. IBM will do everything they can to push people to RHEL even if that includes destroying a great community project like CentOS. ..."
"... First CoreOS, now CentOS. It's about time to switch to one of the *BSDs. ..."
"... I guess that means the tens of thousands of cores of research compute I manage at a large University will be migrating to Debian. ..."
"... IBM is declining, hence they need more profit from "useless" product line. So disgusting ..."
"... An entire team worked for months on a centos8 transition at the uni I work at. I assume a small portion can be salvaged but reading this it seems most of it will simply go out the window ..."
"... Unless the community can center on a new single proper fork of RHEL, it makes the most sense (to me) to seek refuge in Debian as it is quite close to CentOS in stability terms. ..."
"... Another one bites the dust due to corporate greed, which IBM exemplifies ..."
"... More likely to drive people entirely out of the RHEL ecosystem. ..."
"... Don't trust Red Hat. 1 year ago Red Hat's CTO Chris Wright agreed in an interview: 'Old school CentOS isn't going anywhere. Stream is available in parallel with the existing CentOS builds. In other words, "nothing changes for current users of CentOS."' https://www.zdnet.com/article/red-hat-introduces-rolling-release-centos-stream/ ..."
"... 'To be exact, CentOS Stream is an upstream development platform for ecosystem developers. It will be updated several times a day. This is not a production operating system. It's purely a developer's distro.' ..."
"... Read again: CentOS Stream is not a production operating system. 'Nuff said. ..."
"... This makes my decision to go with Ansible and CentOS 8 in our enterprise simple. Nope, time to got with Puppet or Chef. ..."
"... Ironic, and it puts those of us who have recently migrated many of our development serves to CentOS8 in a really bad spot. Luckily we haven't licensed RHEL8 production servers yet -- and now that's never going to happen. ..."
"... What IBM fails to understand is that many of us who use CentOS for personal projects also work for corporations that spend millions of dollars annually on products from companies like IBM and have great influence over what vendors are chosen. This is a pure betrayal of the community. Expect nothing less from IBM. ..."
"... IBM is cashing in on its Red Hat acquisition by attempting to squeeze extra licenses from its customers.. ..."
"... Hoping that stabbing Open Source community in the back, will make it switch to commercial licenses is absolutely preposterous. This shows how disconnected they're from reality and consumed by greed and it will simply backfire on them, when we switch to Debian or any other LTS alternative. ..."
"... Centos was handy for education and training purposes and production when you couldn't afford the fees for "support", now it will just be a shadow of Fedora. ..."
"... There was always a conflict of interest associated with Redhat managing the Centos project and this is the end result of this conflict of interest. ..."
"... The reality is that someone will repackage Redhat and make it just like Centos. The only difference is that Redhat now live in the same camp as Oracle. ..."
"... Everyone predicted this when redhat bought centos. And when IBM bought RedHat it cemented everyone's notion. ..."
"... I am senior system admin in my organization which spends millions dollar a year on RH&IBM products. From tomorrow, I will do my best to convince management to minimize our spending on RH & IBM ..."
"... IBM are seeing every CentOS install as a missed RHEL subscription... ..."
"... Some years ago IBM bought Informix. We switched to PostgreSQL, when Informix was IBMized. One year ago IBM bought Red Hat and CentOS. CentOS is now IBMized. Guess what will happen with our CentOS installations. What's wrong with IBM? ..."
"... Remember when RedHat, around RH-7.x, wanted to charge for the distro, the community revolted so much that RedHat saw their mistake and released Fedora. You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time. ..."
"... As I predicted, RHEL is destroying CentOS, and IBM is running Red Hat into the ground in the name of profit$. Why is anyone surprised? I give Red Hat 12-18 months of life, before they become another ordinary dept of IBM, producing IBM Linux. ..."
"... Happy to donate and be part of the revolution away the Corporate vampire Squid that is IBM ..."
"... Red Hat's word now means nothing to me. Disagreements over future plans and technical direction are one thing, but you *lied* to us about CentOS 8's support cycle, to the detriment of *everybody*. You cost us real money relying on a promise you made, we thought, in good faith. ..."
Dec 10, 2020 | blog.centos.org

Internet User says: December 8, 2020 at 5:13 pm

This is a pretty clear indication that you people are completely out of touch with your users.

Joel B. D. says: December 8, 2020 at 5:17 pm

Bad idea. The whole point of using CentOS is it's an exact binary-compatible rebuild of RHEL. With this decision RH is killing CentOS and inviting to create a new *fork* or use another distribution. Do you realize how much market share you will be losing and how much chaos you will be creating with this?

"If you are using CentOS Linux 8 in a production environment, and are concerned that CentOS Stream will not meet your needs, we encourage you to contact Red Hat about options". So this is the way RH is telling us they don't want anyone to use CentOS anymore and switch to RHEL?

Michael says: December 8, 2020 at 8:31 pm

That's exactly what they're saying. We all knew from the moment IBM bought Redhat that we were on borrowed time. IBM will do everything they can to push people to RHEL even if that includes destroying a great community project like CentOS.

OS says: December 8, 2020 at 6:20 pm

First CoreOS, now CentOS. It's about time to switch to one of the *BSDs.

JD says: December 8, 2020 at 6:35 pm

Wow. Well, I guess that means the tens of thousands of cores of research compute I manage at a large University will be migrating to Debian. I've just started preparing to shift from Scientific Linux 7 to CentOS due to SL being discontinued by 2024. Glad I've only just started - not much work to throw away.

ShameOnIBM says: December 8, 2020 at 7:07 pm

IBM is declining, hence they need more profit from "useless" product line. So disgusting

MLF says: December 8, 2020 at 7:15 pm

An entire team worked for months on a centos8 transition at the uni I work at. I assume a small portion can be salvaged but reading this it seems most of it will simply go out the window. Does anyone know if this decision of dumping centos8 is final?

MM says: December 8, 2020 at 7:28 pm

Unless the community can center on a new single proper fork of RHEL, it makes the most sense (to me) to seek refuge in Debian as it is quite close to CentOS in stability terms.

Already existing functioning distribution ecosystem, can probably do good with influx of resources to enhance the missing bits, such as further improving SELinux support and expanding Debian security team.

I say this without any official or unofficial involvement with the Debian project, other than being a user.

And we have just launched hundred of Centos 8 servers.

Faisal Sehbai says: December 8, 2020 at 7:32 pm

Another one bites the dust due to corporate greed, which IBM exemplifies. This is why I shuddered when they bought RH. There is nothing that IBM touches that gets better, other than the bottom line of their suits!

Disgusting!

William Smith says: December 8, 2020 at 7:39 pm

This is a big mistake. RedHat did this with RedHat Linux 9 the market leading Linux and created Fedora, now an also-ran to Ubuntu. I spent a lot of time during Covid to convert from earlier versions to 8, and now will have to review that work with my customer.

Daniele Brunengo says: December 8, 2020 at 7:48 pm

I just finished building a CentOS 8 web server, worked out all the nooks and crannies and was very satisfied with the result. Now I have to do everything from scratch? The reason why I chose this release was that every website and its brother were giving a 2029 EOL. Changing that is the worst betrayal of trust possible for the CentOS community. It's unbelievable.

David Potterveld says: December 8, 2020 at 8:08 pm

What a colossal blunder: a pivot from the long-standing mission of an OS providing stability, to an unstable development platform, in a manner that betrays its current users. They should remove the "C" from CentOS because it no longer has any connection to a community effort. I wonder if this is a move calculated to drive people from a free near clone of RHEL to a paid RHEL subscription? More likely to drive people entirely out of the RHEL ecosystem.

a says: December 8, 2020 at 9:08 pm

From a RHEL perspective I understand why they'd want it this way. CentOS was probably cutting deep into potential RedHat license sales. Though why or how RedHat would have a say in how CentOS is being run in the first place is.. troubling.

From a CentOS perspective you may as well just take the project out back and close it now. If people wanted to run beta-test tier RHEL they'd run Fedora. "LATER SECURITY FIXES AND UNTESTED 'FEATURES'?! SIGN ME UP!" -nobody

I'll probably run CentOS 7 until the end and then swap over to Debian when support starts hurting me. What a pain.

Ralf says: December 8, 2020 at 9:08 pm

Don't trust Red Hat. 1 year ago Red Hat's CTO Chris Wright agreed in an interview: 'Old school CentOS isn't going anywhere. Stream is available in parallel with the existing CentOS builds. In other words, "nothing changes for current users of CentOS."' https://www.zdnet.com/article/red-hat-introduces-rolling-release-centos-stream/

I'm a current user of old school CentOS, so keep your promise, Mr CTO.

Tamas says: December 8, 2020 at 10:01 pm

That was quick: "Old school CentOS isn't going anywhere. Stream is available in parallel with the existing CentOS builds. In other words, "nothing changes for current users of CentOS."

https://www.zdnet.com/article/red-hat-introduces-rolling-release-centos-stream/

Konstantin says: December 9, 2020 at 3:36 pm

From the same article: 'To be exact, CentOS Stream is an upstream development platform for ecosystem developers. It will be updated several times a day. This is not a production operating system. It's purely a developer's distro.'

Read again: CentOS Stream is not a production operating system. 'Nuff said.

Samuel C. says: December 8, 2020 at 10:53 pm

This makes my decision to go with Ansible and CentOS 8 in our enterprise simple. Nope, time to got with Puppet or Chef. IBM did what I thought they would screw up Red Hat. My company is dumping IBM software everywhere - this means we need to dump CentOS now too.

Brendan says: December 9, 2020 at 12:15 am

Ironic, and it puts those of us who have recently migrated many of our development serves to CentOS8 in a really bad spot. Luckily we haven't licensed RHEL8 production servers yet -- and now that's never going to happen.

vinci says: December 8, 2020 at 11:45 pm

I can't believe what IBM is actually doing. This is a direct move against all that open source means. They want to do exactly the same thing they're doing with awx (vs. ansible tower). You're going against everything that stands for open source. And on top of that you choose to stop offering support for Centos 8, all of a sudden! What a horrid move on your part. This only reliable choice that remains is probably going to be Debian/Ubuntu. What a waste...

Peter Vonway says: December 8, 2020 at 11:56 pm

What IBM fails to understand is that many of us who use CentOS for personal projects also work for corporations that spend millions of dollars annually on products from companies like IBM and have great influence over what vendors are chosen. This is a pure betrayal of the community. Expect nothing less from IBM.

Scott says: December 9, 2020 at 8:38 am

This is exactly it. IBM is cashing in on its Red Hat acquisition by attempting to squeeze extra licenses from its customers.. while not taking into account the fact that Red Hat's strong adoption into the enterprise is a direct consequence of engineers using the nonproprietary version to develop things at home in their spare time.

Having an open source, non support contract version of your OS is exactly what drives adoption towards the supported version once the business decides to put something into production.

They are choosing to kill the golden goose in order to get the next few eggs faster. IBM doesn't care about anything but its large enterprise customers. Very stereotypically IBM.

OSLover says: December 9, 2020 at 12:09 am

So sad. Not only breaking the support promise but so quickly (2021!)

Business wise, a lot of business software is providing CentOS packages and support. Like hosting panels, backup software, virtualization, Management. I mean A LOT of money worldwide is in dark waters now with this announcement. It took years for CentOS to appear in their supported and tested distros. It will disappear now much faster.

Community wise, this is plain bad news for Open Source and all Open Source communities. This is sad. I wonder, are open source developers nowadays happy to spend so many hours for something that will in the end benefit IBM "subscribers" only in the end? I don't think they are.

What a sad way to end 2020.

technick says: December 9, 2020 at 12:09 am

I don't want to give up on CentOS but this is a strong life changing decision. My background is linux engineering with over 15+ years of hardcore experience. CentOS has always been my go to when an organization didn't have the appetite for RHEL and the $75 a year license fee per instance. I fought off Ubuntu take overs at 2 of the last 3 organizations I've been with successfully. I can't, won't fight off any more and start advocating for Ubuntu or pure Debian moving forward.

RIP CentOS. Red Hat killed a great project. I wonder if Anisble will be next?

ConcernedAdmin says: December 9, 2020 at 12:47 am

Hoping that stabbing Open Source community in the back, will make it switch to commercial licenses is absolutely preposterous. This shows how disconnected they're from reality and consumed by greed and it will simply backfire on them, when we switch to Debian or any other LTS alternative. I can't think moving everything I so caressed and loved to a mess like Ubuntu.

John says: December 9, 2020 at 1:32 am

Assinine. This is completely ridiculous. I have migrated several servers from CentOS 7 to 8 recently with more to go. We also have a RHEL subscription for outward facing servers, CentOS internal. This type of change should absolutely have been announced for CentOS 9. This is garbage saying 1 year from now when it was supposed to be till 2029. A complete betrayal. One year to move everything??? Stupid.

Now I'm going to be looking at a couple of other options but it won't be RHEL after this type of move. This has destroyed my trust in RHEL as I'm sure IBM pushed for this. You will be losing my RHEL money once I chose and migrate. I get companies exist to make money and that's fine. This though is purely a naked money grab that betrays an established timeline and is about to force massive work on lots of people in a tiny timeframe saying "f you customers.". You will no longer get my money for doing that to me

Concerned Fren says: December 9, 2020 at 1:52 am

In hind sight it's clear to see that the only reason RHEL took over CentOS was to kill the competition.

This is also highly frustrating as I just completed new CentOS8 and RHEL8 builds for Non-production and Production Servers and had already begun deployments. Now I'm left in situation of finding a new Linux distribution for our enterprise while I sweat out the last few years of RHEL7/CentOS7. Ubuntu is probably a no go there enterprise tooling is somewhat lacking, and I am of the opinion that they will likely be gobbled up buy Microsoft in the next few years.

Unfortunately, the short-sighted RH/IBMer that made this decision failed to realize that a lot of Admins that used Centos at home and in the enterprise also advocated and drove sales towards RedHat as well. Now with this announcement I'm afraid the damage is done and even if you were to take back your announcement, trust has been broken and the blowback will ultimately mean the death of CentOS and reduced sales of RHEL. There is however an opportunity for another Corporations such as SUSE which is own buy Microfocus to capitalize on this epic blunder simply by announcing an LTS version of OpenSues Leap. This would in turn move people/corporations to the Suse platform which in turn would drive sale for SLES.

William Ashford says: December 9, 2020 at 2:02 am

So the inevitable has come to pass, what was once a useful Distro will disappear like others have. Centos was handy for education and training purposes and production when you couldn't afford the fees for "support", now it will just be a shadow of Fedora.

Christian Reiss says: December 9, 2020 at 6:28 am

This is disgusting. Bah. As a CTO I will now - today - assemble my teams and develop a plan to migrate all DataCenters back to Debian for good. I will also instantly instruct the termination of all mirroring of your software.

For the software (CentOS) I hope for a quick death that will not drag on for years.

Ian says: December 9, 2020 at 2:10 am

This is a bit sad. There was always a conflict of interest associated with Redhat managing the Centos project and this is the end result of this conflict of interest.

There is a genuine benefit associated with the existence of Centos for Redhat however it would appear that that benefit isn't great enough and some arse clown thought that by forcing users to migrate it will increase Redhat's revenue.

The reality is that someone will repackage Redhat and make it just like Centos. The only difference is that Redhat now live in the same camp as Oracle.

cody says: December 9, 2020 at 4:53 am

Everyone predicted this when redhat bought centos. And when IBM bought RedHat it cemented everyone's notion.

Ganesan Rajagopal says: December 9, 2020 at 5:09 am

Thankfully we just started our migration from CentOS 7 to 8 and this surely puts a stop to that. Even if CentOS backtracks on this decision because of community backlash, the reality is the trust is lost. You've just given a huge leg for Ubuntu/Debian in the enterprise. Congratulations!

Bomel says: December 9, 2020 at 6:22 am

I am senior system admin in my organization which spends millions dollar a year on RH&IBM products. From tomorrow, I will do my best to convince management to minimize our spending on RH & IBM, and start looking for alternatives to replace existing RH & IBM products under my watch.

Steve says: December 9, 2020 at 8:57 am

IBM are seeing every CentOS install as a missed RHEL subscription...

Ralf says: December 9, 2020 at 10:29 am

Some years ago IBM bought Informix. We switched to PostgreSQL, when Informix was IBMized. One year ago IBM bought Red Hat and CentOS. CentOS is now IBMized. Guess what will happen with our CentOS installations. What's wrong with IBM?

Michel-André says: December 9, 2020 at 5:18 pm

Hi all,

Remember when RedHat, around RH-7.x, wanted to charge for the distro, the community revolted so much that RedHat saw their mistake and released Fedora. You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time.

Even though RedHat/CentOS has a very large share of the Linux server market, it will suffer the same fate as Novell (had 85% of the matket), disappearing into darkness !

Mihel-André

PeteVM says: December 9, 2020 at 5:27 pm

As I predicted, RHEL is destroying CentOS, and IBM is running Red Hat into the ground in the name of profit$. Why is anyone surprised? I give Red Hat 12-18 months of life, before they become another ordinary dept of IBM, producing IBM Linux.

CentOS is dead. Time to either go back to Debian and its derivatives, or just pay for RHEL, or IBMEL, and suck it up.

JadeK says: December 9, 2020 at 6:36 pm

I am mid-migration from Rhel/Cent6 to 8. I now have to stop a major project for several hundred systems. My group will have to go back to rebuild every CentOS 8 system we've spent the last 6 months deploying.

Congrats fellas, you did it. You perfected the transition to Debian from CentOS.

Godimir Kroczweck says: December 9, 2020 at 8:21 pm

I find it kind of funny, I find it kind of sad. The dreams in which I moving 1.5K+ machines to whatever distro I yet have to find fitting for replacement to are the..

Wait. How could one with all the seriousness consider cutting down already published EOL a good idea?

I literally had to convince people to move from Ubuntu and Debian installations to CentOS for sake of stability and longer support, just for become looking like a clown now, because with single move distro deprived from both of this.

Paul R says: December 9, 2020 at 9:14 pm

Happy to donate and be part of the revolution away the Corporate vampire Squid that is IBM

Nicholas Knight says: December 9, 2020 at 9:34 pm

Red Hat's word now means nothing to me. Disagreements over future plans and technical direction are one thing, but you *lied* to us about CentOS 8's support cycle, to the detriment of *everybody*. You cost us real money relying on a promise you made, we thought, in good faith. It is now clear Red Hat no longer knows what "good faith" means, and acts only as a Trumpian vacuum of wealth.

[Dec 10, 2020] GPL bites Red Hat in the butt: they might face the emergence of a CentOS alternative due to the wave of support for such a distro

Dec 10, 2020 | blog.centos.org

Sam Callis says: December 8, 2020 at 3:58 pm

I have been using CentOS for over 10 years and one of the things I loved about it was how stable it has been. Now, instead of being a stable release, it is changing to the beta testing ground for RHEL 8.

And instead of 10 years of a support you need to update to the latest dot release. This has me, very concerned.

Sieciowski says: December 9, 2020 at 11:19 am

well, 10 years - have you ever contributed with anything for the CentOS community, or paid them a wage or at least donated some decent hardware for development or maybe just being parasite all the time and now are you surprised that someone has to buy it's your own lunches for a change?

If you think you might have done it even better why not take RH sources and make your own FreeRHos whatever distro, then support, maintain and patch all the subsequent versions for free?

Joe says: December 9, 2020 at 11:47 am

That's ridiculous. RHEL has benefitted from the free testing and corner case usage of CentOS users and made money hand-over-fist on RHEL. Shed no tears for using CentOS for free. That is the benefit of opening the core of your product.

Ljubomir Ljubojevic says: December 9, 2020 at 12:31 pm

You are missing a very important point. Goal of CentOS project was to rebuild RHEL, nothing else. If money was the problem, they could have asked for donations and it would be clear is there can be financial support for rebuild or not.

Putting entire community in front of done deal is disheartening and no one will trust Red Hat that they are pro-community, not to mention Red Hat employees that sit in CentOS board, who can trust their integrity after this fiasco?

Matt Phelps says: December 8, 2020 at 4:12 pm

This is a breach of trust from the already published timeline of CentOS 8 where the EOL was May 2029. One year's notice for such a massive change is unacceptable.

Move this approach to CentOS 9

fahrradflucht says: December 8, 2020 at 5:37 pm

This! People already started deploying CentOS 8 with the expectation of 10 years of updates. - Even a migration to RHEL 8 would imply completely reprovisioning the systems which is a big ask for systems deployed in the field.

Gregory Kurtzer says: December 8, 2020 at 4:27 pm

I am considering creating another rebuild of RHEL and may even be able to hire some people for this effort. If you are interested in helping, please join the HPCng slack (link on the website hpcng.org).

Greg (original founder of CentOS)

A says: December 8, 2020 at 7:11 pm

Not a programmer, but I'd certainly use it. I hope you get it off the ground.

Michael says: December 8, 2020 at 8:26 pm

This sounds like a great idea and getting control away from corporate entities like IBM would be helpful. Have you considered reviving the Scientific Linux project?

Bond Masuda says: December 8, 2020 at 11:53 pm

Feel free to contact me. I'm a long time RH user (since pre-RHEL when it was RHL) in both server and desktop environments. I've built and maintained some RPMs for some private projects that used CentOS as foundation. I can contribute compute and storage resources. I can program in a few different languages.

Rex says: December 9, 2020 at 3:46 am

Dear Greg,

Thank you for considering starting another RHEL rebuild. If and when you do, please consider making your new website a Brave Verified Content Creator. I earn a little bit of money every month using the Brave browser, and I end up donating it to Wikipedia every month because there are so few Brave Verified websites.

The verification process is free, and takes about 15 to 30 minutes. I believe that the Brave browser now has more than 8 million users.

dovla091 says: December 9, 2020 at 10:47 am

Wikipedia. The so called organization that get tons of money from tech oligarchs and yet the whine about we need money and support? (If you don't believe me just check their biggest donors) also they keen to be insanely biased and allow to write on their web whoever pays the most... Seriously, find other organisation to donate your money

dan says: December 9, 2020 at 4:00 am

Please keep us updated. I can't donate much, but I'm sure many would love to donate to this cause.

Chad Gregory says: December 9, 2020 at 7:21 pm

Not sure what I could do but I will keep an eye out things I could help with. This change to CentOS really pisses me off as I have stood up 2 CentOS servers for my works production environment in the last year.

Vasile M says: December 8, 2020 at 8:43 pm

LOL... CentOS is RH from 2014 to date. What you expected? As long as CentOS is so good and stable, that cuts some of RHEL sales... RH and now IBM just think of profit. It was expected, search the net for comments back in 2014.

[Dec 10, 2020] Amazon Linux 2

Dec 10, 2020 | aws.amazon.com

Amazon Linux 2 is the next generation of Amazon Linux, a Linux server operating system from Amazon Web Services (AWS). It provides a secure, stable, and high performance execution environment to develop and run cloud and enterprise applications. With Amazon Linux 2, you get an application environment that offers long term support with access to the latest innovations in the Linux ecosystem. Amazon Linux 2 is provided at no additional charge.

Amazon Linux 2 is available as an Amazon Machine Image (AMI) for use on Amazon Elastic Compute Cloud (Amazon EC2). It is also available as a Docker container image and as a virtual machine image for use on Kernel-based Virtual Machine (KVM), Oracle VM VirtualBox, Microsoft Hyper-V, and VMware ESXi. The virtual machine images can be used for on-premises development and testing. Amazon Linux 2 supports the latest Amazon EC2 features and includes packages that enable easy integration with AWS. AWS provides ongoing security and maintenance updates for Amazon Linux 2.

[Dec 10, 2020] A letter to IBM brass

Notable quotes:
"... Redhat endorsed that moral contract when you brought official support to CentOS back in 2014. ..."
"... Now that you decided to turn your back on the community, even if another RHEL fork comes out, there will be an exodus of the community. ..."
"... Also, a lot of smaller developers won't support RHEL anymore because their target weren't big companies, making less and less products available without the need of self supporting RPM builds. ..."
"... Gregory Kurtzer's fork will take time to grow, but in the meantime, people will need a clear vision of the future. ..."
"... This means that we'll now have to turn to other linux flavors, like Debian, or OpenSUSE, of which at least some have hardware vendor support too, but with a lesser lifecycle. ..."
"... I think you destroyed a large part of the RHEL / CentOS community with this move today. ..."
"... Maybe you'll get more RHEL subscriptions in the next months yielding instant profits, but the long run growth is now far more uncertain. ..."
Dec 10, 2020 | blog.centos.org

Orsiris de Jong says: December 9, 2020 at 9:41 am

Dear IBM,

As a lot of us here, I've been in the CentOS / RHEL community for more than 10 years.
Reasons of that choice were stability, long term support and good hardware vendor support.

Like many others, I've built much of my skills upon this linux flavor for years, and have been implicated into the community for numerous bug reports, bug fixes, and howto writeups.

Using CentOS was the good alternative to RHEL on a lot of non critical systems, and for smaller companies like the one I work for.

The moral contract has always been a rock solid "Community Enterprise OS" in exchange of community support, bug reports & fixes, and growing interest from developers.

Redhat endorsed that moral contract when you brought official support to CentOS back in 2014.

Now that you decided to turn your back on the community, even if another RHEL fork comes out, there will be an exodus of the community.

Also, a lot of smaller developers won't support RHEL anymore because their target weren't big companies, making less and less products available without the need of self supporting RPM builds.

This will make RHEL less and less widely used by startups, enthusiasts and others.

CentOS Stream being the upstream of RHEL, I highly doubt system architects and developers are willing to be beta testers for RHEL.

Providing a free RHEL subscription for Open Source projects just sounds like your next step to keep a bit of the exodus from happening, but I'd bet that "free" subscription will get more and more restrictions later on, pushing to a full RHEL support contract.

As a lot of people here, I won't go the Oracle way, they already did a very good job destroying other company's legacy.

Gregory Kurtzer's fork will take time to grow, but in the meantime, people will need a clear vision of the future.

This means that we'll now have to turn to other linux flavors, like Debian, or OpenSUSE, of which at least some have hardware vendor support too, but with a lesser lifecycle.

I think you destroyed a large part of the RHEL / CentOS community with this move today.

Maybe you'll get more RHEL subscriptions in the next months yielding instant profits, but the long run growth is now far more uncertain.

... ... ...

[Dec 10, 2020] CentOS will be RHEL's beta, but CentOS denies this

IBM has a history of taking over companies and turning them into junk, so I am not that surprised. I am surprised that it took IBM brass so long to kill CentOS after the Red Hat acquisition.
Notable quotes:
"... By W3Tech 's count, while Ubuntu is the most popular Linux server operating system with 47.5%, CentOS is number two with 18.8% and Debian is third, 17.5%. RHEL? It's a distant fourth with 1.8%. ..."
"... Red Hat will continue to support CentOS 7 and produce it through the remainder of the RHEL 7 life cycle . That means if you're using CentOS 7, you'll see support through June 30, 2024 ..."
Dec 10, 2020 | www.zdnet.com

I'm far from alone. By W3Tech 's count, while Ubuntu is the most popular Linux server operating system with 47.5%, CentOS is number two with 18.8% and Debian is third, 17.5%. RHEL? It's a distant fourth with 1.8%.

If you think you just realized why Red Hat might want to remove CentOS from the server playing field, you're far from the first to think that.

Red Hat will continue to support CentOS 7 and produce it through the remainder of the RHEL 7 life cycle . That means if you're using CentOS 7, you'll see support through June 30, 2024

[Dec 10, 2020] Time to bring back Scientific Linux

Notable quotes:
"... I bet Fermilab are thrilled back in 2019 they announced that they wouldn't develop Scientific Linux 8, and focus on CentOS 8 instead. ..."
Dec 10, 2020 | www.reddit.com

I bet Fermilab are thrilled back in 2019 they announced that they wouldn't develop Scientific Linux 8, and focus on CentOS 8 instead. https://listserv.fnal.gov/scripts/wa.exe?A2=SCIENTIFIC-LINUX-ANNOUNCE;11d6001.1904 l

clickwir 19 points· 1 day ago

Time to bring back Scientific Linux.

[Dec 10, 2020] CentOS Project: Embraced, extended, extinguished.

Notable quotes:
"... My gut feeling is that something like Scientific Linux will make a return and current CentOS users will just use that. ..."
Dec 10, 2020 | www.reddit.com

KugelKurt 18 points· 1 day ago

I wonder what Red Hat's plan is WRT companies like Blackmagic Design that ship CentOS as part of their studio equipment.

The cost of a RHEL license isn't the issue when the overall cost of the equipment is in the tens of thousands but unless I missed a change in Red Hat's trademark policy, Blackmagic cannot distribute a modified version of RHEL and without removing all trademarks first.

I don't think a rolling release distribution is what BMD wants.

My gut feeling is that something like Scientific Linux will make a return and current CentOS users will just use that.

[Dec 10, 2020] Oracle Linux -- A better alternative to CentOS

Currently limited to CentOS 6 and CentOS 7.
Dec 10, 2020 | linux.oracle.com
Oracle Linux: A better alternative to CentOS

We firmly believe that Oracle Linux is the best Linux distribution on the market today. It's reliable, it's affordable, it's 100% compatible with your existing applications, and it gives you access to some of the most cutting-edge innovations in Linux like Ksplice and DTrace.

But if you're here, you're a CentOS user. Which means that you don't pay for a distribution at all, for at least some of your systems. So even if we made the best paid distribution in the world (and we think we do), we can't actually get it to you... or can we?

We're putting Oracle Linux in your hands by doing two things:

We think you'll like what you find, and we'd love for you to give it a try.

FAQ

Wait, doesn't Oracle Linux cost money?
Oracle Linux support costs money. If you just want the software, it's 100% free. And it's all in our yum repo at yum.oracle.com . Major releases, errata, the whole shebang. Free source code, free binaries, free updates, freely redistributable, free for production use. Yes, we know that this is Oracle, but it's actually free. Seriously.
Is this just another CentOS?
Inasmuch as they're both 100% binary-compatible with Red Hat Enterprise Linux, yes, this is just like CentOS. Your applications will continue to work without any modification whatsoever. However, there are several important differences that make Oracle Linux far superior to CentOS.
How is this better than CentOS?
Well, for one, you're getting the exact same bits our paying enterprise customers are getting . So that means a few things. Importantly, it means virtually no delay between when Red Hat releases a kernel and when Oracle Linux does:


[Chart: delay in kernel security advisories since January 2018, CentOS vs. Oracle Linux - CentOS shows large fluctuations in delays]

So if you don't want to risk another CentOS delay, Oracle Linux is a better alternative for you. It turns out that our enterprise customers don't like to wait for updates -- and neither should you.

What about the code quality?
Again, you're running the exact same code that our enterprise customers are, so it has to be rock-solid. Unlike CentOS, we have a large paid team of developers, QA, and support engineers that work to make sure this is reliable.
What if I want support?
If you're running Oracle Linux and want support, you can purchase a support contract from us (and it's significantly cheaper than support from Red Hat). No reinstallation, no nothing -- remember, you're running the same code as our customers.

Contrast that with the CentOS/RHEL story. If you find yourself needing to buy support, have fun reinstalling your system with RHEL before anyone will talk to you.

Why are you doing this?
This is not some gimmick to get you running Oracle Linux so that you buy support from us. If you're perfectly happy running without a support contract, so are we. We're delighted that you're running Oracle Linux instead of something else.

At the end of the day, we're proud of the work we put into Oracle Linux. We think we have the most compelling Linux offering out there, and we want more people to experience it.

How do I make the switch?
Run the following as root:

curl -O https://linux.oracle.com/switch/centos2ol.sh
sh centos2ol.sh

What versions of CentOS can I switch?
centos2ol.sh can convert your CentOS 6 and 7 systems to Oracle Linux.
What does the script do?
The script has two main functions: it switches your yum configuration to use the Oracle Linux yum server to update some core packages and installs the latest Oracle Unbreakable Enterprise Kernel. That's it! You won't even need to restart after switching, but we recommend you do to take advantage of UEK.
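A quick sanity check after running the script and rebooting (a suggestion, not part of Oracle's FAQ) is to confirm the release file and the running kernel:

cat /etc/oracle-release
uname -r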
Is it safe?
The centos2ol.sh script takes precautions to back up and restore any repository files it changes, so if it does not work on your system it will leave it in working order. If you encounter any issues, please get in touch with us by emailing oraclelinux-info_ww_grp@oracle.com .

[Dec 10, 2020] The demise of CentOS and independent training providers

Dec 10, 2020 | blog.centos.org

Anthony Mwai says: December 8, 2020 at 8:44 pm

IBM is messing up RedHat after the take over last year. This is the most unfortunate news to the Free Open-Source community. Companies have been using CentOS as a testing bed before committing to purchase RHEL subscription licenses.

We need to rethink before rolling out RedHat/CentOS 8 training in our Centre.

Joe says: December 9, 2020 at 1:03 pm

You can use Oracle Linux in exactly the same way as you did CentOS except that you have the option of buying support without reinstalling a "commercial" variant.

Everything's in the public repos except a few addons like ksplice. You don't even have to go through the e-delivery to download the ISOs any more, they're all linked from yum.oracle.com

TechSmurf says: December 9, 2020 at 12:38 am

Not likely. Oracle Linux has extensive use by paying Oracle customers as a host OS for their database software and in general purposes for Oracle Cloud Infrastructure.

Oracle customers would be even less thrilled about Streams than CentOS users. I hate to admit it, but Oracle has the opportunity to take a significant chunk of the CentOS user base if they don't do anything Oracle-ish, myself included.

I'll be pretty surprised if they don't completely destroy their own windfall opportunity, though.

David Anderson says: December 8, 2020 at 7:16 pm

"OEL is literally a rebranded RH."

So, what's not to like? I also was under the impression that OEL was a paid offering, but apparently this is wrong - https://www.oracle.com/ar/a/ocom/docs/linux/oracle-linux-ds-1985973.pdf - "Oracle Linux is easy to download and completely free to use, distribute, and update."

Bill Murmor says: December 9, 2020 at 5:04 pm

So, what's the problem?

IBM has discontinued CentOS. Oracle is producing a working replacement for CentOS. If, at some point, Oracle attacks their product's users in the way IBM has here, then one can move to Debian, but for now, it's a working solution, as CentOS no longer is.

k1 says: December 9, 2020 at 7:58 pm

Because it's a trust issue. RedHat has lost trust. Oracle never had it in the first place.

[Dec 10, 2020] Oracle has a converter script for CentOS 7. And here is a quick hack to convert CentOS 8 to Oracle Linux

You can use Oracle Linux exactly like CentOS, only better
Ang says: December 9, 2020 at 5:04 pm "I never thought we'd see the day Oracle is more trustworthy than RedHat/IBM. But I guess such things do happen with time..."
Notable quotes:
"... The link says that you don't have to pay for Oracle Linux . So switching to it from CentOS 8 could be a very easy option. ..."
"... this quick n'dirty hack worked fine to convert centos 8 to oracle linux 8, ymmv: ..."
Dec 10, 2020 | blog.centos.org

Charlie F. says: December 8, 2020 at 6:37 pm

Oracle has a converter script for CentOS 7, and they will sell you OS support after you run it:

https://linux.oracle.com/switch/centos/

It would be nice if Oracle would update that for CentOS 8.

David Anderson says: December 8, 2020 at 7:15 pm

The link says that you don't have to pay for Oracle Linux . So switching to it from CentOS 8 could be a very easy option.

Max Grü says: December 9, 2020 at 2:05 pm

Oracle Linux is free. The only thing that costs money is support for it. I quote "Yes, we know that this is Oracle, but it's actually free. Seriously."

Phil says: December 9, 2020 at 2:10 pm

this quick n'dirty hack worked fine to convert centos 8 to oracle linux 8, ymmv:

# Oracle Linux 8 BaseOS package repository
repobase=http://yum.oracle.com/repo/OracleLinux/OL8/baseos/latest/x86_64/getPackage
# download the Oracle release/repo RPMs
wget \
${repobase}/redhat-release-8.3-1.0.0.1.el8.x86_64.rpm \
${repobase}/oracle-release-el8-1.0-1.el8.x86_64.rpm \
${repobase}/oraclelinux-release-8.3-1.0.4.el8.x86_64.rpm \
${repobase}/oraclelinux-release-el8-1.0-9.el8.x86_64.rpm
# remove the CentOS release package, ignoring its dependencies
rpm -e centos-linux-release --nodeps
# install the downloaded Oracle release packages
dnf --disablerepo='*' localinstall ./*rpm
# blank the ociregion dnf variable so repo URLs resolve to the public yum.oracle.com server
:> /etc/dnf/vars/ociregion
# drop the CentOS repository definitions
dnf remove centos-linux-repos
# resynchronize every installed package against the Oracle Linux repos
dnf --refresh distro-sync
# since I wanted to try out the unbreakable enterprise kernel:
dnf install kernel-uek
reboot
dnf remove kernel

[Dec 10, 2020] Linux Subshells for Beginners With Examples - LinuxConfig.org

Dec 10, 2020 | linuxconfig.org

Bash allows two different subshell syntaxes, namely $() and back-tick surrounded statements. Let's look at some easy examples to start:

$ echo '$(echo 'a')'
$(echo a)
$ echo "$(echo 'a')"
a
$ echo "a$(echo 'b')c"
abc
$ echo "a`echo 'b'`c"
abc

In the first command, as an example, we used single quotes ('). This resulted in our subshell command, inside the single quotes, being interpreted as literal text instead of a command. This is standard Bash: ' indicates literal, " indicates that the string will be parsed for subshells and variables.

In the second command, we swap the ' to " and thus the string is parsed for actual commands and variables. The result is that a subshell is started, thanks to our subshell syntax $(), and the command inside the subshell, echo 'a', is executed, producing a, which is then inserted into the overarching / top-level echo. The command at that stage can be read as echo "a" and thus the output is a.

In the third command, we further expand this to make it clearer how subshells work in-context. We echo the letter b inside the subshell, and this is joined on the left and the right by the letters a and c yielding the overall output to be abc in a similar fashion to the second command.

In the fourth and last command, we exemplify the alternative Bash subshell syntax of using back-ticks instead of $() . It is important to know that $() is the preferred syntax, and that in some remote cases the back-tick based syntax may yield some parsing errors where the $() does not. I would thus strongly encourage you to always use the $() syntax for subshells, and this is also what we will be using in the following examples.
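
A quick illustration of why $() is preferred: it nests cleanly, while nested back-ticks have to be escaped. Both pairs below should print the same thing (a minimal sketch):

$ echo "$(echo "$(echo nested)")"
nested
$ echo "`echo \`echo nested\``"
nested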

Example 2: A little more complex
$ touch a
$ echo "-$(ls [a-z])"
-a
$ echo "-=-||$(ls [a-z] | xargs ls -l)||-=-"
-=-||-rw-rw-r-- 1 roel roel 0 Sep  5 09:26 a||-=-

Here, we first create an empty file by using the touch a command. Subsequently, we use echo to output something which our subshell $(ls [a-z]) will generate. Sure, we can execute the ls directly and yield more or less the same result, but note how we are adding - to the output as a prefix.

In the final command, we insert some characters at the front and end of the echo command, which makes the output look a bit nicer. We use a subshell to first find the a file we created earlier ( ls [a-z] ) and then - still inside the subshell - pass the results of this command (which would be only a, literally, i.e. the file we created in the first command) to ls -l using the pipe ( | ) and the xargs command. For more information on xargs, please see our articles xargs for beginners with examples and multi threaded xargs with examples .

Example 3: Double quotes inside subshells and sub-subshells!
echo "$(echo "$(echo "it works")" | sed 's|it|it surely|')"
it surely works


Cool, no? Here we see that double quotes can be used inside the subshell without generating any parsing errors. We also see how a subshell can be nested inside another subshell. Are you able to parse the syntax? The easiest way is to start "in the middle or core of all subshells", which in this case is the simple echo "it works" .

This command will output it works as a result of the subshell call $(echo "it works") . Picture it works in place of the subshell, i.e.

echo "$(echo "it works" | sed 's|it|it surely|')"
it surely works

This looks simpler already. Next, it is helpful to know that the sed command will do a substitution (thanks to the s command just before the | separator) of the text it to it surely . You can read the sed command as: replace __it__ with __it surely__. The output of the subshell will thus be it surely works, i.e.

echo "it surely works"
it surely works
Conclusion

In this article, we have seen that subshells surely work (pun intended), and that they can be used in a wide variety of circumstances, due to their ability to be inserted inline and within the context of the overarching command. Subshells are very powerful, and once you start using them, well, there will likely be no stopping. Very soon you will be writing something like:

$ VAR="goodbye"; echo "thank $(echo "${VAR}" | sed 's|^| and |')" | sed 's|k |k you|'

This one is for you to try and play around with! Thank you and goodbye

[Dec 10, 2020] Top 10 Awesome Linux Screen Tricks by Isaias Irizarry

May 13, 2011 | blog.urfix.com

Screen or as I like to refer to it "Admin's little helper"
Screen is a window manager that multiplexes a physical terminal between several processes

here are a couple of quick reasons you might use screen

Let's say you have an unreliable internet connection: with screen, if you get knocked out of your current session, you can always reconnect to it.

Or let's say you need more terminals; instead of opening a new terminal or a new tab, just create a new terminal inside of screen.

Here are the screen shortcuts to help you on your way: Screen shortcuts

and here are some of the Top 10 Awesome Linux Screen tips urfix.com uses all the time, if not daily.

1) Attach screen over ssh
ssh -t remote_host screen -r

Directly attach a remote screen session (saves a useless parent bash process)

2) Share a terminal screen with others
% screen -r someuser/
3) Triple monitoring in screen
tmpfile=$(mktemp) && \
echo -e 'startup_message off\nscreen -t top htop\nsplit\nfocus\nscreen -t nethogs nethogs wlan0\nsplit\nfocus\nscreen -t iotop iotop' > "$tmpfile" && \
sudo screen -c "$tmpfile"

This command starts screen with 'htop', 'nethogs' and 'iotop' in split-screen. You have to have these three commands installed (of course) and specify the interface for nethogs; mine is wlan0. I could have extended the command to pick up the interface from the default route, but this way is simpler.

htop is a wonderful top replacement with many interactive commands and configuration options. nethogs is a program which tells which processes are using the most bandwidth. iotop tells which processes are using the most I/O.

The command creates a temporary "screenrc" file which it uses for doing the triple-monitoring. You can see several examples of screenrc files here: http://www.softpanorama.org/Utilities/Screen/screenrc_examples.shtml
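
If you would rather keep that layout around than rebuild it from a one-liner, the same thing can live in a small screenrc of its own (a sketch; htop, nethogs, iotop and the wlan0 interface are the same assumptions as in the command above):

# ~/.screenrc.monitor -- triple monitoring layout
startup_message off
screen -t top htop
split
focus
screen -t nethogs nethogs wlan0
split
focus
screen -t iotop iotop

Then start it with: sudo screen -c ~/.screenrc.monitor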

4) Share a 'screen'-session
screen -x

After person A starts his screen session with `screen`, person B can attach to the screen of person A with `screen -x`. Good to know if you need to give or receive support.

5) Start screen in detached mode
screen -d -m [<command>]

Start screen in detached mode, i.e., already running in the background. The command is optional, but what would be the purpose of starting a blank screen session that way?
It's useful when invoking it from a script (I use it to run many wget downloads in parallel, for example).

6) Resume a detached screen session, resizing to fit the current terminal
screen -raAd

By default, screen tries to restore its old window sizes when attaching to resizable terminals. This command is the command-line equivalent of typing ^A F to fit an open screen session to the window.

7) use screen as a terminal emulator to connect to serial consoles
screen /dev/tty<device> 9600

Use GNU screen as a terminal emulator for anything serial-console related:

screen /dev/tty<device> <baud>

e.g.

screen /dev/ttyS0 9600

8) ssh and attach to a screen in one line.
ssh -t user@host screen -x <screen name>

If you know the benefits of screen, then this might come in handy for you. Instead of ssh'ing into a machine and then running a screen command, this can all be done in one line. Just have the person on the machine you're ssh'ing into run something like
screen -S debug
Then you would run
ssh -t user@host screen -x debug
and be attached to the same screen session.

9) connect to all screen instances running
screen -ls | grep pts | gawk '{ split($1, x, "."); print x[1] }' | while read i; do gnome-terminal -e screen\ -dx\ $i; done

connects to all the screen instances running.

10) Quick enter into a single screen session
alias screenr='screen -r $(screen -ls | egrep -o -e '[0-9]+' | head -n 1)'

There you have 'em, folks: the top 10 screen commands. Enjoy!

[Dec 10, 2020] Possibility to change only year or only month in date

Jan 01, 2017 | unix.stackexchange.com


Christian Severin , 2017-09-29 09:47:52

You can use e.g. date --set='-2 years' to set the clock back two years, leaving all other elements identical. You can change month and day of month the same way. I haven't checked what happens if that calculation results in a datetime that doesn't actually exist, e.g. during a DST switchover, but the behaviour ought to be identical to the usual "set both date and time to concrete values" behaviour. – Christian Severin Sep 29 '17 at 9:47

Michael Homer , 2014-08-22 09:44:23

Use date -s :
date -s '2014-12-25 12:34:56'

Run that as root or under sudo . Changing only one of the year/month/day is more of a challenge and will involve repeating bits of the current date. There are also GUI date tools built in to the major desktop environments, usually accessed through the clock.

To change only part of the time, you can use command substitution in the date string:

date -s "2014-12-25 $(date +%H:%M:%S)"

will change the date, but keep the time. See man date for formatting details to construct other combinations: the individual components are %Y , %m , %d , %H , %M , and %S .
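
For example, to change only the year while keeping the current month, day, and time, you can combine a literal year with the format specifiers above (run as root; the year 2015 is just an illustration):

date -s "2015-$(date +%m-%d) $(date +%H:%M:%S)"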


I don't want to change the time – SHW Aug 22 '14 at 9:51

Michael Homer , 2014-08-22 09:55:00

There's no option to do that. You can use date -s "2014-12-25 $(date +%H:%M:%S)" to change the date and reuse the current time, though. – Michael Homer Aug 22 '14 at 9:55

chaos , 2014-08-22 09:59:58

System time

You can use date to set the system date. The GNU implementation of date (as found on most non-embedded Linux-based systems) accepts many different formats to set the time; here are a few examples:

set only the year:

date -s 'next year'
date -s 'last year'

set only the month:

date -s 'last month'
date -s 'next month'

set only the day:

date -s 'next day'
date -s 'tomorrow'
date -s 'last day'
date -s 'yesterday'
date -s 'friday'

set all together:

date -s '2009-02-13 11:31:30' #that's a magical timestamp

Hardware time

Now the system time is set, but you may want to sync it with the hardware clock:

Use --show to print the hardware time:

hwclock --show

You can set the hardware clock to the current system time:

hwclock --systohc

Or the system time to the hardware clock

hwclock --hctosys
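
Putting the two together, a typical sequence when you want the new date to survive a reboot is to set the system clock and then push it to the hardware clock:

date -s '2014-12-25 12:34:56'
hwclock --systohc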


garethTheRed , 2014-08-22 09:57:11

You change the date with the date command. However, the command expects a full date as the argument:
# date -s "20141022 09:45"
Wed Oct 22 09:45:00 BST 2014

To change part of the date, output the current date with the date part that you want to change as a string and all others as date formatting variables. Then pass that to the date -s command to set it:

# date -s "$(date +'%Y12%d %H:%M')"
Mon Dec 22 10:55:03 GMT 2014

changes the month to the 12th month - December.

The date formats are: %Y (year), %m (month), %d (day), %H (hour), %M (minute), and %S (second).

Balmipour , 2016-03-23 09:10:21

For those like me running ESXi 5.1, here's what the system answered:
~ # date -s "2016-03-23 09:56:00"
date: invalid date '2016-03-23 09:56:00'

I had to use a specific ESXi command instead:

esxcli system time set  -y 2016 -M 03 -d 23  -H 10 -m 05 -s 00

Hope it helps !


Brook Oldre , 2017-09-26 20:03:34

I used the date command and the time format listed below to successfully set the date from the terminal shell on Android Things, which uses the Linux kernel.

date 092615002017.00

The format is MMDDHHMMYYYY.SS:

MM   - month  - 09
DD   - day    - 26
HH   - hour   - 15
MM   - minute - 00
YYYY - year   - 2017
.SS  - second - 00


[Dec 09, 2020] Is Oracle A Real Alternative To CentOS

Notable quotes:
"... massive amount of extra packages and full rebuild of EPEL (same link): https://yum.oracle.com/oracle-linux-8.html ..."
Dec 09, 2020 | centosfaq.org

Is Oracle A Real Alternative To CentOS? December 8, 2020, Frank Cox

Is Oracle a real alternative to CentOS? I'm asking because I genuinely don't know; I've never paid any attention to Oracle's Linux offering before now.

But today I've seen a couple of the folks here mention Oracle Linux and I see that Oracle even offers a script to convert CentOS 7 to Oracle. Nothing about CentOS 8 in that script, though.

https://linux.oracle.com/switch/centos/

That page seems to say that Oracle Linux is everything that CentOS was prior to today's announcement.

But someone else here just said that the first thing Oracle Linux does is to sign you up for an Oracle account.

So, for people who know a lot more about these things than I do, what's the downside of using Oracle Linux versus CentOS? I assume that things like epel/rpmfusion/etc will work just as they do under CentOS since it's supposed to be bit-for-bit compatible like CentOS was. What does the "sign up with Oracle" stuff actually do, and can you cancel, avoid, or strip it out if you don't want it?

Based on my extremely limited knowledge around Oracle Linux, it sounds like that might be a go-to solution for CentOS refugees.

But is it, really?

Karl Vogel says: December 9, 2020 at 3:05 am

... ... ..

Go to https://linux.oracle.com/switch/CentOS/ , poke around a bit, and you end up here:
https://yum.oracle.com/oracle-linux-downloads.html

I just went to the ISO page and I can grab whatever I like without signing up for anything, so nothing's changed since I first used it.

... ... ...

Gianluca Cecchi says: December 9, 2020 at 3:30 am

[snip]

Only to point out that while in CentOS (8.3, but the same in 7.x) the situation is like this:

[g.cecchi@skull8 ~]$ ll /etc/redhat-release /etc/centos-release
-rw-r--r-- 1 root root 30 Nov 10 16:49 /etc/centos-release
lrwxrwxrwx 1 root root 14 Nov 10 16:49 /etc/redhat-release -> centos-release
[g.cecchi@skull8 ~]$ cat /etc/centos-release
CentOS Linux release 8.3.2011

in Oracle Linux (eg 7.7) you get two different files:

$ ll /etc/redhat-release /etc/oracle-release
-rw-r--r-- 1 root root 32 Aug  8  2019 /etc/oracle-release
-rw-r--r-- 1 root root 52 Aug  8  2019 /etc/redhat-release
$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.7 (Maipo)
$ cat /etc/oracle-release
Oracle Linux Server release 7.7

This is generally done so that software that is officially certified only against the upstream enterprise vendor, and that tests the contents of the redhat-release file, is still satisfied. Using the lsb_release command on an Oracle Linux 7.6 machine:

# lsb_release -a
LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: OracleServer
Description:    Oracle Linux Server release 7.6
Release:        7.6
Codename:       n/a
#

Gianluca
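
A small side note for scripting: based on the release files Gianluca describes above, a quick sketch to see which flavor a box is running (paths as shown in his comment) could be:

if [ -f /etc/oracle-release ]; then
    cat /etc/oracle-release        # Oracle Linux
elif [ -f /etc/centos-release ]; then
    cat /etc/centos-release        # CentOS
elif [ -f /etc/redhat-release ]; then
    cat /etc/redhat-release        # RHEL or another rebuild
fi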

Rainer Traut says: December 9, 2020 at 4:18 am

On 08.12.20 at 18:54, Frank Cox wrote:

Yes, it is better than CentOS and in some aspects better than RHEL:

– faster security updates than CentOS, directly behind RHEL
– better kernels than RHEL and CentOS (UEKs) with more features
– free to download (no subscription needed):
https://yum.oracle.com/oracle-linux-isos.html
– free to use:
https://yum.oracle.com/oracle-linux-8.html
– massive amount of extra packages and a full rebuild of EPEL (same link): https://yum.oracle.com/oracle-linux-8.html

Rainer Traut says: December 9, 2020 at 4:26 am

Hi,

On 08.12.20 at 19:03, Jon Pruente wrote:

KVM is a subscription feature. They want you to run Oracle VM Server for x86 (which is based on Xen) so they can try to upsell you to use the Oracle Cloud. There's other things, but that stood out immediately.

Oracle Linux FAQ (PDF): https://www.oracle.com/a/ocom/docs/027617.pdf

There is no subscription needed. All needed repositories for the oVirt based virtualization are freely available.

https://docs.oracle.com/en/virtualization/oracle-linux-virtualization-manager/getstart/manager-install.html#manager-install-prepare

Rainer Traut says: December 10, 2020 at 4:40 am

On 09.12.20 at 17:52, Frank Cox wrote:

I'll try to answer to the best of my knowledge.

I have an oracle account but never used it for/with Oracle linux. There are oracle communities where you need an oracle account: https://community.oracle.com/tech/apps-infra/categories/oracle_linux

Niki Kovacs says: December 10, 2020 at 10:22 am

On 10/12/2020 at 17:18, Frank Cox wrote:

That's it. I know Oracle's history, but I think for Oracle Linux, they may be much better than their reputation. I'm currently fiddling around with it, and I like it very much. Plus there's a nice script to turn an existing CentOS installation into an Oracle Linux system.

Cheers,

Niki

--
Microlinux – Solutions informatiques durables
7, place de l'église – 30730 Montpezat Site : https://www.microlinux.fr Blog : https://blog.microlinux.fr Mail : info@microlinux.fr Tél. : 04 66 63 10 32
Mob. : 06 51 80 12 12

Ljubomir Ljubojevic says: December 10, 2020 at 12:53 pm

There is always Springdale Linux made by Princeton University: https://puias.math.ias.edu/

Johnny Hughes says: December 10, 2020 at 4:10 pm

On 10.12.20 at 19:53, Ljubomir Ljubojevic wrote:

I did a conversion of a test webserver from C8 to Springdale. It went smoothly.

Niki Kovacs says: December 12, 2020 at 11:29 am

On 08/12/2020 at 18:54, Frank Cox wrote:

I spent the last three days experimenting with it. Here's my take on it: https://blog.microlinux.fr/migration-CentOS-oracle-linux/

tl;dr: Very nice if you don't have any qualms about the company.

Cheers,

Niki

--
Microlinux – Solutions informatiques durables 7, place de l'église – 30730 Montpezat Site : https://www.microlinux.fr Blog : https://blog.microlinux.fr Mail : info@microlinux.fr Tél. : 04 66 63 10 32
Mob. : 06 51 80 12 12

Frank Cox says: December 12, 2020 at 11:52 am

That's a really excellent article, Nicholas. Thanks ever so much for posting about your experience.

Peter Huebner says: December 15, 2020 at 5:07 am

On Tuesday, 15.12.2020 at 10:14 +0100, Ruslanas Gžibovskis wrote:

According to the Oracle license terms and official statements, it is "free to download, use and share. There is no license cost, no need for a contract, and no usage audits."

Recommendation only: "For business-critical infrastructure, consider Oracle Linux Support." Only optional, not a mandatory requirement. see: https://www.oracle.com/linux

No need for such a construct. Oracle Linux can be used on any production system without the legal requirement to obtain an extra commercial license. Same as with CentOS.

So Oracle Linux can currently be used free as in "free beer" for any system, even for commercial purposes. Nevertheless, Oracle can change those license terms in the future, but this applies as well to all other company-backed Linux distributions.
--
Peter Huebner

[Nov 29, 2020] Provisioning a system

Nov 29, 2020 | opensource.com

We've gone over several things you can do with Ansible on your system, but we haven't yet discussed how to provision a system. Here's an example of provisioning a virtual machine (VM) with the OpenStack cloud solution.

- name: create a VM in openstack
  os_server:
    name: cloudera-namenode
    state: present
    cloud: openstack
    region_name: andromeda
    image: 923569a-c777-4g52-t3y9-cxvhl86zx345
    flavor_ram: 20146
    flavor: big
    auto_ip: yes
    volumes: cloudera-namenode

All OpenStack modules start with os, which makes it easier to find them. The above configuration uses the os_server module, which lets you add or remove an instance. It includes the name of the VM, its state, its cloud options, and how it authenticates to the API. More information about cloud.yml is available in the OpenStack docs, but if you don't want to use cloud.yml, you can use a dictionary that lists your credentials using the auth option. If you want to delete the VM, just change state: to absent.

Say you have a list of servers you shut down because you couldn't figure out how to get the applications working, and you want to start them again. You can use os_server_action to restart them (or rebuild them if you want to start from scratch).

Here is an example that starts the server and tells the modules the name of the instance:

- name: restart some servers
  os_server_action:
    action: start
    cloud: openstack
    region_name: andromeda
    server: cloudera-namenode

Most OpenStack modules use similar options. Therefore, to rebuild the server, we can use the same options but change the action to rebuild and add the image we want it to use:

os_server_action:
  action: rebuild
  image: 923569a-c777-4g52-t3y9-cxvhl86zx345

[Nov 29, 2020] bootstrap.yml

Nov 29, 2020 | opensource.com

For this laptop experiment, I decided to use Debian 32-bit as my starting point, as it seemed to work best on my older hardware. The bootstrap YAML script is intended to take a bare-minimal OS install and bring it up to some standard. It relies on a non-root account to be available over SSH and little else. Since a minimal OS install usually contains very little that is useful to Ansible, I use the following to hit one host and prompt me to log in with privilege escalation:

$ ansible-playbook bootstrap.yml -i '192.168.0.100,' -u jfarrell -Kk

The script makes use of Ansible's raw module to set some base requirements. It ensures Python is available, upgrades the OS, sets up an Ansible control account, transfers SSH keys, and configures sudo privilege escalation. When bootstrap completes, everything should be in place to have this node fully participate in my larger Ansible inventory. I've found that bootstrapping bare-minimum OS installs is nuanced (if there is interest, I'll write another article on this topic).
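
The raw module can also be exercised ad hoc before the playbook exists; for example, something along these lines makes sure a Python interpreter is present on a Debian-style minimal install (a sketch reusing the same inventory and user assumptions as the command above):

$ ansible all -i '192.168.0.100,' -u jfarrell -b -Kk -m raw \
    -a "test -e /usr/bin/python3 || (apt-get update && apt-get install -y python3)"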

The account YAML setup script is used to set up (or reset) user accounts for each family member. This keeps user IDs (UIDs) and group IDs (GIDs) consistent across the small number of machines we have, and it can be used to fix locked accounts when needed. Yes, I know I could have set up Network Information Service or LDAP authentication, but the number of accounts I have is very small, and I prefer to keep these systems very simple. Here is an excerpt I found especially useful for this:

---
- name: Set user accounts
  hosts: all
  gather_facts: false
  become: yes
  vars_prompt:
    - name: passwd
      prompt: "Enter the desired ansible password:"
      private: yes

  tasks:
    - name: Add child 1 account
      user:
        state: present
        name: child1
        password: "{{ passwd | password_hash('sha512') }}"
        comment: Child One
        uid: 888
        group: users
        shell: /bin/bash
        generate_ssh_key: yes
        ssh_key_bits: 2048
        update_password: always
        create_home: yes

The vars_prompt section prompts me for a password, which is put to a Jinja2 transformation to produce the desired password hash. This means I don't need to hardcode passwords into the YAML file and can run it to change passwords as needed.

The software installation YAML file is still evolving. It includes a base set of utilities for the sysadmin and then the stuff my users need. This mostly consists of ensuring that the same graphical user interface (GUI) interface and all the same programs, games, and media files are installed on each machine. Here is a small excerpt of the software for my young children:

- name: Install kids software
  apt:
    name: "{{ packages }}"
    state: present
  vars:
    packages:
      - lxde
      - childsplay
      - tuxpaint
      - tuxtype
      - pysycache
      - pysiogame
      - lmemory
      - bouncy

I created these three Ansible scripts using a virtual machine. When they were perfect, I tested them on the D620. Then converting the Mini 9 was a snap; I simply loaded the same minimal Debian install then ran the bootstrap, accounts, and software configurations. Both systems then functioned identically.

For a while, both sisters enjoyed their respective computers, comparing usage and exploring software features.

The moment of truth

A few weeks later came the inevitable. My older daughter finally came to the conclusion that her pink Dell Mini 9 was underpowered. Her sister's D620 had superior power and screen real estate. YouTube was the new rage, and the Mini 9 could not keep up. As you can guess, the poor Mini 9 fell into disuse; she wanted a new machine, and sharing her younger sister's would not do.

I had another D620 in my pile. I replaced the BIOS battery, gave it a new SSD, and upgraded the RAM. Another perfect example of breathing new life into old hardware.

I pulled my Ansible scripts from source control, and everything I needed was right there: bootstrap, account setup, and software. By this time, I had forgotten a lot of the specific software installation information. But details like account UIDs and all the packages to install were all clearly documented and ready for use. While I surely could have figured it out by looking at my other machines, there was no need to spend the time! Ansible had it all clearly laid out in YAML.

Not only was the YAML documentation valuable, but Ansible's automation made short work of the new install. The minimal Debian OS install from USB stick took about 15 minutes. The subsequent shape up of the system using Ansible for end-user deployment only took another nine minutes. End-user acceptance testing was successful, and a new era of computing calmness was brought to my family (other parents will understand!).

Conclusion

Taking the time to learn and practice Ansible with this exercise showed me the true value of its automation and documentation abilities. Spending a few hours figuring out the specifics for the first example saves time whenever I need to provision or fix a machine. The YAML is clear, easy to read, and -- thanks to Ansible's idempotency -- easy to test and refine over time. When I have new ideas or my children have new requests, using Ansible to control a local virtual machine for testing is a valuable time-saving tool.

Doing sysadmin tasks in your free time can be fun. Spending the time to automate and document your work pays rewards in the future; instead of needing to investigate and relearn a bunch of things you've already solved, Ansible keeps your work documented and ready to apply so you can move onto other, newer fun things!

[Nov 25, 2020] What you need to know about Ansible modules by Jairo da Silva Junior

Mar 04, 2019 | opensource.com

Ansible works by connecting to nodes and sending small programs called modules to be executed remotely. This makes it a push architecture, where configuration is pushed from Ansible to servers without agents, as opposed to the pull model, common in agent-based configuration management systems, where configuration is pulled.

These modules are mapped to resources and their respective states , which are represented in YAML files. They enable you to manage virtually everything that has an API, CLI, or configuration file you can interact with, including network devices like load balancers, switches, firewalls, container orchestrators, containers themselves, and even virtual machine instances in a hypervisor or in a public (e.g., AWS, GCE, Azure) and/or private (e.g., OpenStack, CloudStack) cloud, as well as storage and security appliances and system configuration.

With Ansible's batteries-included model, hundreds of modules are included and any task in a playbook has a module behind it.

The contract for building modules is simple: JSON on stdout. The configurations declared in YAML files are delivered over the network via SSH/WinRM -- or any other connection plugin -- as small scripts to be executed on the target server(s). Modules can be written in any language capable of returning JSON, although most Ansible modules (except the Windows ones, which use PowerShell) are written in Python using the Ansible API (this eases the development of new modules).
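
To make the "JSON on stdout" contract concrete, here is a deliberately tiny sketch of an old-style (non-Python) module written in shell. By convention such modules receive the path to an arguments file containing key=value pairs as their first argument and must print a single JSON object; the file name and the msg parameter below are made up for illustration:

#!/bin/sh
# library/echo_msg.sh -- hypothetical "any language, JSON on stdout" module
ARGS_FILE="$1"                                         # Ansible passes the args file path as $1
MSG=$(sed -n 's/.*msg=\([^ ]*\).*/\1/p' "$ARGS_FILE")  # crude key=value parsing, good enough for a demo
printf '{"changed": false, "msg": "%s"}\n' "${MSG:-hello}"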

Modules are one way of expanding Ansible capabilities. Other alternatives, like dynamic inventories and plugins, can also increase Ansible's power. It's important to know about them so you know when to use one instead of the other.

Plugins are divided into several categories with distinct goals, like Action, Cache, Callback, Connection, Filters, Lookup, and Vars. The most popular plugins are:

Ansible's official docs are a good resource on developing plugins .

When should you develop a module?

Although many modules are delivered with Ansible, there is a chance that your problem is not yet covered or it's something too specific -- for example, a solution that might make sense only in your organization. Fortunately, the official docs provide excellent guidelines on developing modules .

IMPORTANT: Before you start working on something new, always check for open pull requests, ask developers at #ansible-devel (IRC/Freenode), or search the development list and/or existing working groups to see if a module exists or is in development.

Signs that you need a new module instead of using an existing one include:

In the ideal scenario, the tool or service already has an API or CLI for management, and it returns some sort of structured data (JSON, XML, YAML).

Identifying good and bad playbooks
"Make love, but don't make a shell script in YAML."

So, what makes a bad playbook?

- name: Read a remote resource
  command: "curl -v http://xpto/resource/abc"
  register: resource
  changed_when: False

- name: Create a resource in case it does not exist
  command: "curl -X POST http://xpto/resource/abc -d '{ config:{ client: xyz, url: http://beta, pattern: *.* } }'"
  when: "resource.stdout | 404"

# Leave it here in case I need to remove it hehehe
#- name: Remove resource
#  command: "curl -X DELETE http://xpto/resource/abc"
#  when: resource.stdout == 1

Aside from being very fragile -- what if the resource state includes a 404 somewhere? -- and demanding extra code to be idempotent, this playbook can't update the resource when its state changes.

Playbooks written this way disrespect many infrastructure-as-code principles. They're not readable by human beings, are hard to reuse and parameterize, and don't follow the declarative model encouraged by most configuration management tools. They also fail to be idempotent and to converge to the declared state.

Bad playbooks can jeopardize your automation adoption. Instead of harnessing configuration management tools to increase your speed, they have the same problems as an imperative automation approach based on scripts and command execution. This creates a scenario where you're using Ansible just as a means to deliver your old scripts, copying what you already have into YAML files.

Here's how to rewrite this example to follow infrastructure-as-code principles.

- name: XPTO
  xpto:
    name: abc
    state: present
    config:
      client: xyz
      url: http://beta
      pattern: "*.*"

The benefits of this approach, based on custom modules , include:

Implementing a custom module

Let's use WildFly , an open source Java application server, as an example to introduce a custom module for our not-so-good playbook:

- name: Read datasource
  command: "jboss-cli.sh -c '/subsystem=datasources/data-source=DemoDS:read-resource()'"
  register: datasource

- name: Create datasource
  command: "jboss-cli.sh -c '/subsystem=datasources/data-source=DemoDS:add(driver-name=h2, user-name=sa, password=sa, min-pool-size=20, max-pool-size=40, connection-url=.jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE..)'"
  when: 'datasource.stdout | outcome => failed'

Problems:

A custom module for this would look like:

- name: Configure datasource
  jboss_resource:
    name: "/subsystem=datasources/data-source=DemoDS"
    state: present
    attributes:
      driver-name: h2
      connection-url: "jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE"
      jndi-name: "java:jboss/datasources/DemoDS"
      user-name: sa
      password: sa
      min-pool-size: 20
      max-pool-size: 40

This playbook is declarative, idempotent, more readable, and converges to the desired state regardless of the current state.

Why learn to build custom modules?

Good reasons to learn how to build custom modules include:

" abstractions save us time working, but they don't save us time learning." -- Joel Spolsky, The Law of Leaky Abstractions
Custom Ansible modules 101: the Ansible way

An alternative: drop it in the library directory:

library/                # if any custom modules, put them here (optional)
module_utils/           # if any custom module_utils to support modules, put them here (optional)
filter_plugins/         # if any custom filter plugins, put them here (optional)

site.yml                # master playbook
webservers.yml          # playbook for webserver tier
dbservers.yml           # playbook for dbserver tier

roles/
    common/             # this hierarchy represents a "role"
        library/        # roles can also include custom modules
        module_utils/   # roles can also include custom module_utils
        lookup_plugins/ # or other types of plugins, like lookup in this case

TIP: You can use this directory layout to overwrite existing modules if, for example, you need to patch a module.

First steps

You could do it on your own -- including using another language -- or you could use the AnsibleModule class, as it is easier to put JSON on stdout (exit_json(), fail_json()) in the way Ansible expects (msg, meta, has_changed, result), and it's also easier to process the input (params[]) and log its execution (log(), debug()).

# skeleton from the article: has_changed, result, and Error are placeholders to be filled in
from ansible.module_utils.basic import AnsibleModule


def main():
    arguments = dict(
        name=dict(required=True, type='str'),
        state=dict(choices=['present', 'absent'], default='present'),
        config=dict(required=False, type='dict'),
    )

    module = AnsibleModule(argument_spec=arguments, supports_check_mode=True)
    try:
        if module.check_mode:
            # Do not change anything; only verify the current state and report it
            module.exit_json(changed=has_changed, meta=result, msg='Did something (or not)...')

        if module.params['state'] == 'present':
            # Verify the presence of the resource:
            # is the desired state (module.params) equal to the current state?
            module.exit_json(changed=has_changed, meta=result)

        if module.params['state'] == 'absent':
            # Remove the resource in case it exists
            module.exit_json(changed=has_changed, meta=result)

    except Error as err:
        module.fail_json(msg=str(err))

NOTES: The check_mode ("dry run") option allows a playbook run to only verify whether changes would be required, without actually performing them. Also, the module_utils directory can be used for shared code among different modules.

For the full Wildfly example, check this pull request .

Running tests The Ansible way

The Ansible codebase is heavily tested, and every commit triggers a build in its continuous integration (CI) server, Shippable , which includes linting, unit tests, and integration tests.

For integration tests, it uses containers and Ansible itself to perform the setup and verify phase. Here is a test case (written in Ansible) for our custom module's sample code:

- name: Configure datasource
  jboss_resource:
    name: "/subsystem=datasources/data-source=DemoDS"
    state: present
    attributes:
      connection-url: "jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE"
      ...
  register: result

- name: assert output message that datasource was created
  assert:
    that:
      - "result.changed == true"
      - "'Added /subsystem=datasources/data-source=DemoDS' in result.msg"

An alternative: bundling a module with your role

Here is a full example inside a simple role:

Molecule + Vagrant + pytest : molecule init (inside roles/)

It offers greater flexibility to choose:

But your tests would have to be written using pytest with Testinfra or Goss, instead of plain Ansible. If you'd like to learn more about testing Ansible roles, see my article about using Molecule .


[Nov 25, 2020] My top 5 Ansible modules by Mark Phillips

Nov 25, 2019 | opensource.com

5. authorized_key

Secure shell (SSH) is at the heart of Ansible, at least for almost everything besides Windows. Key (no pun intended) to using SSH efficiently with Ansible is keys ! Slight aside -- there are a lot of very cool things you can do for security with SSH keys. It's worth perusing the authorized_keys section of the sshd manual page . Managing SSH keys can become laborious if you're getting into the realms of granular user access, and although we could do it with either of my next two favourites, I prefer to use the module because it enables easy management through variables .

4. file

Besides the obvious function of placing a file somewhere, the file module also sets ownership and permissions. I'd say that's a lot of bang for your buck with one module. I'd proffer a substantial portion of security relates to setting permissions too, so the file module plays nicely with authorized_keys .

3. template

There are so many ways to manipulate the contents of files, and I see lots of folk use lineinfile . I've used it myself for small tasks. However, the template module is so much clearer because you maintain the entire file for context. My preference is to write Ansible content in such a way that anyone can understand it easily -- which to me means not making it hard to understand what is happening. Use of template means being able to see the entire file you're putting into place, complete with the variables you are using to change pieces.

2. uri

Many modules in the current distribution leverage Ansible as an orchestrator. They talk to another service, rather than doing something specific like putting a file into place. Usually, that talking is over HTTP too. In the days before many of these modules existed, you could program an API directly using the uri module. It's a powerful access tool, enabling you to do a lot. I wouldn't be without it in my fictitious Ansible shed.

1. shell

The joker card in our pack. The Swiss Army Knife. If you're absolutely stuck for how to control something else, use shell . Some will argue we're now talking about making Ansible a Bash script -- but, I would say it's still better because with the use of the name parameter in your plays and roles, you document every step. To me, that's as big a bonus as anything. Back in the days when I was still consulting, I once helped a database administrator (DBA) migrate to Ansible. The DBA wasn't one for change and pushed back at changing working methods. So, to ease into the Ansible way, we called some existing DB management scripts from Ansible using the shell module. With an informative name statement to accompany the task.

You can achieve a lot with these five modules. Yes, modules designed to do a specific task will make your life even easier. But with a smidgen of engineering simplicity, you can achieve a lot with very little. Ansible developer Brian Coca is a master at it, and his tips and tricks talk is always worth a watch.

[Nov 25, 2020] 10 Ansible modules for Linux system automation by Ricardo Gerardi

Nov 25, 2020 | opensource.com

These handy modules save time and hassle by automating many of your daily tasks, and they're easy to implement with a few commands.

Ansible is a complete automation solution for your IT environment. You can use Ansible to automate Linux and Windows server configuration, orchestrate service provisioning, deploy cloud environments, and even configure your network devices.

Ansible modules abstract actions on your system so you don't need to worry about implementation details. You simply describe the desired state, and Ansible ensures the target system matches it.

This module availability is one of Ansible's main benefits, and it is often referred to as Ansible having "batteries included." Indeed, you can find modules for a great number of tasks, and while this is great, I frequently hear from beginners that they don't know where to start.

Although your choice of modules will depend exclusively on your requirements and what you're trying to automate with Ansible, here are the top ten modules you need to get started with Ansible for Linux system automation.

1. copy

The copy module allows you to copy a file from the Ansible control node to the target hosts. In addition to copying the file, it allows you to set ownership, permissions, and SELinux labels to the destination file. Here's an example of using the copy module to copy a "message of the day" configuration file to the target hosts:

- name: Ensure MOTD file is in place
  copy:
    src: files/motd
    dest: /etc/motd
    owner: root
    group: root
    mode: 0644

For less complex content, you can copy the content directly to the destination file without having a local file, like this:

- name: Ensure MOTD file is in place
  copy:
    content: "Welcome to this system."
    dest: /etc/motd
    owner: root
    group: root
    mode: 0644

This module works idempotently , which means it will only copy the file if the same file is not already in place with the same content and permissions.

The copy module is a great option to copy a small number of files with static content. If you need to copy a large number of files, take a look at the synchronize module. To copy files with dynamic content, take a look at the template module next.

2. template

The template module works similarly to the copy module, but it processes content dynamically using the Jinja2 templating language before copying it to the target hosts.

For example, define a "message of the day" template that displays the target system name, like this:

$ vi templates/motd.j2
Welcome to {{ inventory_hostname }}.

Then, instantiate this template using the template module, like this:

- name: Ensure MOTD file is in place
  template:
    src: templates/motd.j2
    dest: /etc/motd
    owner: root
    group: root
    mode: 0644

Before copying the file, Ansible processes the template and interpolates the variable, replacing it with the target host system name. For example, if the target system name is rh8-vm03 , the result file is:

Welcome to rh8-vm03.

While the copy module can also interpolate variables when using the content parameter, the template module allows additional flexibility by creating template files, which enable you to define more complex content, including for loops, if conditions, and more. For a complete reference, check Jinja2 documentation .

This module is also idempotent, and it will not copy the file if the content on the target system already matches the template's content.

3. user

The user module allows you to create and manage Linux users in your target system. This module has many different parameters, but in its most basic form, you can use it to create a new user.

For example, to create the user ricardo with UID 2001, part of the groups users and wheel , and password mypassword , apply the user module with these parameters:

- name: Ensure user ricardo exists
  user:
    name: ricardo
    group: users
    groups: wheel
    uid: 2001
    password: "{{ 'mypassword' | password_hash('sha512') }}"
    state: present

Notice that this module tries to be idempotent, but it cannot guarantee that for all its options. For instance, if you execute the previous module example again, it will reset the password to the defined value, changing the user in the system for every execution. To make this example idempotent, use the parameter update_password: on_create , ensuring Ansible only sets the password when creating the user and not on subsequent runs.

You can also use this module to delete a user by setting the parameter state: absent .

The user module has many options for you to manage multiple user aspects. Make sure you take a look at the module documentation for more information.

4. package

The package module allows you to install, update, or remove software packages from your target system using the operating system standard package manager.

For example, to install the Apache web server on a Red Hat Linux machine, apply the module like this:

- name: Ensure Apache package is installed
  package:
    name: httpd
    state: present

This module is distribution agnostic, and it works by using the underlying package manager, such as yum/dnf for Red Hat-based distributions and apt for Debian. Because of that, it only does basic tasks like install and remove packages. If you need more control over the package manager options, use the specific module for the target distribution.

Also, keep in mind that, even though the module itself works on different distributions, the package name for each can be different. For instance, in Red Hat-based distribution, the Apache web server package name is httpd , while in Debian, it is apache2 . Ensure your playbooks deal with that.

This module is idempotent, and it will not act if the current system state matches the desired state.

5. service

Use the service module to manage the target system services using the required init system; for example, systemd .

In its most basic form, all you have to do is provide the service name and the desired state. For instance, to start the sshd service, use the module like this:

- name: Ensure SSHD is started
  service:
    name: sshd
    state: started

You can also ensure the service starts automatically when the target system boots up by providing the parameter enabled: yes .

As with the package module, the service module is flexible and works across different distributions. If you need fine-tuning over the specific target init system, use the corresponding module; for example, the module systemd .

Similar to the other modules you've seen so far, the service module is also idempotent.

6. firewalld

Use the firewalld module to control the system firewall with the firewalld daemon on systems that support it, such as Red Hat-based distributions.

For example, to open the HTTP service on port 80, use it like this:

- name: Ensure port 80 (http) is open
  firewalld:
    service: http
    state: enabled
    permanent: yes
    immediate: yes

You can also specify custom ports instead of service names with the port parameter. In this case, make sure to specify the protocol as well. For example, to open TCP port 3000, use this:

- name: Ensure port 3000/tcp is open
  firewalld:
    port: 3000/tcp
    state: enabled
    permanent: yes
    immediate: yes

You can also use this module to control other firewalld aspects like zones or complex rules. Make sure to check the module's documentation for a comprehensive list of options.

7. file

The file module allows you to control the state of files and directories -- setting permissions, ownership, and SELinux labels.

For instance, use the file module to create a directory /app owned by the user ricardo , with read, write, and execute permissions for the owner and the group users :

- name: Ensure directory /app exists
  file:
    path: /app
    state: directory
    owner: ricardo
    group: users
    mode: 0770

You can also use this module to set file properties on directories recursively by using the parameter recurse: yes or delete files and directories with the parameter state: absent .

This module works with idempotency for most of its parameters, but some of them may make it change the target path every time. Check the documentation for more details.

8. lineinfile

The lineinfile module allows you to manage single lines on existing files. It's useful to update targeted configuration on existing files without changing the rest of the file or copying the entire configuration file.

For example, add a new entry to your hosts file like this:

- name: Ensure host rh8-vm03 in hosts file
  lineinfile:
    path: /etc/hosts
    line: 192.168.122.236 rh8-vm03
    state: present

You can also use this module to change an existing line by applying the parameter regexp to look for an existing line to replace. For example, update the sshd_config file to prevent root login by modifying the line PermitRootLogin yes to PermitRootLogin no :

- name: Ensure root cannot login via ssh
  lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^PermitRootLogin'
    line: PermitRootLogin no
    state: present

Note: Use the service module to restart the SSHD service to enable this change.

This module is also idempotent, but, in case of line modification, ensure the regular expression matches both the original and updated states to avoid unnecessary changes.

9. unarchive

Use the unarchive module to extract the contents of archive files such as tar or zip files. By default, it copies the archive file from the control node to the target machine before extracting it. Change this behavior by providing the parameter remote_src: yes .

For example, extract the contents of a .tar.gz file that has already been downloaded to the target host with this syntax:

- name: Extract contents of app.tar.gz
  unarchive:
    src: /tmp/app.tar.gz
    dest: /app
    remote_src: yes

Some archive technologies require additional packages to be available on the target system; for example, the package unzip to extract .zip files.

Depending on the archive format used, this module may or may not work idempotently. To prevent unnecessary changes, you can use the parameter creates to specify a file or directory that this module would create when extracting the archive contents. If this file or directory already exists, the module does not extract the contents again.

10. command

The command module is a flexible one that allows you to execute arbitrary commands on the target system. Using this module, you can do almost anything on the target system as long as there's a command for it.

Even though the command module is flexible and powerful, it should be used with caution. Avoid using the command module to execute a task if there's another appropriate module available for that. For example, you could create users by using the command module to execute the useradd command, but you should use the user module instead, as it abstracts many details away from you, taking care of corner cases and ensuring the configuration only changes when necessary.

For cases where no modules are available, or to run custom scripts or programs, the command module is still a great resource. For instance, use this module to run a script that is already present in the target machine:

- name: Run the app installer
  command: "/app/install.sh"

By default, this module is not idempotent, as Ansible executes the command every single time. To make the command module idempotent, you can use when conditions to only execute the command if the appropriate condition exists, or the creates argument, similarly to the unarchive module example.

What's next?

Using these modules, you can configure entire Linux systems by copying, templating, or modifying configuration files, creating users, installing packages, starting system services, updating the firewall, and more.

If you are new to Ansible, make sure you check the documentation on how to create playbooks to combine these modules to automate your system. Some of these tasks require running with elevated privileges to work. For more details, check the privilege escalation documentation.

As of Ansible 2.10, modules are organized in collections. Most of the modules in this list are part of the ansible.builtin collection and are available by default with Ansible, but some of them are part of other collections. For a list of collections, check the Ansible documentation.

[Nov 22, 2020] Programmable editor as sysadmin tool

Highly recommended!
Oct 05, 2020 | perlmonks.org

likbez

There are several classic programmable editors (vi/vim, emacs, THE, etc.).

There are also some newer editors that use Lua as the scripting language, but none with Perl as a scripting language. See https://www.slant.co/topics/7340/~open-source-programmable-text-editors

Here, for example, is a fragment from an old collection of hardening scripts called Titan, written for Solaris by Brad M. Powell. The example below uses ed (the line-editor ancestor of vi), which is the simplest, but probably not the optimal choice unless your primary editor is VIM.

FixHostsEquiv() {
    if [ -f /etc/hosts.equiv -a -s /etc/hosts.equiv ]; then
        t_echo 2 " /etc/hosts.equiv exists and is not empty. Saving a copy..."
        /bin/cp /etc/hosts.equiv /etc/hosts.equiv.ORIG

        if grep -s "^+$" /etc/hosts.equiv
        then
            # use ed to delete any bare "+" (trust-everybody) lines in place
            ed - /etc/hosts.equiv <<- !
g/^+$/d
w
q
!
        fi
    else
        t_echo 2 "        No /etc/hosts.equiv -  PASSES CHECK"
        exit 1
    fi
}

For VIM/Emacs users the main benefit here is that you will know your editor better, instead of inventing/learning "yet another tool." That actually also is an argument against Ansible and friends: unless you operate a cluster or other sizable set of servers, why try to kill a bird with a cannon. Positive return on investment probably starts if you manage over 8 or even 16 boxes.

Perl also can be used. But I would recommend slurping the file into an array and operating on lines as in an editor; a regex over the whole text is more difficult to write correctly than a regex for a single line, although experts have no difficulty using just them. But we seldom acquire skills we can do without :-)

On the other hand, that gives you a chance to learn splice function ;-)

If the files are basically identical and need some slight customization, you can use the patch utility with pdsh, but you need to learn the ropes. Like Perl, the patch utility was also written by Larry Wall and is a very flexible tool for such tasks. You first need to collect files from your servers into some central directory with pdsh/pdcp (which I think is a standard RPM on RHEL and other Linuxes) or another tool, then create diffs against one server to which you have already applied the change (diff is your command language at this point), verify on another server that the diff produces the right results, apply it, and then distribute the resulting files back to each server, again using pdsh/pdcp. If you have a common NFS/GPFS/Lustre filesystem for all servers, this is even simpler, as you can store both the tree and the diffs on the common filesystem.

The same central repository of config files can be used with vi and other approaches creating "poor man Ansible" for you .
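
A rough sketch of that "poor man Ansible" workflow with diff, patch and pdsh/pdcp (host names and paths are made up; adjust the collection step to whatever tool you use to pull files off the servers):

# 1. collect the current config from each box into a central directory
for h in node01 node02 node03; do
    scp "$h:/etc/ntp.conf" "/srv/configs/ntp.conf.$h"
done

# 2. edit the copy from one reference host, then capture the change as a unified diff
cp /srv/configs/ntp.conf.node01 /srv/configs/ntp.conf.new
vi /srv/configs/ntp.conf.new
diff -u /srv/configs/ntp.conf.node01 /srv/configs/ntp.conf.new > /srv/configs/ntp.conf.patch

# 3. push the patch out with pdcp and apply it everywhere with pdsh
pdcp -w node[01-03] /srv/configs/ntp.conf.patch /tmp/
pdsh -w node[01-03] "patch /etc/ntp.conf /tmp/ntp.conf.patch"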

[Nov 22, 2020] Which programming languages are useful for sysadmins, by Jonathan Roemer

I am surprised that Perl is No. 3. It should be No. 1, as it is definitely superior to both shell and Python for most sysadmin scripts, and it has far more in common with bash (which remains the major language) than Python does.
It looks like Python, as the language taught at universities, dominates because the number of weak sysadmins, who just mention it but do not actually use it, exceeds the number of strong sysadmins (who have really written at least one complex sysadmin script) by several orders of magnitude.
Jul 24, 2020 | www.redhat.com
What's your favorite programming/scripting language for sysadmin work?

Life as a systems engineer is a process of continuous improvement. In the past few years, as software-defined-everything has started to overhaul how we work in the same way virtualization did, knowing how to write and debug software has been a critical skill for systems engineers. Whether you are automating a small, repetitive, manual task, writing daily reporting tools, or debugging a production outage, it is vital to choose the right tool for the job. Below are a few programming languages that I think all systems engineers will find useful, and also some guidance for picking your next language to learn.

Bash

The old standby, Bash (and, to a certain extent, POSIX sh) is the go-to for many systems engineers. The quick access to system primitives makes it ideal for ad-hoc data transformations. Slap together curl and jq with some conditionals, and you've got everything from a basic health check to an automated daily reporting tool. However, once you get a few levels of iteration deep, or you're making multiple calls to jq , you probably want to pull out a more fully-featured programming language.
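As a minimal sketch of that kind of ad-hoc check (the URL and the JSON field name status are assumptions, not from the article):

status=$(curl -s https://example.com/health | jq -r '.status')
if [ "$status" != "ok" ]; then
    echo "Health check failed: status=$status" >&2
    exit 1
fi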

Python

Python's easy onboarding, wide range of libraries, and large community make it ideal for more demanding sysadmin tasks. Daily reports might start as a few hundred lines of Bash that are run first thing in the morning. Once this gets large enough, however, it makes sense to move this to Python. A quick import json for simple JSON object interaction, and import jinja2 for quickly templating out a daily HTML-formatted email.

The languages your tools are built in

One of the powers of open source is, of course, access to the source! However, it is hard to realize this value if you don't have an understanding of the languages these tools are built in. An understanding of Go makes digging into the Datadog or Kubernetes codebases much easier. Being familiar with the development and debugging tools for C and Perl allow you to quickly dig down into aberrant behavior.

The new hotness

Even if you don't have Go or Rust in your environment today, there's a good chance you'll start seeing these languages more often. Maybe your application developers are migrating over to Elixir. Keeping up with the evolution of our industry can frequently feel like a treadmill, but this can be mitigated somewhat by getting ahead of changes inside of your organization. Keep an ear to the ground and start learning languages before you need them, so you're always prepared.

[ Download now: A sysadmin's guide to Bash scripting . ]

[Nov 22, 2020] Read a file line by line

Jul 07, 2020 | www.redhat.com

Assume I have a file with a lot of IP addresses and want to operate on those IP addresses. For example, I want to run dig to retrieve reverse-DNS information for the IP addresses listed in the file. I also want to skip IP addresses that start with a comment (# or hashtag).

I'll use fileA as an example. Its contents are:

10.10.12.13  some ip in dc1
10.10.12.14  another ip in dc2
#10.10.12.15 not used IP
10.10.12.16  another IP

I could copy and paste each IP address, and then run dig manually:

$> dig +short -x 10.10.12.13

Or I could do this:

$> while read -r ip _; do [[ $ip == \#* ]] && continue; dig +short -x "$ip"; done < ipfile

What if I want to swap the columns in fileA? For example, I want to put IP addresses in the right-most column so that fileA looks like this:

some ip in dc1 10.10.12.13
another ip in dc2 10.10.12.14
not used IP #10.10.12.15
another IP 10.10.12.16

I run:

$> while  read -r ip rest; do printf '%s %s\n' "$rest" "$ip"; done < fileA

[Nov 22, 2020] Save terminal output to a file under Linux or Unix bash

Apr 19, 2020 | www.cyberciti.biz
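Two common, generic approaches (not taken from the original article):

$ ls -l /etc | tee /tmp/listing.txt      # display output and save a copy at the same time
$ script -a ~/session.log                # record everything shown in the terminal until you type 'exit'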

[Nov 22, 2020] Top 7 Linux File Compression and Archive Tools

Notable quotes:
"... It's currently support 188 file extensions. ..."
Nov 22, 2020 | www.2daygeek.com

6) How to Use the zstd Command to Compress and Decompress File on Linux

zstd stands for Zstandard. It is a real-time lossless data compression algorithm that provides high compression ratios.

It was created by Yann Collet at Facebook. It offers a wide range of options for compression and decompression.

It also provides a special mode for small data known as dictionary compression.

To compress the file using the zstd command.

# zstd [Files To Be Compressed] -o [FileName.zst]

To decompress the file using the zstd command.

# zstd -d [FileName.zst]

To decompress the file using the unzstd command.

# unzstd [FileName.zst]
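For instance (file names are illustrative):

# zstd backup.tar -o backup.tar.zst      # compress backup.tar into backup.tar.zst
# zstd -d backup.tar.zst                 # decompress (recreates backup.tar; add -f to overwrite an existing copy)
# zstd -19 -T0 backup.tar                # higher compression level, using all CPU cores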
7) How to Use the PeaZip Command to Compress and Decompress Files on Linux

PeaZip is a free and open-source file archive utility, based on Open Source technologies of 7-Zip, p7zip, FreeArc, PAQ, and PEA projects.

It's a cross-platform, full-featured, and user-friendly alternative to the WinRAR and WinZip archive managers.

It supports its native PEA archive format (featuring volume spanning, compression and authenticated encryption).

It was developed for Windows, and support for Unix/Linux was added later. It currently supports 188 file extensions.

[Nov 18, 2020] Why the lone wolf mentality is a sysadmin mistake by Scott McBrien

Jul 10, 2019 | www.redhat.com

If you have worked in system administration for a while, you've probably run into a system administrator who doesn't write anything down and keeps their work a closely-guarded secret. When I've run into administrators like this, I often ask why they do this, and the response is usually a joking "Job security." Which may not actually be all that much of a joke.

Don't be that person. I've worked in several shops, and I have yet to see someone "work themselves out of a job." What I have seen, however, is someone that can't take a week off without being called by the team repeatedly. Or, after this person left, I have seen a team struggle to detangle the mystery of what that person was doing, or how they were managing systems under their control.

[Nov 04, 2020] Utility dirhist -- History of changes in one or several directories was posted on GitHub

Designed to run from cron. It uses a different, simpler approach than etckeeper: it does not use Git or any other version control system, as these proved to be of questionable utility unless there are multiple sysadmins on the server (and it avoids the Git-related problem of incorrect file attributes being assigned when system files are restored from the repository).

If it detects a changed file, it creates a new tar file for each analyzed directory, for example /etc, /root, and /boot.

It detects all "critical" changed files, diffs them with the previous version, and produces a report.

By default, all information is stored in /var/Dirhist_base. Directories to watch and files that are considered important are configurable via two config files, dirhist_ignore.lst and dirhist_watch.lst, which by default are located at the root of the /var/Dirhist_base tree (as /var/Dirhist_base/dirhist_ignore.lst and /var/Dirhist_base/dirhist_watch.lst).

You can specify any number of watched directories and, within each directory, any number of watched files and subdirectories. The format used is similar to YAML dictionaries or Windows-style .ini files. If any of the "watched" files or directories changes, the utility can email the report to selected email addresses to alert them about those changes. Useful when several sysadmins manage the same server. Can also be used for checking whether the changes made were documented in Git or another version management system (this process can be automated using the utility admpolice).
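A hypothetical crontab entry for running it nightly (the installation path is an assumption):

15 4 * * * /usr/local/bin/dirhist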

[Nov 02, 2020] The Pros and Cons of Ansible - UpGuard

Nov 02, 2020 | www.upguard.com

Ansible has no notion of state. Since it doesn't keep track of dependencies, the tool simply executes a sequential series of tasks, stopping when it finishes, fails, or encounters an error. For some, this simplistic mode of automation is desirable; however, many prefer their automation tool to maintain an extensive catalog for ordering (à la Puppet), allowing them to reach a defined state regardless of any variance in environmental conditions.

[Nov 02, 2020] YAML for beginners - Enable Sysadmin

Nov 02, 2020 | www.redhat.com

YAML Ain't Markup Language (YAML), and as configuration formats go, it's easy on the eyes. It has an intuitive visual structure, and its logic is pretty simple: indented bullet points inherit properties of parent bullet points.

But this apparent simplicity can be deceptive.


It's easy (and misleading) to think of YAML as just a list of related values, no more complex than a shopping list. There is a heading and some items beneath it. The items below the heading relate directly to it, right? Well, you can test this theory by writing a little bit of valid YAML.

Open a text editor and enter this text, retaining the dashes at the top of the file and the leading spaces for the last two items:

---
Store: Bakery
  Sourdough loaf
  Bagels

Save the file as shop.yaml (the name used in the commands below).

If you don't already have yamllint installed, install it:

$ sudo dnf install -y yamllint

A linter is an application that verifies the syntax of a file. The yamllint command is a great way to ensure your YAML is valid before you hand it over to whatever application you're writing YAML for (Ansible, for instance).

Use yamllint to validate your YAML file:

$ yamllint --strict shop.yaml || echo "Fail"
$

The file passes validation. But when converted to JSON with a simple converter script, the data structure of this simple YAML becomes clearer:

$ ~/bin/json2yaml.py shop.yaml
{"Store": "Bakery Sourdough loaf Bagels"}

Parsed without the visual context of line breaks and indentation, the actual scope of your data looks a lot different. The data is mostly flat, almost devoid of hierarchy. There's no indication that the sourdough loaf and bagels are children of the name of the store.

[ Readers also liked: Ansible: IT automation for everybody ]

How data is stored in YAML

YAML can contain different kinds of data blocks: sequences (ordered lists of items) and mappings (key and value pairs).

There's a third type called scalar, which is arbitrary data (encoded in Unicode) such as strings, integers, dates, and so on. In practice, these are the words and numbers you type when building mapping and sequence blocks, so you won't think about these any more than you ponder the words of your native tongue.

When constructing YAML, it might help to think of YAML as either a sequence of sequences or a map of maps, but not both.

YAML mapping blocks

When you start a YAML file with a mapping statement, YAML expects a series of mappings. A mapping block in YAML doesn't close until it's resolved, and a new mapping block is explicitly created. A new block can only be created either by increasing the indentation level (in which case, the new block exists inside the previous block) or by resolving the previous mapping and starting an adjacent mapping block.

The reason the original YAML example in this article fails to produce data with a hierarchy is that it's actually only one data block: the key Store has a single value of Bakery Sourdough loaf Bagels . YAML ignores the whitespace because no new mapping block has been started.

Is it possible to fix the example YAML by prepending each sequence item with a dash and space?

---
Store: Bakery
  - Sourdough loaf
  - Bagels

Again, this is valid YAML, but it's still pretty flat:

$ ~/bin/json2yaml.py shop.yaml
{"Store": "Bakery - Sourdough loaf - Bagels"}

The problem is that this YAML file opens a mapping block and never closes it. To close the Store block and open a new one, you must start a new mapping. The value of the mapping can be a sequence, but you need a key first.

Here's the correct (and expanded) resolution:

---
Store:
  Bakery:
    - 'Sourdough loaf'
    - 'Bagels'
  Cheesemonger:
    - 'Blue cheese'
    - 'Feta'

In JSON, this resolves to:

{"Store": {"Bakery": ["Sourdough loaf", "Bagels"],
"Cheesemonger": ["Blue cheese", "Feta"]}}

As you can see, this YAML directive contains one mapping ( Store ) to two child values ( Bakery and Cheesemonger ), each of which is mapped to a child sequence.

YAML sequence blocks

The same principles hold true should you start a YAML directive as a sequence. For instance, this YAML directive is valid:

Flour
Water
Salt

Each item is distinct when viewed as JSON:

["Flour", "Water", "Salt"]

But this YAML file is not valid because it attempts to start a mapping block at an adjacent level to a sequence block :

---
- Flour
- Water
- Salt
Sugar: caster

It can be repaired by moving the mapping block into the sequence:

---
- Flour
- Water
- Salt
- Sugar: caster

You can, as always, embed a sequence into your mapping item:

---
- Flour
- Water
- Salt
- Sugar:
    - caster
    - granulated
    - icing

Viewed through the lens of explicit JSON scoping, that YAML snippet reads like this:

["Flour", "Salt", "Water", {"Sugar": ["caster", "granulated", "icing"]}]

[ A free guide from Red Hat: 5 steps to automate your business . ]

YAML syntax

If you want to comfortably write YAML, it's vital to be aware of its data structure. As you can tell, there's not much you have to remember. You know about mapping and sequence blocks, so you know everything you need to work with. All that's left is to remember how they do and do not interact with one another. Happy coding!

[Nov 02, 2020] Deconstructing an Ansible playbook by Peter Gervase

Oct 21, 2020 | www.redhat.com

This article describes the different parts of an Ansible playbook starting with a very broad overview of what Ansible is and how you can use it. Ansible is a way to use easy-to-read YAML syntax to write playbooks that can automate tasks for you. These playbooks can range from very simple to very complex and one playbook can even be embedded in another.

Installing httpd with a playbook

Now that you have that base knowledge let's look at a basic playbook that will install the httpd package. I have an inventory file with two hosts specified, and I placed them in the web group:

[root@ansible test]# cat inventory
[web]
ansibleclient.usersys.redhat.com
ansibleclient2.usersys.redhat.com

Let's look at the actual playbook to see what it contains:

[root@ansible test]# cat httpd.yml
---
- name: this playbook will install httpd
  hosts: web
  tasks:
    - name: this is the task to install httpd
      yum:
        name: httpd
        state: latest

Breaking this down, you see that the first line in the playbook is --- . This lets you know that it is the beginning of the playbook. Next, I gave a name for the play. This is just a simple playbook with only one play, but a more complex playbook can contain multiple plays. Next, I specify the hosts that I want to target. In this case, I am selecting the web group, but I could have specified either ansibleclient.usersys.redhat.com or ansibleclient2.usersys.redhat.com instead if I didn't want to target both systems. The next line tells Ansible that you're going to get into the tasks that do the actual work. In this case, my playbook has only one task, but you can have multiple tasks if you want. Here I specify that I'm going to install the httpd package. The next line says that I'm going to use the yum module. I then tell it the name of the package, httpd , and that I want the latest version to be installed.

[ Readers also liked: Getting started with Ansible ]

When I run the httpd.yml playbook twice, I get this on the terminal:

[root@ansible test]# ansible-playbook httpd.yml

PLAY [this playbook will install httpd] ************************************************************************************************************

TASK [Gathering Facts] *****************************************************************************************************************************
ok: [ansibleclient.usersys.redhat.com]
ok: [ansibleclient2.usersys.redhat.com]

TASK [this is the task to install httpd] ***********************************************************************************************************
changed: [ansibleclient2.usersys.redhat.com]
changed: [ansibleclient.usersys.redhat.com]

PLAY RECAP *****************************************************************************************************************************************
ansibleclient.usersys.redhat.com : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ansibleclient2.usersys.redhat.com : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
[root@ansible test]# ansible-playbook httpd.yml

PLAY [this playbook will install httpd] ************************************************************************************************************

TASK [Gathering Facts] *****************************************************************************************************************************
ok: [ansibleclient.usersys.redhat.com]
ok: [ansibleclient2.usersys.redhat.com]

TASK [this is the task to install httpd] ***********************************************************************************************************
ok: [ansibleclient.usersys.redhat.com]
ok: [ansibleclient2.usersys.redhat.com]

PLAY RECAP *****************************************************************************************************************************************
ansibleclient.usersys.redhat.com : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ansibleclient2.usersys.redhat.com : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

[root@ansible test]#

Note that in both cases, I received an ok=2 , but in the second run of the playbook, nothing was changed. The latest version of httpd was already installed at that point.

To get information about the various modules you can use in a playbook, you can use the ansible-doc command. For example:

[root@ansible test]# ansible-doc yum
> YUM    (/usr/lib/python3.6/site-packages/ansible/modules/packaging/os/yum.py)
Installs, upgrade, downgrades, removes, and lists packages and groups with the `yum' package manager. This module only works on Python 2. If you require Python 3 support, see the [dnf] module.

  * This module is maintained by The Ansible Core Team
  * note: This module has a corresponding action plugin.
< output truncated >

It's nice to have a playbook that installs httpd , but to make it more flexible, you can use variables instead of hardcoding the package as httpd . To do that, you could use a playbook like this one:

[root@ansible test]# cat httpd.yml
---
- name: this playbook will install {{ myrpm }}
  hosts: web
  vars:
    myrpm: httpd
  tasks:
    - name: this is the task to install {{ myrpm }}
      yum:
        name: "{{ myrpm }}"
        state: latest

Here you can see that I've added a section called "vars" and I declared a variable myrpm with the value of httpd . I then can use that myrpm variable in the playbook and adjust it to whatever I want to install. Also, because I've specified the RPM to install by using a variable, I can override what I have written in the playbook by specifying the variable on the command line by using -e :

[root@ansible test]# cat httpd.yml
---
- name: this playbook will install {{ myrpm }}
  hosts: web
  vars:
    myrpm: httpd
  tasks:
    - name: this is the task to install {{ myrpm }}
      yum:
        name: "{{ myrpm }}"
        state: latest
[root@ansible test]# ansible-playbook httpd.yml -e "myrpm=at"

PLAY [this playbook will install at] ***************************************************************************************************************

TASK [Gathering Facts] *****************************************************************************************************************************
ok: [ansibleclient.usersys.redhat.com]
ok: [ansibleclient2.usersys.redhat.com]

TASK [this is the task to install at] **************************************************************************************************************
changed: [ansibleclient2.usersys.redhat.com]
changed: [ansibleclient.usersys.redhat.com]

PLAY RECAP *****************************************************************************************************************************************
ansibleclient.usersys.redhat.com : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ansibleclient2.usersys.redhat.com : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

[root@ansible test]#

Another way to make the tasks more dynamic is to use loops . In this snippet, you can see that I have declared rpms as a list to have mailx and postfix . To use them, I use loop in my task:

 vars:
    rpms:
      - mailx
      - postfix

  tasks:
    - name: this will install the rpms
      yum:
        name: "{{ item }}"
        state: installed
      loop: "{{ rpms }}"

You might have noticed that when these plays run, facts about the hosts are gathered:

TASK [Gathering Facts] *****************************************************************************************************************************
ok: [ansibleclient.usersys.redhat.com]
ok: [ansibleclient2.usersys.redhat.com]


These facts can be used as variables when you run the play. For example, you could have a motd.yml file that sets content like:

"This is the system {{ ansible_facts['fqdn'] }}.
This is a {{ ansible_facts['distribution'] }} version {{ ansible_facts['distribution_version'] }} system."

For any system where you run that playbook, the correct fully-qualified domain name (FQDN), operating system distribution, and distribution version would get set, even without you manually defining those variables.
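If you want to see which facts are available on your managed hosts, you can query them ad hoc with the setup module (shown here against the same web group and inventory file used earlier; the -i flag points at that inventory file):

ansible web -i inventory -m setup -a 'filter=ansible_fqdn'
ansible web -i inventory -m setup -a 'filter=ansible_distribution*'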

[ Need more on Ansible? Take a free technical overview course from Red Hat. Ansible Essentials: Simplicity in Automation Technical Overview . ]

Wrap up

This was a quick introduction to how Ansible playbooks look, what the different parts do, and how you can get more information about the modules. Further information is available from the Ansible documentation.

I currently work as a Solutions Architect at Red Hat. I have been here for going on 14 years, moving around a bit over the years, working in front line support and consulting before my current role. In my free time, I enjoy spending time with my family, exercising, and woodworking.

[Nov 02, 2020] Utility usersync which synchronizes (one way) users and groups within given interval of UID (min and max) with the directory or selected remote server was posted

Useful for provisioning multiple servers that use traditional authentication rather than LDAP, and for synchronizing user accounts between multiple versions of Linux. Can also be used for "normalizing" servers after the acquisition of another company, changing UID and GID on the fly on multiple servers, etc., and for provisioning computational nodes on small and medium HPC clusters that use traditional authentication instead of LDAP.

[Oct 30, 2020] Utility msync -- rsync wrapper that allows using multiple connections for transferring compressed archives or sets of them organized in a tree was posted

Useful for transferring sets of trees with huge files over WAN links. Huge archives can be split into chunks of a certain size, for example 5TB, and organized into a directory. Files are sorted into N piles, where N is specified as a parameter, and each pile is transmitted via its own TCP connection. Useful for transmission over WAN lines with high latency. On WAN links with 100 ms latency I achieved results comparable with Aspera using 8 channels of transmission.

[Oct 27, 2020] Utility emergency_shutdown was posted on GitHub

Useful for large RAID5 arrays without a spare drive, or other RAID configurations with limited redundancy and critical data stored. Currently works with Dell DRAC only, which should be configured for passwordless ssh login from the server that runs this utility. It detects that a disk in the RAID 5 array has failed, informs the most recent users (by default, those who logged in during the last two months), and then shuts down the server after a "waiting" period (default is five days) unless cancelled.

[Oct 19, 2020] The utility dormant_user_stats was posted on GitHub

The utility lists all users who have been inactive for the specified number of days (default is 365). It calculates inode usage too. It can execute simple commands for each dormant user (lock or delete the account) and generates a text file with the list (one user per line) that can be used for more complex operations.

Use

dormant_user_stats -h

for more information

[Oct 07, 2020] Perl module for automating the modification of config files?

Oct 07, 2020 | perlmonks.org

I want to make it easier to modify configuration files. For example, let's say I want to edit a postfix config file according to the directions here.

So I started writing simple code in a file that could be interpreted by perl to make the changes for me with one command per line:

uc mail_owner          # "uc" is the command for "uncomment"
uc hostname
cv hostname {{fqdn}}   # "cv" is the command for "change value"; {{fqdn}} is replaced with the appropriate value
...

You get the idea. I started writing some code to interpret my config file modification commands and then realized someone had to have tackled this problem before. I did a search on metacpan but came up empty. Anyone familiar with this problem space and can help point me in the right direction?


by likbez on Oct 05, 2020 at 03:16 UTC Reputation: 2

There are also some newer editors that use LUA as the scripting language, but none with Perl as a scripting language. See https://www.slant.co/topics/7340/~open-source-programmable-text-editors

Here, for example, is a fragment from an old collection of hardening scripts called Titan, written for Solaris by Brad M. Powell. The example below uses ed, which is the simplest, but probably not the optimal choice unless your primary editor is vi/VIM.

FixHostsEquiv() {
if [ -f /etc/hosts.equiv -a -s /etc/hosts.equiv ]; then
      t_echo 2 " /etc/hosts.equiv exists and is not empty. Saving a copy..."
      /bin/cp /etc/hosts.equiv /etc/hosts.equiv.ORIG

        if grep -s "^+$" /etc/hosts.equiv
        then
        ed - /etc/hosts.equiv <<- !
g/^+$/d
w
q
!
        fi
else
        t_echo 2 "        No /etc/hosts.equiv -  PASSES CHECK"
        exit 1
fi
}

For VIM/Emacs users the main benefit here is that you will know your editor better, instead of inventing/learning "yet another tool." That actually also is an argument against Ansible and friends: unless you operate a cluster or other sizable set of servers, why try to kill a bird with a cannon. Positive return on investment probably starts if you manage over 8 or even 16 boxes.

Perl also can be used. But I would recommend slurping the file into an array and operating on lines as in an editor; a regex on the whole text is more difficult to write correctly than a regex for a single line, although experts have no difficulties using just them. But we seldom acquire skills we can do without :-)

On the other hand, that gives you a chance to learn the splice function ;-)

If the files are basically identical and need only slight customization, you can use the patch utility with pdsh, but you need to learn the ropes. Like Perl, the patch utility was written by Larry Wall and is a very flexible tool for such tasks. You first need to collect the files from your servers into a central directory with pdsh/pdcp (which I think is a standard RPM on RHEL and other Linuxes) or another tool, then create a diff against one server to which you have already applied the change (diff is your command language at this point), verify on another server that the diff produces the right result, apply it, and then distribute the resulting files back to each server, again using pdsh/pdcp. If you have a common NFS/GPFS/LUSTRE filesystem for all servers this is even simpler, as you can store both the tree and the diffs on the common filesystem.

The same central repository of config files can be used with vi and other approaches, creating a "poor man's Ansible" for you.

[Oct 05, 2020] Modular Perl in Red Hat Enterprise Linux 8 - Red Hat Developer

Notable quotes:
"... perl-DBD-SQLite ..."
"... perl-DBD-SQLite:1.58 ..."
"... perl-libwww-perl ..."
"... multi-contextual ..."
Oct 05, 2020 | developers.redhat.com

Modular Perl in Red Hat Enterprise Linux 8 By Petr Pisar May 16, 2019

Red Hat Enterprise Linux 8 comes with modules as a packaging concept that allows system administrators to select the desired software version from multiple packaged versions. This article will show you how to manage Perl as a module.

Installing from a default stream

Let's install Perl:

# yum --allowerasing install perl
Last metadata expiration check: 1:37:36 ago on Tue 07 May 2019 04:18:01 PM CEST.
Dependencies resolved.
==========================================================================================
 Package                       Arch    Version                Repository             Size
==========================================================================================
Installing:
 perl                          x86_64  4:5.26.3-416.el8       rhel-8.0.z-appstream   72 k
Installing dependencies:
[ ]
Transaction Summary
==========================================================================================
Install  147 Packages

Total download size: 21 M
Installed size: 59 M
Is this ok [y/N]: y
[ ]
  perl-threads-shared-1.58-2.el8.x86_64                                                   

Complete!

Next, check which Perl you have:

$ perl -V:version
version='5.26.3';

You have 5.26.3 Perl version. This is the default version supported for the next 10 years and, if you are fine with it, you don't have to know anything about modules. But what if you want to try a different version?

Discovering streams

Let's find out what Perl modules are available using the yum module list command:

# yum module list
Last metadata expiration check: 1:45:10 ago on Tue 07 May 2019 04:18:01 PM CEST.
[ ]
Name                 Stream           Profiles     Summary
[ ]
parfait              0.5              common       Parfait Module
perl                 5.24             common [d],  Practical Extraction and Report Languag
                                      minimal      e
perl                 5.26 [d]         common [d],  Practical Extraction and Report Languag
                                      minimal      e
perl-App-cpanminus   1.7044 [d]       common [d]   Get, unpack, build and install CPAN mod
                                                   ules
perl-DBD-MySQL       4.046 [d]        common [d]   A MySQL interface for Perl
perl-DBD-Pg          3.7 [d]          common [d]   A PostgreSQL interface for Perl
perl-DBD-SQLite      1.58 [d]         common [d]   SQLite DBI driver
perl-DBI             1.641 [d]        common [d]   A database access API for Perl
perl-FCGI            0.78 [d]         common [d]   FastCGI Perl bindings
perl-YAML            1.24 [d]         common [d]   Perl parser for YAML
php                  7.2 [d]          common [d],  PHP scripting language
                                      devel, minim
                                      al
[ ]

Here you can see a Perl module is available in versions 5.24 and 5.26. Those are called streams in the modularity world, and they denote an independent variant, usually a different version, of the same software stack. The [d] flag marks a default stream. That means if you do not explicitly enable a different stream, the default one will be used. That explains why yum installed Perl 5.26.3 and not some of the 5.24 micro versions.

Now suppose you have an old application that you are migrating from Red Hat Enterprise Linux 7, which was running in the rh-perl524 software collection environment, and you want to give it a try on Red Hat Enterprise Linux 8. Let's try Perl 5.24 on Red Hat Enterprise Linux 8.

Enabling a Stream

First, switch the Perl module to the 5.24 stream:

# yum module enable perl:5.24
Last metadata expiration check: 2:03:16 ago on Tue 07 May 2019 04:18:01 PM CEST.
Problems in request:
Modular dependency problems with Defaults:

 Problem 1: conflicting requests
  - module freeradius:3.0:8000020190425181943:75ec4169-0.x86_64 requires module(perl:5.26), but none of the providers can be installed
  - module perl:5.26:820181219174508:9edba152-0.x86_64 conflicts with module(perl:5.24) provided by perl:5.24:820190207164249:ee766497-0.x86_64
  - module perl:5.24:820190207164249:ee766497-0.x86_64 conflicts with module(perl:5.26) provided by perl:5.26:820181219174508:9edba152-0.x86_64
 Problem 2: conflicting requests
  - module freeradius:3.0:820190131191847:fbe42456-0.x86_64 requires module(perl:5.26), but none of the providers can be installed
  - module perl:5.26:820181219174508:9edba152-0.x86_64 conflicts with module(perl:5.24) provided by perl:5.24:820190207164249:ee766497-0.x86_64
  - module perl:5.24:820190207164249:ee766497-0.x86_64 conflicts with module(perl:5.26) provided by perl:5.26:820181219174508:9edba152-0.x86_64
Dependencies resolved.
==========================================================================================
 Package              Arch                Version              Repository            Size
==========================================================================================
Enabling module streams:
 perl                                     5.24

Transaction Summary
==========================================================================================

Is this ok [y/N]: y
Complete!

Switching module streams does not alter installed packages (see 'module enable' in dnf(8)
for details)

Here you can see a warning that the freeradius:3.0 stream is not compatible with perl:5.24 . That's because FreeRADIUS was built for Perl 5.26 only. Not all modules are compatible with all other modules.

Next, you can see a confirmation for enabling the Perl 5.24 stream. And, finally, there is another warning about installed packages. The last warning means that the system still can have installed RPM packages from the 5.26 stream, and you need to explicitly sort it out.

Changing modules and changing packages are two separate phases. You can fix it by synchronizing a distribution content like this:

# yum --allowerasing distrosync
Last metadata expiration check: 0:00:56 ago on Tue 07 May 2019 06:33:36 PM CEST.
Modular dependency problems:

 Problem 1: module freeradius:3.0:8000020190425181943:75ec4169-0.x86_64 requires module(perl:5.26), but none of the providers can be installed
  - module perl:5.26:820181219174508:9edba152-0.x86_64 conflicts with module(perl:5.24) provided by perl:5.24:820190207164249:ee766497-0.x86_64
  - module perl:5.24:820190207164249:ee766497-0.x86_64 conflicts with module(perl:5.26) provided by perl:5.26:820181219174508:9edba152-0.x86_64
  - conflicting requests
 Problem 2: module freeradius:3.0:820190131191847:fbe42456-0.x86_64 requires module(perl:5.26), but none of the providers can be installed
  - module perl:5.26:820181219174508:9edba152-0.x86_64 conflicts with module(perl:5.24) provided by perl:5.24:820190207164249:ee766497-0.x86_64
  - module perl:5.24:820190207164249:ee766497-0.x86_64 conflicts with module(perl:5.26) provided by perl:5.26:820181219174508:9edba152-0.x86_64
  - conflicting requests
Dependencies resolved.
==========================================================================================
 Package           Arch   Version                              Repository            Size
==========================================================================================
[ ]
Downgrading:
 perl              x86_64 4:5.24.4-403.module+el8+2770+c759b41a
                                                               rhel-8.0.z-appstream 6.1 M
[ ]
Transaction Summary
==========================================================================================
Upgrade    69 Packages
Downgrade  66 Packages

Total download size: 20 M
Is this ok [y/N]: y
[ ]
Complete!

And try the perl command again:

$ perl -V:version
version='5.24.4';

Great! It works. We switched to a different Perl version, and the different Perl is still invoked with the perl command and is installed to a standard path ( /usr/bin/perl ). No scl enable incantation is needed, in contrast to the software collections.

You could notice the repeated warning about FreeRADIUS. A future YUM update is going to clean up the unnecessary warning. Despite that, I can show you that other Perl-ish modules are compatible with any Perl stream.

Dependent modules

Let's say the old application mentioned before is using the DBD::SQLite Perl module. (This nomenclature is a little ambiguous: Red Hat Enterprise Linux has modules; Perl has modules. If I want to emphasize the difference, I will say the Modularity modules or the CPAN modules.) So, let's install CPAN's DBD::SQLite module. Yum can search for a packaged CPAN module, so give it a try:

# yum --allowerasing install 'perl(DBD::SQLite)'
[ ]
Dependencies resolved.
==========================================================================================
 Package          Arch    Version                             Repository             Size
==========================================================================================
Installing:
 perl-DBD-SQLite  x86_64  1.58-1.module+el8+2519+e351b2a7     rhel-8.0.z-appstream  186 k
Installing dependencies:
 perl-DBI         x86_64  1.641-2.module+el8+2701+78cee6b5    rhel-8.0.z-appstream  739 k
Enabling module streams:
 perl-DBD-SQLite          1.58
 perl-DBI                 1.641

Transaction Summary
==========================================================================================
Install  2 Packages

Total download size: 924 k
Installed size: 2.3 M
Is this ok [y/N]: y
[ ]
Installed:
  perl-DBD-SQLite-1.58-1.module+el8+2519+e351b2a7.x86_64
  perl-DBI-1.641-2.module+el8+2701+78cee6b5.x86_64

Complete!

Here you can see DBD::SQLite CPAN module was found in the perl-DBD-SQLite RPM package that's part of perl-DBD-SQLite:1.58 module, and apparently it requires some dependencies from the perl-DBI:1.641 module, too. Thus, yum asked for enabling the streams and installing the packages.

Before playing with DBD::SQLite under Perl 5.24, take a look at the listing of the Modularity modules and compare it with what you saw the first time:

# yum module list
[ ]
parfait              0.5              common       Parfait Module
perl                 5.24 [e]         common [d],  Practical Extraction and Report Languag
                                      minimal      e
perl                 5.26 [d]         common [d],  Practical Extraction and Report Languag
                                      minimal      e
perl-App-cpanminus   1.7044 [d]       common [d]   Get, unpack, build and install CPAN mod
                                                   ules
perl-DBD-MySQL       4.046 [d]        common [d]   A MySQL interface for Perl
perl-DBD-Pg          3.7 [d]          common [d]   A PostgreSQL interface for Perl
perl-DBD-SQLite      1.58 [d][e]      common [d]   SQLite DBI driver
perl-DBI             1.641 [d][e]     common [d]   A database access API for Perl
perl-FCGI            0.78 [d]         common [d]   FastCGI Perl bindings
perl-YAML            1.24 [d]         common [d]   Perl parser for YAML
php                  7.2 [d]          common [d],  PHP scripting language
                                      devel, minim
                                      al
[ ]

Notice that perl:5.24 is enabled ( [e] ) and thus takes precedence over perl:5.26, which would otherwise be the default one ( [d] ). Other enabled Modularity modules are perl-DBD-SQLite:1.58 and perl-DBI:1.641. Those were enabled when you installed DBD::SQLite. These two modules have no other streams.

In general, any module can have multiple streams. At most, one stream of a module can be the default one. And, at most, one stream of a module can be enabled. An enabled stream takes precedence over a default one. If there is no enabled or a default stream, content of the module is unavailable.

If, for some reason, you need to disable a stream, even a default one, you do that with yum module disable MODULE:STREAM command.
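An illustrative invocation of that syntax, using the freeradius module mentioned earlier:

# yum module disable freeradius:3.0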

Enough theory, back to some productive work. You are ready to test the DBD::SQLite CPAN module now. Let's create a test database, a foo table inside with one textual column called bar , and let's store a row with Hello text there:

$ perl -MDBI -e '$dbh=DBI->connect(q{dbi:SQLite:dbname=test});
    $dbh->do(q{CREATE TABLE foo (bar text)});
    $sth=$dbh->prepare(q{INSERT INTO foo(bar) VALUES(?)});
    $sth->execute(q{Hello})'

Next, verify the Hello string was indeed stored by querying the database:

$ perl -MDBI -e '$dbh=DBI->connect(q{dbi:SQLite:dbname=test}); print $dbh->selectrow_array(q{SELECT bar FROM foo}), qq{\n}'
Hello

It seems DBD::SQLite works.

Non-modular packages may not work with non-default streams

So far, everything is great and working. Now I will show what happens if you try to install an RPM package that has not been modularized and is thus compatible only with the default Perl, perl:5.26:

# yum --allowerasing install 'perl(LWP)'
[ ]
Error: 
 Problem: package perl-libwww-perl-6.34-1.el8.noarch requires perl(:MODULE_COMPAT_5.26.2), but none of the providers can be installed
  - cannot install the best candidate for the job
  - package perl-libs-4:5.26.3-416.el8.i686 is excluded
  - package perl-libs-4:5.26.3-416.el8.x86_64 is excluded
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

Yum will report an error about perl-libwww-perl RPM package being incompatible. The LWP CPAN module that is packaged as perl-libwww-perl is built only for Perl 5.26, and therefore RPM dependencies cannot be satisfied. When a perl:5.24 stream is enabled, the packages from perl:5.26 stream are masked and become unavailable. However, this masking does not apply to non-modular packages, like perl-libwww-perl. There are plenty of packages that were not modularized yet. If you need some of them to be available and compatible with a non-default stream (e.g., not only with perl:5.26 but also with perl:5.24) do not hesitate to contact Red Hat support team with your request.

Resetting a module

Let's say you tested your old application and now you want to find out if it works with the new Perl 5.26.

To do that, you need to switch back to the perl:5.26 stream. Unfortunately, switching from an enabled stream back to a default or to a yet another non-default stream is not straightforward. You'll need to perform a module reset:

# yum module reset perl
[ ]
Dependencies resolved.
==========================================================================================
 Package              Arch                Version              Repository            Size
==========================================================================================
Resetting module streams:
 perl                                     5.24                                           

Transaction Summary
==========================================================================================

Is this ok [y/N]: y
Complete!

Well, that did not hurt. Now you can synchronize the distribution again to replace the 5.24 RPM packages with 5.26 ones:

# yum --allowerasing distrosync
[ ]
Transaction Summary
==========================================================================================
Upgrade    65 Packages
Downgrade  71 Packages

Total download size: 22 M
Is this ok [y/N]: y
[ ]

After that, you can check the Perl version:

$ perl -V:version
version='5.26.3';

And, check the enabled modules:

# yum module list
[ ]
parfait              0.5              common       Parfait Module
perl                 5.24             common [d],  Practical Extraction and Report Languag
                                      minimal      e
perl                 5.26 [d]         common [d],  Practical Extraction and Report Languag
                                      minimal      e
perl-App-cpanminus   1.7044 [d]       common [d]   Get, unpack, build and install CPAN mod
                                                   ules
perl-DBD-MySQL       4.046 [d]        common [d]   A MySQL interface for Perl
perl-DBD-Pg          3.7 [d]          common [d]   A PostgreSQL interface for Perl
perl-DBD-SQLite      1.58 [d][e]      common [d]   SQLite DBI driver
perl-DBI             1.641 [d][e]     common [d]   A database access API for Perl
perl-FCGI            0.78 [d]         common [d]   FastCGI Perl bindings
perl-YAML            1.24 [d]         common [d]   Perl parser for YAML
php                  7.2 [d]          common [d],  PHP scripting language
                                      devel, minim
                                      al
[ ]

As you can see, we are back at square one. The perl:5.24 stream is not enabled, and perl:5.26 is the default and therefore preferred. Only the perl-DBD-SQLite:1.58 and perl-DBI:1.641 streams remain enabled. It does not matter much because those are the only streams. Nonetheless, you can reset them using yum module reset perl-DBI perl-DBD-SQLite if you like.

Multi-context streams

What happened with the DBD::SQLite? It's still there and working:

$ perl -MDBI -e '$dbh=DBI->connect(q{dbi:SQLite:dbname=test}); print $dbh->selectrow_array(q{SELECT bar FROM foo}), qq{\n}'
Hello

That is possible because the perl-DBD-SQLite module is built for both 5.24 and 5.26 Perls. We call these modules multi-contextual. That's the case for perl-DBD-SQLite and perl-DBI, but not for FreeRADIUS, which explains the warning you saw earlier. If you want to see these low-level details, such as which contexts are available, which dependencies are required, or which packages are contained in a module, you can use the yum module info MODULE:STREAM command.
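For example, to inspect the default Perl stream discussed in this article:

# yum module info perl:5.26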

Afterword

I hope this tutorial shed some light on modules -- the fresh feature of Red Hat Enterprise Linux 8 that enables us to provide you with multiple versions of software on top of one Linux platform. If you need more details, please read documentation accompanying the product (namely, user-space component management document and yum(8) manual page ) or ask the support team for help.

[Sep 22, 2020] Taming the tar command -- Tips for managing backups in Linux by Gabby Taylor

Sep 18, 2020 | www.redhat.com
How to append or add files to a backup

In this example, we append files to an existing backup, backup.tar. This allows you to add additional files to the pre-existing archive backup.tar.

# tar -rvf backup.tar /path/to/file.xml

Let's break down these options:

-r - Append to archive
-v - Verbose output
-f - Name the file

How to split a backup into smaller backups

In this example, we split the existing backup into smaller archived files. You can pipe the tar command into the split command.

# tar cvf - /dir | split --bytes=200MB - backup.tar

Let's break down these options:

-c - Create the archive
-v - Verbose output
-f - Name the file

In this example, /dir is the directory whose content you want to split into multiple archive files. We are making 200MB chunks from the /dir folder.
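The resulting pieces are named backup.taraa, backup.tarab, and so on. To restore, concatenate them in order and pipe the stream back into tar (an illustrative counterpart to the command above):

# cat backup.tar* | tar -xvf -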

How to check the integrity of a tar.gz backup

In this example, we check the integrity of an existing tar archive.

To test the gzip file is not corrupt:

#gunzip -t backup.tar.gz

To test the tar file content's integrity:

#gunzip -c backup.tar.gz | tar t > /dev/null

OR

#tar -tvf backup.tar > /dev/null

Let's break down these options:

-t - List the files of the archived file
-v - Verbose output
-f - Name the file

(The -W / --verify option sometimes shown here only applies while an uncompressed archive is being written, as in tar -cvWf backup.tar /dir.)

Use pipes and greps to locate content

In this example, we use pipes and greps to locate content. The best option is already made for you. Zgrep can be utilized for gzip archives.

#zgrep <keyword> backup.tar.gz

You can also use the zcat command. This shows the content of the archive, then pipes that output to a grep .

#zcat backup.tar.gz | grep <keyword>

For regular, uncompressed files, plain grep or egrep does the job.

[Sep 05, 2020] documentation - How do I get the list of exit codes (and-or return codes) and meaning for a command-utility

Sep 05, 2020 | unix.stackexchange.com

What exit code should I use?

There is no "recipe" to get the meanings of an exit status of a given terminal command.

My first attempt would be the manpage:

user@host:~# man ls 
   Exit status:
       0      if OK,

       1      if minor problems (e.g., cannot access subdirectory),

       2      if serious trouble (e.g., cannot access command-line argument).

Second: Google. See wget as an example.

Third: the exit statuses of the shell, for example Bash. Bash and its builtins may use values above 125 specially: 127 for command not found, 126 for command found but not executable. For more information see the bash exit codes.

Some list of sysexits on both Linux and BSD/OS X with preferable exit codes for programs (64-78) can be found in /usr/include/sysexits.h (or: man sysexits on BSD):

0   /* successful termination */
64  /* base value for error messages */
64  /* command line usage error */
65  /* data format error */
66  /* cannot open input */
67  /* addressee unknown */
68  /* host name unknown */
69  /* service unavailable */
70  /* internal software error */
71  /* system error (e.g., can't fork) */
72  /* critical OS file missing */
73  /* can't create (user) output file */
74  /* input/output error */
75  /* temp failure; user is invited to retry */
76  /* remote error in protocol */
77  /* permission denied */
78  /* configuration error */
78  /* maximum listed value */

The above list allocates previously unused exit codes from 64-78. The range of unallotted exit codes will be further restricted in the future.

However, the above values are mainly used in sendmail and used by pretty much nobody else, so they aren't anything remotely close to a standard (as pointed out by @Gilles).

In the shell, the exit statuses are as follows (based on Bash):

1 - Catchall for general errors
2 - Misuse of shell builtins
126 - Command invoked cannot execute (not executable or permission problem)
127 - Command not found
128 - Invalid argument to exit
128+n - Fatal error signal "n" (for example, a process killed with kill -9 returns 137)
130 - Script terminated by Ctrl+C (128 + 2)
255 - Exit status out of range

According to the above table, exit codes 1 - 2, 126 - 165, and 255 have special meanings, and should therefore be avoided for user-specified exit parameters.

Please note that out of range exit values can result in unexpected exit codes (e.g. exit 3809 gives an exit code of 225, 3809 % 256 = 225).
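A quick way to see this behavior for yourself (generic commands; the exact wording of the error message may vary between Bash versions):

$ bash -c 'exit 3809'; echo $?
225
$ bash -c 'no_such_command'; echo $?
bash: no_such_command: command not found
127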

You will have to look into the code/documentation. However, the thing that comes closest to a "standardization" is errno.h. – Thorsten Staerk

precise commented: thanks for pointing to the header file.. tried looking into the documentation of a few utils.. hard time finding the exit codes, seems most will be the stderrs...

[Aug 10, 2020] How to Run and Control Background Processes on Linux

Aug 10, 2020 | www.howtogeek.com

By Dave McKay (@thegurkha), September 24, 2019, 8:00am EDT


Use the Bash shell in Linux to manage foreground and background processes. You can use Bash's job control functions and signals to give you more flexibility in how you run commands. We show you how.

All About Processes

Whenever a program is executed in a Linux or Unix-like operating system, a process is started. "Process" is the name for the internal representation of the executing program in the computer's memory. There is a process for every active program. In fact, there is a process for nearly everything that is running on your computer. That includes the components of your graphical desktop environment (GDE) such as GNOME or KDE , and system daemons that are launched at start-up.

Why nearly everything that is running? Well, Bash built-ins such as cd , pwd , and alias do not need to have a process launched (or "spawned") when they are run. Bash executes these commands within the instance of the Bash shell that is running in your terminal window. These commands are fast precisely because they don't need to have a process launched for them to execute. (You can type help in a terminal window to see the list of Bash built-ins.)

Processes can be running in the foreground, in which case they take over your terminal until they have completed, or they can be run in the background. Processes that run in the background don't dominate the terminal window and you can continue to work in it. Or at least, they don't dominate the terminal window if they don't generate screen output.

A Messy Example

We'll start a simple ping trace running . We're going to ping the How-To Geek domain. This will execute as a foreground process.

ping www.howtogeek.com

ping www.howtogeek.com in a terminal window

We get the expected results, scrolling down the terminal window. We can't do anything else in the terminal window while ping is running. To terminate the command hit Ctrl+C .

Ctrl+C

ping trace output in a terminal window

The visible effect of the Ctrl+C is highlighted in the screenshot. ping gives a short summary and then stops.

Let's repeat that. But this time we'll hit Ctrl+Z instead of Ctrl+C . The task won't be terminated. It will become a background task. We get control of the terminal window returned to us.

ping www.howtogeek.com
Ctrl+Z

effect of Ctrl+Z on a command running in a terminal window

The visible effect of hitting Ctrl+Z is highlighted in the screenshot.

This time we are told the process is stopped. Stopped doesn't mean terminated. It's like a car at a stop sign. We haven't scrapped it and thrown it away. It's still on the road, stationary, waiting to go. The process is now a background job .

The jobs command will list the jobs that have been started in the current terminal session. And because jobs are (inevitably) processes, we can also use the ps command to see them. Let's use both commands and compare their outputs. We'll use the T (terminal) option to list only the processes that are running in this terminal window. Note that there is no need to use a hyphen - with the T option.

jobs
ps T

jobs command in a terminal window

The jobs command tells us the job number ([1]), the state of the job (Stopped), and the command line it is running (ping www.howtogeek.com).

The ps command tells us the PID, the controlling TTY, the STAT (process state) column, the accumulated CPU TIME, and the COMMAND for each process attached to this terminal.

These are common values for the STAT column:

D - Uninterruptible sleep
I - Idle kernel thread
R - Running or runnable
S - Interruptible sleep (waiting for an event to complete)
T - Stopped by a job control signal
Z - Zombie (terminated but not yet reaped by its parent)

The value in the STAT column can be followed by one of these extra indicators:

< - High priority
N - Low priority (nice)
L - Has pages locked into memory
s - Session leader
l - Multi-threaded
+ - Member of the foreground process group

We can see that Bash has a state of Ss . The uppercase "S" tells us the Bash shell is sleeping, and it is interruptible. As soon as we need it, it will respond. The lowercase "s" tells us that the shell is a session leader.

The ping command has a state of T . This tells us that ping has been stopped by a job control signal. In this example, that was the Ctrl+Z we used to put it into the background.

The ps T command has a state of R , which stands for running. The + indicates that this process is a member of the foreground group. So the ps T command is running in the foreground.
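
For reference, a session at this point might look something like the following. The PIDs, TTY name, and timings are illustrative and will differ on your system, and the trailing comments are annotations rather than part of the output:

$ jobs
[1]+  Stopped                 ping www.howtogeek.com
$ ps T
  PID TTY      STAT   TIME COMMAND
 1363 pts/0    Ss     0:00 bash                     # sleeping, session leader
 1745 pts/0    T      0:00 ping www.howtogeek.com   # stopped by a job control signal
 1780 pts/0    R+     0:00 ps T                     # running, in the foreground group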

The bg Command

The bg command is used to resume a stopped process, leaving it running in the background. It can be used with or without a job number. If you use it without a job number, the default job is resumed in the background. Either way, the process still runs in the background, and you cannot send any input to it.

If we issue the bg command, we will resume our ping command:

bg

bg in a terminal window

The ping command resumes and we see the scrolling output in the terminal window once more. The name of the command that has been restarted is displayed for you. This is highlighted in the screenshot.

resumed ping background process with output in a terminal widow

But we have a problem. The task is running in the background and won't accept input. So how do we stop it? Ctrl+C doesn't do anything. We can see it when we type it but the background task doesn't receive those keystrokes so it keeps pinging merrily away.

Background task ignoring Ctrl+C in a terminal window

In fact, we're now in a strange blended mode. We can type in the terminal window, but what we type is quickly swept away by the scrolling output from the ping command. Anything we type takes effect in the foreground.

To stop our background task we need to bring it to the foreground and then stop it.

The fg Command

The fg command will bring a background task into the foreground. Just like the bg command, it can be used with or without a job number. Using it with a job number means it will operate on a specific job. If it is used without a job number the last command that was sent to the background is used.

If we type fg our ping command will be brought to the foreground. The characters we type are mixed up with the output from the ping command, but they are operated on by the shell as if they had been entered on the command line as usual. And in fact, from the Bash shell's point of view, that is exactly what has happened.

fg

fg command mixed in with the output from ping in a terminal window

And now that we have the ping command running in the foreground once more, we can use Ctrl+C to kill it.

Ctrl+C

Ctrl+C stopping the ping command in a terminal window

We Need to Send the Right Signals

That wasn't exactly pretty. Evidently running a process in the background works best when the process doesn't produce output and doesn't require input.

But, messy or not, our example did accomplish:

When you use Ctrl+C and Ctrl+Z , you are sending signals to the process. These are shorthand ways of using the kill command. There are 64 different signals that kill can send. Use kill -l at the command line to list them. kill isn't the only source of these signals; some of them are raised automatically by other processes within the system.

Here are some of the commonly used ones.

We must use the kill command to issue signals that do not have key combinations assigned to them.
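
For illustration, here are a few ways of doing that. The job spec %1 matches our ping job, while the PID 1999 is just a made-up example:

kill -l                  # list all signal names and numbers
kill %1                  # send the default signal, SIGTERM (15), to job 1
kill -SIGTERM %1         # the same thing, with the signal named explicitly
kill -15 %1              # ...or given by number
kill -HUP 1999           # send SIGHUP to a process by PID rather than by job spec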

Further Job Control

A process moved into the background by using Ctrl+Z is placed in the stopped state. We have to use the bg command to start it running again. To launch a program as a running background process is simple. Append an ampersand & to the end of the command line.

Although it is best that background processes do not write to the terminal window, we're going to use examples that do. We need to have something in the screenshots that we can refer to. This command will start an endless loop as a background process:

while true; do echo "How-To Geek Loop Process"; sleep 3; done &

while true; do echo "How-To Geek Loop Process"; sleep 3; done & in a terminal window

We are told the job number and process ID of the process. Our job number is 1, and the process ID is 1979. We can use these identifiers to control the process.

The output from our endless loop starts to appear in the terminal window. As before, we can use the command line but any commands we issue are interspersed with the output from the loop process.

ls

output of the background loop process interspersed with output from other commands

To stop our process we can use jobs to remind ourselves what the job number is, and then use kill .

jobs reports that our process is job number 1. To use that number with kill we must precede it with a percent sign % .

jobs
kill %1

jobs and kill %1 in a terminal window

kill sends the SIGTERM signal, signal number 15, to the process and it is terminated. When the Enter key is next pressed, a status of the job is shown. It lists the process as "terminated." If the process does not respond to the kill command, you can take it up a notch. Use kill with SIGKILL , signal number 9. Just put -9 between the kill command and the job spec.

kill -9 %1

[Jul 30, 2020] ports tree

Jul 30, 2020 | opensource.com

... On Mac, use Homebrew.

For example, on RHEL or Fedora:

$ sudo dnf install tmux
Start tmux

To start tmux, open a terminal and type:

$ tmux

When you do this, the obvious result is that tmux launches a new shell in the same window with a status bar along the bottom. There's more going on, though, and you can see it with this little experiment. First, do something in your current terminal to help you tell it apart from another empty terminal:

$ echo hello
hello

Now press Ctrl+B followed by C on your keyboard. It might look like your work has vanished, but actually, you've created what tmux calls a window (which can be, admittedly, confusing because you probably also call the terminal you launched a window ). Thanks to tmux, you actually have two windows open, both of which you can see listed in the status bar at the bottom of tmux. You can navigate between these two windows by index number. For instance, press Ctrl+B followed by 0 to go to the initial window:

$ echo hello
hello

Press Ctrl+B followed by 1 to go to the first new window you created.

You can also "walk" through your open windows using Ctrl+B and N (for Next) or P (for Previous).

The tmux trigger and commands

The keyboard shortcut Ctrl+B is the tmux trigger. When you press it in a tmux session, it alerts tmux to "listen" for the next key or key combination that follows. All tmux shortcuts, therefore, are prefixed with Ctrl+B .

You can also access a tmux command line and type tmux commands by name. For example, to create a new window the hard way, you can press Ctrl+B followed by : to enter the tmux command line. Type new-window and press Enter to create a new window. This does exactly the same thing as pressing Ctrl+B then C .

Splitting windows into panes

Once you have created more than one window in tmux, it's often useful to see them all in one window. You can split a window horizontally (meaning the split is horizontal, placing one window in a North position and another in a South position) or vertically (with windows located in West and East positions).

You can split windows that have been split, so the layout is up to you and the number of lines in your terminal.
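
As a sketch of how that is done (these are the stock tmux key bindings and commands; your configuration may rebind them): press Ctrl+B followed by " to split the current pane into top and bottom halves, or Ctrl+B followed by % to split it into left and right halves. The same operations, plus a quick cleanup preset, are also available as tmux commands:

tmux split-window -v      # new pane below the current one
tmux split-window -h      # new pane beside the current one
tmux select-layout tiled  # rearrange all panes into an even grid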

tmux_golden-ratio.jpg (Seth Kenlon, CC BY-SA 4.0)

Sometimes things can get out of hand. You can adjust a terminal full of haphazardly split panes using these quick presets:

Switching between panes

To get from one pane to another, press Ctrl+B followed by O (as in other ). The border around the pane changes color based on your position, and your terminal cursor changes to its active state. This method "walks" through panes in order of creation.

Alternatively, you can use your arrow keys to navigate to a pane according to your layout. For example, if you've got two open panes divided by a horizontal split, you can press Ctrl+B followed by the Up arrow to switch from the lower pane to the top pane. Likewise, Ctrl+B followed by the Down arrow switches from the upper pane to the lower one.

Running a command on multiple hosts with tmux

Now that you know how to open many windows and divide them into convenient panes, you know nearly everything you need to know to run one command on multiple hosts at once. Assuming you have a layout you're happy with and each pane is connected to a separate host, you can synchronize the panes such that the input you type on your keyboard is mirrored in all panes.

To synchronize panes, access the tmux command line with Ctrl+B followed by : , and then type setw synchronize-panes .

Now anything you type on your keyboard appears in each pane, and each pane responds accordingly.
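
Synchronization can also be toggled from a shell prompt inside any pane, which is handy for scripting. This is a minimal sketch using setw, the alias for set-window-option:

tmux setw synchronize-panes on    # mirror keyboard input to every pane in the window
tmux setw synchronize-panes off   # go back to typing in one pane at a time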

Download our cheat sheet

It's relatively easy to remember Ctrl+B to invoke tmux features, but the keys that follow can be difficult to remember at first. All built-in tmux keyboard shortcuts are available by pressing Ctrl+B followed by ? (exit the help screen with Q ). However, the help screen can be a little overwhelming for all its options, none of which are organized by task or topic. To help you remember the basic features of tmux, as well as many advanced functions not covered in this article, we've developed a tmux cheatsheet . It's free to download, so get your copy today.


[Jul 29, 2020] Linux Commands- jobs, bg, and fg by Tyler Carrigan

Jul 23, 2020 | www.redhat.com

In this quick tutorial, I want to look at the jobs command and a few of the ways that we can manipulate the jobs running on our systems. In short, controlling jobs lets you suspend and resume processes started in your Linux shell.

Jobs

The jobs command will list all jobs started in the current shell session: active, stopped, or otherwise. Before I explore the command and output, I'll create a job on my system.

I will use the sleep job as it won't change my system in any meaningful way.

[tcarrigan@rhel ~]$ sleep 500
^Z
[1]+  Stopped                 sleep 500

First, I issued the sleep command, and then I received the job number [1]. I then immediately stopped the job by using Ctrl+Z . Next, I ran the jobs command to view the newly created job:

[tcarrigan@rhel ~]$ jobs
[1]+  Stopped                 sleep 500

You can see that I have a single stopped job identified by the job number [1] .

Other options to know for this command include:

Background

Next, I'll resume the sleep job in the background. To do this, I use the bg command. Now, the bg command has a pretty simple syntax, as seen here:

bg [JOB_SPEC]

Where JOB_SPEC can be any of the following:

NOTE : bg and fg operate on the current job if no JOB_SPEC is provided.
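
For illustration, here are a few of the job spec forms Bash accepts, reusing the sleep 500 job from above (the exact job numbers depend on your session):

bg %1        # job number 1
fg %+        # the current (most recent) job; %% means the same thing
fg %-        # the previous job
fg %sleep    # the job whose command line starts with "sleep"
kill %?500   # the job whose command line contains "500"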

I can move this job to the background by using the job number [1] .

[tcarrigan@rhel ~]$ bg %1
[1]+ sleep 500 &

You can see now that I have a single running job in the background.

[tcarrigan@rhel ~]$ jobs
[1]+  Running                 sleep 500 &
Foreground

Now, let's look at how to move a background job into the foreground. To do this, I use the fg command. The command syntax is the same for the foreground command as with the background command.

fg [JOB_SPEC]

Refer to the above bullets for details on JOB_SPEC.

I have started a new sleep in the background:

[tcarrigan@rhel ~]$ sleep 500 &
[2] 5599

Now, I'll move it to the foreground by using the following command:

[tcarrigan@rhel ~]$ fg %2
sleep 500

The fg command has now brought the sleep job back into the foreground, where it occupies my terminal until it finishes or is interrupted.

The end

While I realize that the jobs presented here were trivial, these concepts can be applied to more than just the sleep command. If you run into a situation that requires it, you now have the knowledge to move running or stopped jobs from the foreground to background and back again.

[Jul 29, 2020] 10 Linux commands to know the system - nixCraft

Jul 29, 2020 | www.cyberciti.biz

10 Linux commands to know the system

Open the terminal application and then start typing these commands to know your Linux desktop or cloud server/VM.

1. free – get free and used memory

Are you running out of memory? Use the free command to show the total amount of free and used physical (RAM) and swap memory in the Linux system. It also displays the buffers and caches used by the kernel:
free
# human readable outputs
free -h
# use the cat command to find geeky details
cat /proc/meminfo

Linux display amount of free and used memory in the system
However, the free command will not give information about memory configurations, maximum supported memory by the Linux server , and Linux memory speed . Hence, we must use the dmidecode command:
sudo dmidecode -t memory
Want to determine the amount of video memory under Linux, try:
lspci | grep -i vga
glxinfo | egrep -i 'device|memory'

See " Linux Find Out Video Card GPU Memory RAM Size Using Command Line " and " Linux Check Memory Usage Using the CLI and GUI " for more information.

2. hwinfo – probe for hardware

We can quickly probe for the hardware present in the Linux server or desktop:
# Find detailed info about the Linux box
hwinfo
# Show only a summary #
hwinfo --short
# View all disks #
hwinfo --disk
# Get an overview #
hwinfo --short --block
# Find a particular disk #
hwinfo --disk --only /dev/sda
# Try 4 graphics card ports for monitor data #
hwprobe=bios.ddc.ports=4 hwinfo --monitor
# Limit info to specific devices #
hwinfo --short --cpu --disk --listmd --gfxcard --wlan --printer

hwinfo
Alternatively, you may find the lshw command and inxi command useful to display your Linux hardware information:
sudo lshw -short
inxi -Fxz

inxi
inxi is a system information tool for viewing system configuration and hardware. It shows system hardware, CPU, drivers, Xorg, desktop, kernel, gcc version(s), processes, RAM usage, and a wide variety of other useful information.
3. id – know yourself

Display Linux user and group information for the given USER name. If the user name is omitted, it shows information for the current user:
id

uid=1000(vivek) gid=1000(vivek) groups=1000(vivek),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),115(lpadmin),116(sambashare),998(lxd)

See who is logged on your Linux server:
who
who am i

4. lsblk – list block storage devices

All Linux block devices give buffered access to hardware devices and allow reading and writing blocks as per configuration. Linux block devices have names. For example, /dev/nvme0n1 for NVMe and /dev/sda for SCSI devices such as HDD/SSD. But you don't have to remember them. You can list them easily using the following syntax:
lsblk
# list only #
lsblk -l
# filter out loop devices using the grep command #
lsblk -l | grep -v '^loop'

NAME          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
md0             9:0    0   3.7G  0 raid1 /boot
md1             9:1    0 949.1G  0 raid1 
md1_crypt     253:0    0 949.1G  0 crypt 
nixcraft-swap 253:1    0 119.2G  0 lvm   [SWAP]
nixcraft-root 253:2    0 829.9G  0 lvm   /
nvme1n1       259:0    0 953.9G  0 disk  
nvme1n1p1     259:1    0   953M  0 part  
nvme1n1p2     259:2    0   3.7G  0 part  
nvme1n1p3     259:3    0 949.2G  0 part  
nvme0n1       259:4    0 953.9G  0 disk  
nvme0n1p1     259:5    0   953M  0 part  /boot/efi
nvme0n1p2     259:6    0   3.7G  0 part  
nvme0n1p3     259:7    0 949.2G  0 part
5. lsb_release – Linux distribution information

Want to get distribution-specific information such as the description of the currently installed distribution, the release number, and the code name? Try:
lsb_release -a
No LSB modules are available.

Distributor ID:	Ubuntu
Description:	Ubuntu 20.04.1 LTS
Release:	20.04
Codename:	focal
6. lscpu – display info about the CPUs

The lscpu command gathers and displays CPU architecture information in an easy-to-read format for humans including various CPU bugs:
lscpu

Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   39 bits physical, 48 bits virtual
CPU(s):                          12
On-line CPU(s) list:             0-11
Thread(s) per core:              2
Core(s) per socket:              6
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       GenuineIntel
CPU family:                      6
Model:                           158
Model name:                      Intel(R) Core(TM) i7-9850H CPU @ 2.60GHz
Stepping:                        13
CPU MHz:                         976.324
CPU max MHz:                     4600.0000
CPU min MHz:                     800.0000
BogoMIPS:                        5199.98
Virtualization:                  VT-x
L1d cache:                       192 KiB
L1i cache:                       192 KiB
L2 cache:                        1.5 MiB
L3 cache:                        12 MiB
NUMA node0 CPU(s):               0-11
Vulnerability Itlb multihit:     KVM: Mitigation: Split huge pages
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds:             Mitigation; TSX disabled
Vulnerability Tsx async abort:   Mitigation; TSX disabled
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_g
                                 ood nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes x
                                 save avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep 
                                 bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities

CPUs can be listed using the lshw command too:
sudo lshw -C cpu

7. lstopo – display hardware topology

Want to see the topology of the Linux server or desktop? Try:
lstopo
lstopo-no-graphics

Linux show the topology of the system command
You will see information about:

  1. NUMA memory nodes
  2. shared caches
  3. CPU packages
  4. Processor cores
  5. processor "threads" and more
8. lsusb – list usb devices

We all use USB devices, such as external hard drives and keyboards. Run the lsusb command to display information about USB buses in the Linux system and the devices connected to them.
lsusb
# Want a graphical summary of USB devices connected to the system? #
sudo usbview

usbview
usbview provides a graphical summary of USB devices connected to the system. Detailed information may be displayed by selecting individual devices in the tree display
lspci – list PCI devices

We use the lspci command for displaying information about PCI buses in the system and devices connected to them:
lspci
lspci

9. timedatectl – view current date and time zone

Typically we use the date command to set or get date/time information on the CLI:
date
However, modern Linux distros use the timedatectl command to query and change the system clock and its settings, and to enable or disable time synchronization services (NTPD and co):
timedatectl

               Local time: Sun 2020-07-26 16:31:10 IST
           Universal time: Sun 2020-07-26 11:01:10 UTC
                 RTC time: Sun 2020-07-26 11:01:10    
                Time zone: Asia/Kolkata (IST, +0530)  
System clock synchronized: yes                        
              NTP service: active                     
          RTC in local TZ: no
10. w – who is logged in

Run the w command on Linux to see information about the Linux users currently on the machine, and their processes:

$ w

Conclusion

And that concludes our ten Linux commands for getting to know a system, so you can increase your productivity and solve problems quickly. Let me know about your favorite tool in the comment section below.

[Jul 20, 2020] Direnv - Manage Project-Specific Environment Variables in Linux by Aaron Kili

What is the value of this utility in comparison with the "environment modules" package? Is this a reinvention of the wheel?
It allows a per-directory .envrc file that contains environment variables specific to that directory and simply loads them; we can do the same in bash without installing a new utility of unclear value.
Did the author know about the existence of "environment modules" when he wrote it?
Jul 10, 2020 | www.tecmint.com

direnv is a nifty open-source extension for your shell on a UNIX operating system such as Linux and macOS. It is compiled into a single static executable and supports shells such as bash , zsh , tcsh , and fish .

The main purpose of direnv is to allow for project-specific environment variables without cluttering ~/.profile or related shell startup files. It implements a new way to load and unload environment variables depending on the current directory.

It is used to load 12factor apps (a methodology for building software-as-a-service apps) environment variables, create per-project isolated development environments, and also load secrets for deployment. Additionally, it can be used to build multi-version installation and management solutions similar to rbenv , pyenv , and phpenv .

So How Does direnv Works?

Before the shell loads a command prompt, direnv checks for the existence of a .envrc file in the current directory (which you can display using the pwd command ) and in parent directories. The check is swift and is not noticeable at each prompt.

Once it finds the .envrc file with the appropriate permissions, it loads it into a bash sub-shell and it captures all exported variables and makes them available to the current shell.
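
For this to happen at all, direnv has to be hooked into your shell first. Per the direnv documentation, that is a single line in your shell startup file; the bash version is shown here as an example, and other shells have equivalent hooks:

# add to ~/.bashrc, then start a new shell
eval "$(direnv hook bash)"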

... ... ...

How to Use direnv in Linux Shell

To demonstrate how direnv works, we will create a new directory called tecmint_projects and move into it.

$ mkdir ~/tecmint_projects
$ cd tecmint_projects/

Next, let's check a variable called TEST_VARIABLE on the command line; when it is echoed , the value should be empty because it has not been set yet:

$ echo $TEST_VARIABLE

Now we will create a new .envrc file that contains Bash code that will be loaded by direnv . We add the line " export TEST_VARIABLE=tecmint " to it using the echo command and the output redirection character (>) :

$ echo export TEST_VARIABLE=tecmint > .envrc

By default, the security mechanism blocks the loading of the .envrc file. Since we know it is a safe file, we need to approve its content by running the following command:

$ direnv allow .

Now that the content of .envrc file has been allowed to load, let's check the value of TEST_VARIABLE that we set before:

$ echo $TEST_VARIABLE

When we exit the tecmint_projects directory, direnv will unload the file, and if we check the value of TEST_VARIABLE once more, it should be empty:

$ cd ..
$ echo $TEST_VARIABLE
Demonstration of How direnv Works in Linux
Demonstration of How direnv Works in Linux

Every time you move into the tecmint_projects directory, the .envrc file will be loaded as shown in the following screenshot:

$ cd tecmint_projects/
Loading envrc File in a Directory
Loading envrc File in a Directory

To revoke the authorization of a given .envrc , use the deny command.

$ direnv deny .			#in current directory
OR
$ direnv deny /path/to/.envrc

For more information and usage instructions, see the direnv man page:

$ man direnv

Additionally, direnv also ships with a stdlib ( direnv-stdlib ) that comes with several functions that allow you to easily add new directories to your PATH and do much more.
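
As a small, hypothetical example of what that looks like, a project .envrc using the stdlib might contain something like the following (the variable values are placeholders; remember to run direnv allow . after editing):

# .envrc
PATH_add bin                  # prepend ./bin to PATH while inside the project
export TEST_VARIABLE=tecmint
export APP_ENV=development    # example value only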

[Jul 14, 2020] Important Linux -proc filesystem files you need to know - Enable Sysadmin

Jul 14, 2020 | www.redhat.com

The /proc files I find most valuable, especially for inherited system discovery, are:

And the most valuable of those are cpuinfo and meminfo .

Again, I'm not stating that other files don't have value, but these are the ones I've found that have the most value to me. For example, the /proc/uptime file gives you the system's uptime in seconds. For me, that's not particularly valuable. However, if I want that information, I use the uptime command that also gives me a more readable version of /proc/loadavg as well.

By comparison:

$ cat /proc/uptime
46901.13 46856.69

$ cat /proc/loadavg 
0.00 0.01 0.03 2/111 2039

$ uptime
 00:56:13 up 13:01,  2 users,  load average: 0.00, 0.01, 0.03

I think you get the idea.

/proc/cmdline

This file shows the parameters passed to the kernel at the time it is started.

$ cat /proc/cmdline

BOOT_IMAGE=/vmlinuz-3.10.0-1062.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto spectre_v2=retpoline rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_US.UTF-8

The value of this information is in how the kernel was booted because any switches or special parameters will be listed here, too. And like all information under /proc , it can be found elsewhere and usually with better formatting, but /proc files are very handy when you can't remember the command or don't want to grep for something.
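
For example, to make the boot parameters easier to scan, or to check for one particular switch, something like this works (crashkernel is just one of the parameters shown above):

tr ' ' '\n' < /proc/cmdline                  # one parameter per line
grep -o 'crashkernel=[^ ]*' /proc/cmdline    # show a single parameter, if present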

/proc/cpuinfo

The /proc/cpuinfo file is the first file I check when connecting to a new system. I want to know the CPU make-up of a system and this file tells me everything I need to know.

$ cat /proc/cpuinfo 

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 142
model name      : Intel(R) Core(TM) i5-7360U CPU @ 2.30GHz
stepping        : 9
cpu MHz         : 2303.998
cache size      : 4096 KB
physical id     : 0
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 22
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc eagerfpu pni pclmulqdq monitor ssse3 cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx rdrand hypervisor lahf_lm abm 3dnowprefetch fsgsbase avx2 invpcid rdseed clflushopt md_clear flush_l1d
bogomips        : 4607.99
clflush size    : 64
cache_alignment : 64
address sizes   : 39 bits physical, 48 bits virtual
power management:

This is a virtual machine and only has one vCPU. If your system contains more than one CPU, the CPU numbering begins at 0 for the first CPU.
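
When a box has many logical CPUs, a couple of quick one-liners against /proc/cpuinfo are often all I need (output will vary by system):

grep -c '^processor' /proc/cpuinfo                  # count logical CPUs
grep '^model name' /proc/cpuinfo | sort | uniq -c   # CPU model(s) and how many of each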

/proc/meminfo

The /proc/meminfo file is the second file I check on a new system. It gives me a general and a specific look at a system's memory allocation and usage.

$ cat /proc/meminfo 
MemTotal:        1014824 kB
MemFree:          643608 kB
MemAvailable:     706648 kB
Buffers:            1072 kB
Cached:           185568 kB
SwapCached:            0 kB
Active:           187568 kB
Inactive:          80092 kB
Active(anon):      81332 kB
Inactive(anon):     6604 kB
Active(file):     106236 kB
Inactive(file):    73488 kB
Unevictable:           0 kB
Mlocked:               0 kB
***Output truncated***

I think most sysadmins either use the free or the top command to pull some of the data contained here. The /proc/meminfo file gives me a quick memory overview that I like and can redirect to another file as a snapshot.
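
For example, something along these lines gives a quick summary and a timestamped snapshot for later comparison (the file name is arbitrary):

grep -E '^(MemTotal|MemFree|MemAvailable|SwapTotal|SwapFree):' /proc/meminfo
cat /proc/meminfo > /tmp/meminfo-$(date +%Y%m%d-%H%M%S).txt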

/proc/version

The /proc/version file provides more information than the related uname -a command does. Here are the two compared:

$ cat /proc/version
Linux version 3.10.0-1062.el7.x86_64 (mockbuild@kbuilder.bsys.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC) ) #1 SMP Wed Aug 7 18:08:02 UTC 2019

$ uname -a
Linux centos7 3.10.0-1062.el7.x86_64 #1 SMP Wed Aug 7 18:08:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

Usually, the uname -a command is sufficient to give you kernel version info but for those of you who are developers or who are ultra-concerned with details, the /proc/version file is there for you.

Wrapping up

The /proc filesystem has a ton of valuable information available to system administrators who want a convenient, non-command way of getting at raw system info. As I stated earlier, there are other ways to display the information in /proc . Additionally, some of the /proc info isn't what you'd want to use for system assessment. For example, use commands such as vmstat 5 5 or iostat 5 5 to get a better picture of system performance rather than reading one of the available /proc files.

[Jul 14, 2020] Sysadmin tales- How to keep calm and not panic when things break by Glen Newell

Jul 10, 2020 | www.redhat.com

When an incident occurs, resist the urge to freak out. Instead, use these tips to help you keep your cool and find a solution.

I was working on several projects simultaneously for a small company that had been carved out of a larger one that had gone out of business. The smaller company had inherited some of the bigger company's infrastructure, and all the headaches along with it. That day, I had some additional consultants working with me on a project to migrate email service from a large proprietary onsite cluster to a cloud provider, while at the same time, I was working on reconfiguring a massive storage array.

At some point, I clicked the wrong button. All of a sudden, I started getting calls. The CIO and the consultants were standing in front of my desk. The email servers were completely offline -- they responded, but could not access the backing storage. I didn't know it yet, but I had deleted the storage pool for the active email servers.

My vision blurred into a tunnel, and my stomach fell into a bottomless pit. I struggled to breathe. I did my best to maintain a poker face as the executives and consultants watched impatiently. I scanned logs and messages looking for clues. I ran tests on all the components to find the source of the issue and came up with nothing. The data seemed to be gone, and panic was setting in.

I pushed back from the desk and excused myself to use the restroom. Closing and latching the door behind me, I contemplated my fate for a moment, then splashed cold water on my face and took a deep breath. Then it dawned on me: earlier, I had set up an active mirror of that storage pool. The data was all there; I just needed to reconnect it.

I returned to my desk and couldn't help a bit of a smirk. A couple of commands, a couple of clicks, and a sip of coffee. About five minutes of testing, and I could say, "Sorry, guys. Should be good now." The whole thing had happened in about 30 minutes.

We've all been there

Everyone makes mistakes, even the most senior and venerable engineers and systems administrators. We're all human. It just so happens that, as a sysadmin, a small mistake in a moment can cause very visible problems and, yes, panic. This is normal, though. What separates the hero from the unemployed in that moment can be just a few simple things.

When an incident occurs, focusing on who's at fault can be tempting; blame is something we know how to do and can do something about, and it can even offer some relief if we can tell ourselves it's not our fault. But in fact, blame accomplishes nothing and can be counterproductive in a moment of crisis -- it can distract us from finding a solution to the problem, and create even more stress.

Backups, backups, backups

This is just one of the times when having a backup saved the day for me, and for a client. Every sysadmin I've ever worked with will tell you the same thing -- always have a backup. Do regular backups. Make backups of configurations you are working on. Make a habit of creating a backup as the first step in any project. There are some great articles here on Enable Sysadmin about the various things you can do to protect yourself.

Another good practice is to never work on production systems until you have tested the change. This may not always be possible, but if it is, the extra effort and time will be well worth it for the rare occasions when you have an unexpected result, so you can avoid the panic of wondering where you might have saved your most recent resume. Having a plan and being prepared can go a long way to avoiding those very stressful situations.

Breathe in, breathe out

The panic response in humans is related to the "fight or flight" reflex, which served our ancestors so well. It's a really useful resource for avoiding saber tooth tigers (and angry CFOs), but not so much for understanding and solving complex technical problems. Understanding that it's normal but not really helpful, we can recognize it and find a way to overcome it in the moment.

The simplest way we can tame the impulse to blackout and flee is to take a deep breath (or several). Studies have shown that simple breathing exercises and meditation can improve our general outlook and ability to focus on a specific task. There is also evidence that temperature changes can make a difference; something as simple as a splash of water on the face or an ice-cold beverage can calm a panic. These things work for me.

Walk the path of troubleshooting, one step at a time

Once we have convinced ourselves that the world is not going to end immediately, we can focus on solving the problem. Take the situation one element, one step at a time to find what went wrong, then take that and apply the solution(s) systematically. Again, it's important to focus on the problem and solution in front of you rather than worrying about things you can't do anything about right now or what might happen later. Remember, blame is not helpful, and that includes blaming yourself.

Most often, when I focus on the problem, I find that I forget to panic, and I can do even better work on the solution. Many times, I have found solutions I wouldn't have seen or thought of otherwise in this state.

Take five

Another thing that's easy to forget is that, when you've been working on a problem, it's important to give yourself a break. Drink some water. Take a short walk. Rest your brain for a couple of minutes. Hunger, thirst, and fatigue can lead to less clear thinking and, you guessed it, panic.

Time to face the music

My last piece of advice -- though certainly not the least important -- is, if you are responsible for an incident, be honest about what happened. This will benefit you for both the short and long term.

During the early years of the space program, the directors and engineers at NASA established a routine of getting together and going over what went wrong and what and how to improve for the next time. The same thing happens in the military, emergency management, and healthcare fields. It's also considered good agile/DevOps practice. Some of the smartest, highest-strung engineers, administrators, and managers I've known and worked with -- people with millions of dollars and thousands of lives in their area of responsibility -- have insisted on the importance of learning lessons from mistakes and incidents. It's a mark of a true professional to own up to mistakes and work to improve.

It's hard to lose face, but not only will your colleagues appreciate you taking responsibility and working to improve the team, but I promise you will rest better and be able to manage the next problem better if you look at these situations as learning opportunities.

Accidents and mistakes can't ever be avoided entirely, but hopefully, you will find some of this advice useful the next time you face an unexpected challenge.

[ Want to test your sysadmin skills? Take a skills assessment today . ]

[Jul 14, 2020] Linux stories- When backups saved the day - Enable Sysadmin

Jul 14, 2020 | www.redhat.com

I set up a backup approach that software vendors refer to as instant restore, shadow restore, preemptive restore, or a similar term. We ran incremental backup jobs every hour and restored the backups in the background to a new virtual machine. Each full hour, we had a system ready that was four hours back in time and just needed to be finished. So if I chose to restore the incremental from one hour ago, it would take less time than a complete system restore because only the small increments had to be restored to the almost-ready virtual machine.

And the effort paid off

One day, I was on vacation, having a barbecue and some beer, when I got a call from my colleague telling me that the terminal server with the ERP application was broken due to a failed update and the guy who ran the update forgot to take a snapshot first.

The only thing I needed to tell my colleague was to shut down the broken machine, find the UI of our backup/restore system, and then identify the restore job. Finally, I told him how to choose the timestamp from the last four hours when the restore should finish. The restore finished 30 minutes later, and the system was ready to be used again. We were back in action after a total of 30 minutes, and only the work from the last two hours or so was lost! Awesome! Now, back to vacation.

[Jul 12, 2020] Testing your Bash script by David Both

Dec 21, 2019 | opensource.com


In the first article in this series, you created your first, very small, one-line Bash script and explored the reasons for creating shell scripts. In the second article , you began creating a fairly simple template that can be a starting point for other Bash programs and began testing it. In the third article , you created and used a simple Help function and learned about using functions and how to handle command-line options such as -h .

This fourth and final article in the series gets into variables and initializing them as well as how to do a bit of sanity testing to help ensure the program runs under the proper conditions. Remember, the objective of this series is to build working code that will be used for a template for future Bash programming projects. The idea is to make getting started on new programming projects easy by having common elements already available in the template.

Variables

The Bash shell, like all programming languages, can deal with variables. A variable is a symbolic name that refers to a specific location in memory that contains a value of some sort. The value of a variable is changeable, i.e., it is variable. If you are not familiar with using variables, read my article How to program with Bash: Syntax and tools before you go further.

Done? Great! Let's now look at some good practices when using variables.

I always set initial values for every variable used in my scripts. You can find this in your template script immediately after the procedures as the first part of the main program body, before it processes the options. Initializing each variable with an appropriate value can prevent errors that might occur with uninitialized variables in comparison or math operations. Placing this list of variables in one place allows you to see all of the variables that are supposed to be in the script and their initial values.

Your little script has only a single variable, $option , so far. Set it by inserting the following lines as shown:


# Main program #

# Initialize variables
option=""

# Process the input options. Add options as needed. #

Test this to ensure that everything works as it should and that nothing has broken as the result of this change.

Constants

Constants are variables, too -- at least they should be. Use variables wherever possible in command-line interface (CLI) programs instead of hard-coded values. Even if you think you will use a particular value (such as a directory name, a file name, or a text string) just once, create a variable and use it where you would have placed the hard-coded name.

For example, the message printed as part of the main body of the program is a string literal, echo "Hello world!" . Change that to a variable. First, add the following statement to the variable initialization section:

Msg="Hello world!"

And now change the last line of the program from:

echo "Hello world!"

to:

echo "$Msg"

Test the results.

Sanity checks

Sanity checks are simply tests for conditions that need to be true in order for the program to work correctly, such as: the program must be run as the root user, or it must run on a particular distribution and release of that distro. Add a check for root as the running user in your simple program template.

Testing that the root user is running the program is easy because a program runs as the user that launches it.

The id command can be used to determine the numeric user ID (UID) the program is running under. It provides several bits of information when it is used without any options:

[student@testvm1 ~]$ id
uid=1001(student) gid=1001(student) groups=1001(student),5000(dev)

Using the -u option returns just the user's UID, which is easily usable in your Bash program:

[student@testvm1 ~]$ id -u
1001
[student@testvm1 ~]$

Add the following function to the program. I added it after the Help procedure, but you can place it anywhere in the procedures section. The logic is that if the UID is not zero, which is always the root user's UID, the program exits:

################################################################################
# Check for root. #
################################################################################
CheckRoot ()
{
if [ `id -u` != 0 ]
then
echo "ERROR: You must be root user to run this program"
exit
fi
}

Now, add a call to the CheckRoot procedure just before the variable's initialization. Test this, first running the program as the student user:

[student@testvm1 ~]$ ./hello
ERROR: You must be root user to run this program
[student@testvm1 ~]$

then as the root user:

[root@testvm1 student]# ./hello
Hello world!
[root@testvm1 student]#

You may not always need this particular sanity test, so comment out the call to CheckRoot but leave all the code in place in the template. This way, all you need to do to use that code in a future program is to uncomment the call.

The code

After making the changes outlined above, your code should look like this:

#!/usr/bin/bash
################################################################################
# scriptTemplate #
# #
# Use this template as the beginning of a new program. Place a short #
# description of the script here. #
# #
# Change History #
# 11/11/2019 David Both Original code. This is a template for creating #
# new Bash shell scripts. #
# Add new history entries as needed. #
# #
# #
################################################################################
################################################################################
################################################################################
# #
# Copyright (C) 2007, 2019 David Both #
# LinuxGeek46@both.org #
# #
# This program is free software; you can redistribute it and/or modify #
# it under the terms of the GNU General Public License as published by #
# the Free Software Foundation; either version 2 of the License, or #
# (at your option) any later version. #
# #
# This program is distributed in the hope that it will be useful, #
# but WITHOUT ANY WARRANTY; without even the implied warranty of #
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the #
# GNU General Public License for more details. #
# #
# You should have received a copy of the GNU General Public License #
# along with this program; if not, write to the Free Software #
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA #
# #
################################################################################
################################################################################
################################################################################

################################################################################
# Help #
################################################################################
Help ()
{
# Display Help
echo "Add description of the script functions here."
echo
echo "Syntax: scriptTemplate [-g|h|v|V]"
echo "options:"
echo "g Print the GPL license notification."
echo "h Print this Help."
echo "v Verbose mode."
echo "V Print software version and exit."
echo
}

################################################################################
# Check for root. #
################################################################################
CheckRoot ()
{
# If we are not running as root we exit the program
if [ `id -u` != 0 ]
then
echo "ERROR: You must be root user to run this program"
exit
fi
}

################################################################################
################################################################################
# Main program #
################################################################################
################################################################################

################################################################################
# Sanity checks #
################################################################################
# Are we running as root?
# CheckRoot

# Initialize variables
option=""
Msg="Hello world!"
################################################################################
# Process the input options. Add options as needed. #
################################################################################
# Get the options
while getopts ":h" option; do
case $option in
h ) # display Help
Help
exit ;;
\? ) # incorrect option
echo "Error: Invalid option"
exit ;;
esac
done

echo " $Msg " A final exercise

You probably noticed that the Help function in your code refers to features that are not in the code. As a final exercise, figure out how to add those functions to the code template you created.
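
One possible way to start, sketched here rather than prescribed (the version string and the verbose flag are placeholders you would adapt): extend the getopts string and the case statement so that each documented option actually does something.

# Get the options
while getopts ":ghvV" option; do
   case $option in
      g) # Print the GPL license notification
         echo "This program is distributed under the terms of the GNU GPL v2."
         exit;;
      h) # display Help
         Help
         exit;;
      v) # Verbose mode
         verbose=1;;
      V) # Print software version and exit
         echo "scriptTemplate version 0.1"
         exit;;
     \?) # incorrect option
         echo "Error: Invalid option"
         exit;;
   esac
done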

Summary

In this article, you created a couple of functions to perform a sanity test for whether your program is running as root. Your program is getting a little more complex, so testing is becoming more important and requires more test paths to be complete.

This series looked at a very minimal Bash program and how to build a script up a bit at a time. The result is a simple template that can be the starting point for other, more useful Bash scripts and that contains useful elements that make it easy to start new scripts.

By now, you get the idea: Compiled programs are necessary and fill a very important need. But for sysadmins, there is always a better way. Always use shell scripts to meet your job's automation needs. Shell scripts are open; their content and purpose are knowable. They can be readily modified to meet different requirements. I have never found anything that I need to do in my sysadmin role that cannot be accomplished with a shell script.

What you have created so far in this series is just the beginning. As you write more Bash programs, you will find more bits of code that you use frequently and should be included in your program template.


[Jul 12, 2020] Creating a Bash script template by David Both

Dec 19, 2019 | opensource.com

In the first article in this series, you created a very small, one-line Bash script and explored the reasons for creating shell scripts and why they are the most efficient option for the system administrator, rather than compiled programs.

In this second article, you will begin creating a Bash script template that can be used as a starting point for other Bash scripts. The template will ultimately contain a Help facility, a licensing statement, a number of simple functions, and some logic to deal with those options and others that might be needed for the scripts that will be based on this template.

Why create a template?

Like automation in general, the idea behind creating a template is to be the " lazy sysadmin ." A template contains the basic components that you want in all of your scripts. It saves time compared to adding those components to every new script and makes it easy to start a new script.

Although it can be tempting to just throw a few command-line Bash statements together into a file and make it executable, that can be counterproductive in the long run. A well-written and well-commented Bash program with a Help facility and the capability to accept command-line options provides a good starting point for sysadmins who maintain the program, which includes the programs that you write and maintain.

The requirements

You should always create a set of requirements for every project you do. This includes scripts, even if it is a simple list with only two or three items on it. I have been involved in many projects that either failed completely or failed to meet the customer's needs, usually due to the lack of a requirements statement or a poorly written one.

The requirements for this Bash template are pretty simple:

  1. Create a template that can be used as the starting point for future Bash programming projects.
  2. The template should follow standard Bash programming practices.
  3. It must include:
    • A heading section that can be used to describe the function of the program and a changelog
    • A licensing statement
    • A section for functions
    • A Help function
    • A function to test whether the program user is root
    • A method for evaluating command-line options
The basic structure

A basic Bash script has three sections. Bash has no way to delineate sections, but the boundaries between the sections are implicit.

That is all there is -- just three sections in the structure of any Bash program.

Leading comments

I always add more than this for various reasons. First, I add a couple of sections of comments immediately after the shebang. These comment sections are optional, but I find them very helpful.

The first comment section is the program name and description and a change history. I learned this format while working at IBM, and it provides a method of documenting the long-term development of the program and any fixes applied to it. This is an important start in documenting your program.

The second comment section is a copyright and license statement. I use GPLv2, and this seems to be a standard statement for programs licensed under GPLv2. If you use a different open source license, that is fine, but I suggest adding an explicit statement to the code to eliminate any possible confusion about licensing. Scott Peterson's article The source code is the license helps explain the reasoning behind this.

So now the script looks like this:

#!/bin/bash
################################################################################
# scriptTemplate #
# #
# Use this template as the beginning of a new program. Place a short #
# description of the script here. #
# #
# Change History #
# 11/11/2019 David Both Original code. This is a template for creating #
# new Bash shell scripts. #
# Add new history entries as needed. #
# #
# #
################################################################################
################################################################################
################################################################################
# #
# Copyright (C) 2007, 2019 David Both #
# LinuxGeek46@both.org #
# #
# This program is free software; you can redistribute it and/or modify #
# it under the terms of the GNU General Public License as published by #
# the Free Software Foundation; either version 2 of the License, or #
# (at your option) any later version. #
# #
# This program is distributed in the hope that it will be useful, #
# but WITHOUT ANY WARRANTY; without even the implied warranty of #
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the #
# GNU General Public License for more details. #
# #
# You should have received a copy of the GNU General Public License #
# along with this program; if not, write to the Free Software #
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA #
# #
################################################################################
################################################################################
################################################################################

echo "hello world!"

Run the revised program to verify that it still works as expected.

About testing

Now is a good time to talk about testing.

" There is always one more bug."
-- Lubarsky's Law of Cybernetic Entomology

Lubarsky -- whoever that might be -- is correct. You can never find all the bugs in your code. For every bug I find, there always seems to be another that crops up, usually at a very inopportune time.

Testing is not just about programs. It is also about verification that problems -- whether caused by hardware, software, or the seemingly endless ways users can find to break things -- that are supposed to be resolved actually are. Just as important, testing is also about ensuring that the code is easy to use and the interface makes sense to the user.

Following a well-defined process when writing and testing shell scripts can contribute to consistent and high-quality results. My process is simple:

  1. Create a simple test plan.
  2. Start testing right at the beginning of development.
  3. Perform a final test when the code is complete.
  4. Move to production and test more.
The test plan

There are lots of different formats for test plans. I have worked with the full range -- from having it all in my head; to a few notes jotted down on a sheet of paper; and all the way to a complex set of forms that require a full description of each test, which functional code it would test, what the test would accomplish, and what the inputs and results should be.

Speaking as a sysadmin who has been (but is not now) a tester, I try to take the middle ground. Having at least a short written test plan will ensure consistency from one test run to the next. How much detail you need depends upon how formal your development and test functions are.

The sample test plan documents I found using Google were complex and intended for large organizations with very formal development and test processes. Although those test plans would be good for people with "test" in their job title, they do not apply well to sysadmins' more chaotic and time-dependent working conditions. As in most other aspects of the job, sysadmins need to be creative. So here is a short list of things to consider including in your test plan. Modify it to suit your needs:

This list should give you some ideas for creating your test plans. Most sysadmins should keep it simple and fairly informal.

Test early -- test often

I always start testing my shell scripts as soon as I complete the first portion that is executable. This is true whether I am writing a short command-line program or a script that is an executable file.

I usually start creating new programs with the shell script template. I write the code for the Help function and test it. This is usually a trivial part of the process, but it helps me get started and ensures that things in the template are working properly at the outset. At this point, it is easy to fix problems with the template portions of the script or to modify it to meet needs that the standard template does not.

Once the template and Help function are working, I move on to creating the body of the program by adding comments to document the programming steps required to meet the program specifications. Now I start adding code to meet the requirements stated in each comment. This code will probably require adding variables that are initialized in that section of the template -- which is now becoming a shell script.

This is where testing is more than just entering data and verifying the results. It takes a bit of extra work. Sometimes I add a command that simply prints the intermediate result of the code I just wrote and verify that. For more complex scripts, I add a -t option for "test mode." In this case, the internal test code executes only when the -t option is entered on the command line.
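
As a minimal sketch of that technique (the variable name TestMode and the DoTheWork function are hypothetical, not part of the article's template), the -t option simply flips a flag that the rest of the script checks before printing intermediate results:

#!/usr/bin/bash
# Sketch of a -t "test mode" switch; TestMode and DoTheWork are invented names.
TestMode=0

while getopts ":t" option; do
   case $option in
      t) # enable internal test code
         TestMode=1;;
   esac
done

DoTheWork()
{
   # Real processing would go here.
   Result=$(( 2 + 2 ))
   if [ $TestMode -eq 1 ]
   then
      # Print the intermediate result only when -t was given.
      echo "TEST: intermediate result is $Result"
   fi
}

DoTheWork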

Final testing

After the code is complete, I go back to do a complete test of all the features and functions using known inputs to produce specific outputs. I also test some random inputs to see if the program can handle unexpected input.

Final testing is intended to verify that the program is functioning essentially as intended. A large part of the final test is to ensure that functions that worked earlier in the development cycle have not been broken by code that was added or changed later in the cycle.

If you have been testing the script as you add new code to it, you may think there should not be any surprises during the final test. Wrong! There are always surprises during final testing. Always. Expect those surprises, and be ready to spend time fixing them. If there were never any bugs discovered during final testing, there would be no point in doing a final test, would there?

Testing in production

Huh -- what?

"Not until a program has been in production for at least six months will the most harmful error be discovered."
-- Troutman's Programming Postulates

Yes, testing in production is now considered normal and desirable. Having been a tester myself, this seems reasonable. "But wait! That's dangerous," you say. My experience is that it is no more dangerous than extensive and rigorous testing in a dedicated test environment. In some cases, there is no choice because there is no test environment -- only production.

Sysadmins are no strangers to the need to test new or revised scripts in production. Anytime a script is moved into production, that becomes the ultimate test. The production environment constitutes the most critical part of that test. Nothing that testers can dream up in a test environment can fully replicate the true production environment.

The allegedly new practice of testing in production is just the recognition of what sysadmins have known all along. The best test is production -- so long as it is not the only test.

Fuzzy testing

This is another of those buzzwords that initially caused me to roll my eyes. Its essential meaning is simple: have someone bang on the keys until something happens, and see how well the program handles it. But there really is more to it than that.

Fuzzy testing is a bit like the time my son broke the code for a game in less than a minute with random input. That pretty much ended my attempts to write games for him.

Most test plans utilize very specific input that generates a specific result or output. Regardless of whether the test defines a positive or negative outcome as a success, it is still controlled, and the inputs and results are specified and expected, such as a specific error message for a specific failure mode.

Fuzzy testing is about dealing with randomness in all aspects of the test, such as starting conditions, very random and unexpected input, random combinations of options selected, low memory, high levels of CPU contending with other programs, multiple instances of the program under test, and any other random conditions that you can think of to apply to the tests.

I try to do some fuzzy testing from the beginning. If the Bash script cannot deal with significant randomness in its very early stages, then it is unlikely to get better as you add more code. This is a good time to catch these problems and fix them while the code is relatively simple. A bit of fuzzy testing at each stage is also useful in locating problems before they get masked by even more code.

After the code is completed, I like to do some more extensive fuzzy testing. Always do some fuzzy testing. I have certainly been surprised by some of the results. It is easy to test for the expected things, but users do not usually do the expected things with a script.
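
As a crude sketch of what even a little fuzzing can look like (this targets the hello script used in this series; treating any exit code above 1 as "surprising" is an arbitrary choice for the sketch), feed the program random arguments and random standard input and flag anything unexpected:

# Hammer ./hello with random arguments and random stdin 100 times; report
# any exit code above 1 (an arbitrary threshold for this sketch).
for i in $(seq 1 100)
do
   RandomArg=$(head -c 8 /dev/urandom | tr -dc 'a-zA-Z0-9-')
   head -c 64 /dev/urandom | ./hello "$RandomArg" > /dev/null 2>&1
   rc=$?
   if [ $rc -gt 1 ]
   then
      echo "Unexpected exit code $rc for argument: $RandomArg"
   fi
done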

Previews of coming attractions

This article accomplished a little in the way of creating a template, but it mostly talked about testing. This is because testing is a critical part of creating any kind of program. In the next article in this series, you will add a basic Help function, along with some code to detect and act on options such as -h, to your Bash script template.
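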

Resources

This series of articles is partially based on Volume 2, Chapter 10 of David Both's three-part Linux self-study course, Using and Administering Linux -- Zero to SysAdmin .

[Jul 12, 2020] Testing Bash with BATS by Darin London

Feb 21, 2019 | opensource.com


Software developers writing applications in languages such as Java, Ruby, and Python have sophisticated libraries to help them maintain their software's integrity over time. They create tests that run applications through a series of executions in structured environments to ensure all of their software's aspects work as expected.

These tests are even more powerful when they're automated in a continuous integration (CI) system, where every push to the source repository causes the tests to run, and developers are immediately notified when tests fail. This fast feedback increases developers' confidence in the functional integrity of their applications.

The Bash Automated Testing System ( BATS ) enables developers writing Bash scripts and libraries to apply the same practices used by Java, Ruby, Python, and other developers to their Bash code.

Installing BATS

The BATS GitHub page includes installation instructions. There are two BATS helper libraries that provide more powerful assertions or allow overrides to the Test Anything Protocol ( TAP ) output format used by BATS. These can be installed in a standard location and sourced by all scripts. It may be more convenient to include a complete version of BATS and its helper libraries in the Git repository for each set of scripts or libraries being tested. This can be accomplished using the git submodule system.

The following commands will install BATS and its helper libraries into the test directory in a Git repository.

git submodule init
git submodule add https://github.com/sstephenson/bats test/libs/bats
git submodule add https://github.com/ztombol/bats-assert test/libs/bats-assert
git submodule add https://github.com/ztombol/bats-support test/libs/bats-support
git add .
git commit -m 'installed bats'

To clone a Git repository and install its submodules at the same time, use the --recurse-submodules flag to git clone.

Each BATS test script must be executed by the bats executable. If you installed BATS into your source code repo's test/libs directory, you can invoke the test with:

./test/libs/bats/bin/bats <path to test script>

Alternatively, add the following to the beginning of each of your BATS test scripts:

#!/usr/bin/env ./test/libs/bats/bin/bats
load 'libs/bats-support/load'
load 'libs/bats-assert/load'

and chmod +x <path to test script> . This will a) make them executable with the BATS installed in ./test/libs/bats and b) include these helper libraries. BATS test scripts are typically stored in the test directory and named for the script being tested, but with the .bats extension. For example, a BATS script that tests bin/build should be called test/build.bats .

You can also run an entire set of BATS test files by passing a filename pattern to BATS, e.g., ./test/libs/bats/bin/bats test/*.bats .

Organizing libraries and scripts for BATS coverage

Bash scripts and libraries must be organized in a way that efficiently exposes their inner workings to BATS. In general, library functions and shell scripts that run many commands when they are called or executed are not amenable to efficient BATS testing.

For example, build.sh is a typical script that many people write. It is essentially a big pile of code. Some might even put this pile of code in a function in a library. But it's impossible to run a big pile of code in a BATS test and cover all possible types of failures it can encounter in separate test cases. The only way to test this pile of code with sufficient coverage is to break it into many small, reusable, and, most importantly, independently testable functions.

It's straightforward to add more functions to a library. An added benefit is that some of these functions can become surprisingly useful in their own right. Once you have broken your library function into lots of smaller functions, you can source the library in your BATS test and run the functions as you would any other command to test them.
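
As a hedged sketch of what that looks like in practice (the library path lib/strings.bash and the function body below are illustrative inventions, not code from the article), a small function split out of a larger script can be sourced and exercised directly in a test:

# lib/strings.bash -- hypothetical library holding one small, testable function
join_string_by() {
  # Join all remaining arguments with the delimiter given as the first argument.
  local delimiter=$1
  shift
  local joined=$1
  shift
  local part
  for part in "$@"; do
    joined="${joined}${delimiter}${part}"
  done
  echo "$joined"
}

A matching test then sources the library and runs the function like any other command (assuming BATS is installed under test/libs as described above and the test is run from the repository root):

#!/usr/bin/env ./test/libs/bats/bin/bats
# test/strings.bats -- sketch of a test for the hypothetical library above
load 'libs/bats-support/load'
load 'libs/bats-assert/load'

@test ".join_string_by joins its arguments with the delimiter" {
  source lib/strings.bash
  run join_string_by , a b c
  assert_success
  assert_output "a,b,c"
}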

Bash scripts must also be broken down into multiple functions, which the main part of the script should call when the script is executed. In addition, there is a very useful trick to make it much easier to test Bash scripts with BATS: Take all the code that is executed in the main part of the script and move it into a function, called something like run_main . Then, add the following to the end of the script:

if [[ " ${BASH_SOURCE[0]} " == " ${0} " ]]
then
run_main
fi

This bit of extra code does something special. It makes the script behave differently when it is executed as a script than when it is brought into the environment with source . This trick enables the script to be tested the same way a library is tested, by sourcing it and testing the individual functions. For example, here is build.sh refactored for better BATS testability .

Writing and running tests

As mentioned above, BATS is a TAP-compliant testing framework with a syntax and output that will be familiar to those who have used other TAP-compliant testing suites, such as JUnit, RSpec, or Jest. Its tests are organized into individual test scripts. Test scripts are organized into one or more descriptive @test blocks that describe the unit of the application being tested. Each @test block will run a series of commands that prepares the test environment, runs the command to be tested, and makes assertions about the exit and output of the tested command. Many assertion functions are imported with the bats , bats-assert , and bats-support libraries, which are loaded into the environment at the beginning of the BATS test script. Here is a typical BATS test block:

@ test "requires CI_COMMIT_REF_SLUG environment variable" {
unset CI_COMMIT_REF_SLUG
assert_empty " ${CI_COMMIT_REF_SLUG} "
run some_command
assert_failure
assert_output --partial "CI_COMMIT_REF_SLUG"
}

If a BATS script includes setup and/or teardown functions, they are automatically executed by BATS before and after each test block runs. This makes it possible to create environment variables, test files, and do other things needed by one or all tests, then tear them down after each test runs. Build.bats is a full BATS test of our newly formatted build.sh script. (The mock_docker command in this test will be explained below, in the section on mocking/stubbing.)
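
As a minimal sketch (the file name and variable below are made up, and it assumes the bats-assert loads shown earlier in this article), setup and teardown can stage a temporary input file for every test:

setup() {
  # Runs before every @test block: create a scratch input file.
  export TEST_INPUT_FILE="${BATS_TMPDIR}/sample_input.txt"
  echo "sample input" > "${TEST_INPUT_FILE}"
}

teardown() {
  # Runs after every @test block: clean up the scratch file.
  rm -f "${TEST_INPUT_FILE}"
}

@test "reads the staged sample input" {
  run cat "${TEST_INPUT_FILE}"
  assert_success
  assert_output "sample input"
}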

When the test script runs, BATS uses exec to run each @test block as a separate subprocess. This makes it possible to export environment variables and even functions in one @test without affecting other @test s or polluting your current shell session. The output of a test run is a standard format that can be understood by humans and parsed or manipulated programmatically by TAP consumers. Here is an example of the output for the CI_COMMIT_REF_SLUG test block when it fails:

 ✗ requires CI_COMMIT_REF_SLUG environment variable
   (from function `assert_output' in file test/libs/bats-assert/src/assert.bash, line 231,
    in test file test/ci_deploy.bats, line 26)
     `assert_output --partial "CI_COMMIT_REF_SLUG"' failed

   -- output does not contain substring --
   substring (1 lines):
     CI_COMMIT_REF_SLUG
   output (3 lines):
     ./bin/deploy.sh: join_string_by: command not found
     oc error
     Could not login
   --

   ** Did not delete, as test failed **

1 test, 1 failure

Here is the output of a successful test:

✓ requires CI_COMMIT_REF_SLUG environment variable
Helpers

Like any shell script or library, BATS test scripts can include helper libraries to share common code across tests or enhance their capabilities. These helper libraries, such as bats-assert and bats-support , can even be tested with BATS.

Libraries can be placed in the same test directory as the BATS scripts or in the test/libs directory if the number of files in the test directory gets unwieldy. BATS provides the load function that takes a path to a Bash file relative to the script being tested (e.g., test, in our case) and sources that file. Files must end with the .bash extension, but the path passed to the load function must not include the extension. build.bats loads the bats-assert and bats-support libraries, a small helpers.bash library, and a docker_mock.bash library (described below) with the following code placed at the beginning of the test script below the interpreter magic line:

load 'libs/bats-support/load'
load 'libs/bats-assert/load'
load 'helpers'
load 'docker_mock'

Stubbing test input and mocking external calls

The majority of Bash scripts and libraries execute functions and/or executables when they run. Often they are programmed to behave in specific ways based on the exit status or output ( stdout , stderr ) of these functions or executables. To properly test these scripts, it is often necessary to make fake versions of these commands that are designed to behave in a specific way during a specific test, a process called "stubbing." It may also be necessary to spy on the program being tested to ensure it calls a specific command, or it calls a specific command with specific arguments, a process called "mocking." For more on this, check out this great discussion of mocking and stubbing in Ruby RSpec, which applies to any testing system.

The Bash shell provides tricks that can be used in your BATS test scripts to do mocking and stubbing. All require the use of the Bash export command with the -f flag to export a function that overrides the original function or executable. This must be done before the tested program is executed. Here is a simple example that overrides the cat executable:

function cat() { echo "THIS WOULD CAT ${*}"; }
export -f cat

This method overrides a function in the same manner. If a test needs to override a function within the script or library being tested, it is important to source the tested script or library before the function is stubbed or mocked. Otherwise, the stub/mock will be replaced with the actual function when the script is sourced. Also, make sure to stub/mock before you run the command you're testing. Here is an example from build.bats that mocks the raise function described in build.sh to ensure a specific error message is raised by the login function:

@ test ".login raises on oc error" {
source ${profile_script}
function raise () { echo " ${1} raised" ; }
export -f raise
run login
assert_failure
assert_output -p "Could not login raised"
}

Normally, it is not necessary to unset a stub/mock function after the test, since export only affects the current subprocess during the exec of the current @test block. However, it is possible to mock/stub commands (e.g., cat, sed, etc.) that the BATS assert_* functions use internally. These mock/stub functions must be unset before those assert commands are run, or they will not work properly. Here is an example from build.bats that mocks sed, runs the build_deployable function, and unsets sed before running any assertions:

@ test ".build_deployable prints information, runs docker build on a modified Dockerfile.production and publish_image when its not a dry_run" {
local expected_dockerfile = 'Dockerfile.production'
local application = 'application'
local environment = 'environment'
local expected_original_base_image = " ${application} "
local expected_candidate_image = " ${application} -candidate: ${environment} "
local expected_deployable_image = " ${application} : ${environment} "
source ${profile_script}
mock_docker build --build-arg OAUTH_CLIENT_ID --build-arg OAUTH_REDIRECT --build-arg DDS_API_BASE_URL -t " ${expected_deployable_image} " -
function publish_image () { echo "publish_image ${*} " ; }
export -f publish_image
function sed () {
echo "sed ${*} " >& 2 ;
echo "FROM application-candidate:environment" ;
}
export -f sed
run build_deployable " ${application} " " ${environment} "
assert_success
unset sed
assert_output --regexp "sed.* ${expected_dockerfile} "
assert_output -p "Building ${expected_original_base_image} deployable ${expected_deployable_image} FROM ${expected_candidate_image} "
assert_output -p "FROM ${expected_candidate_image} piped"
assert_output -p "build --build-arg OAUTH_CLIENT_ID --build-arg OAUTH_REDIRECT --build-arg DDS_API_BASE_URL -t ${expected_deployable_image} -"
assert_output -p "publish_image ${expected_deployable_image} "
}

Sometimes the same command, e.g. foo, will be invoked multiple times, with different arguments, in the same function being tested. These situations require the creation of a set of functions:

Since this functionality is often reused in different tests, it makes sense to create a helper library that can be loaded like other libraries.
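
As a rough sketch of that idea (mock_foo and assert_foo_called_with are invented names for illustration; the article's real helper is the docker_mock.bash library described next), such a helper can replace the command with a function that appends each call to a log file and provide a matching assertion:

# Hypothetical helper: record every call to "foo" and assert on those calls later.
mock_foo() {
  export FOO_CALL_LOG="${BATS_TMPDIR}/foo_calls.log"
  rm -f "${FOO_CALL_LOG}"
  function foo() { echo "$*" >> "${FOO_CALL_LOG}"; }
  export -f foo
}

assert_foo_called_with() {
  # Fail the test unless one recorded call matches the given arguments exactly.
  grep -qxF "$*" "${FOO_CALL_LOG}"
}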

A good example is docker_mock.bash . It is loaded into build.bats and used in any test block that tests a function that calls the Docker executable. A typical test block using docker_mock looks like:

@ test ".publish_image fails if docker push fails" {
setup_publish
local expected_image = "image"
local expected_publishable_image = " ${CI_REGISTRY_IMAGE} / ${expected_image} "
source ${profile_script}
mock_docker tag " ${expected_image} " " ${expected_publishable_image} "
mock_docker push " ${expected_publishable_image} " and_fail
run publish_image " ${expected_image} "
assert_failure
assert_output -p "tagging ${expected_image} as ${expected_publishable_image} "
assert_output -p "tag ${expected_image} ${expected_publishable_image} "
assert_output -p "pushing image to gitlab registry"
assert_output -p "push ${expected_publishable_image} "
}

This test sets up an expectation that Docker will be called twice with different arguments. With the second call to Docker failing, it runs the tested command, then tests the exit status and expected calls to Docker.

One aspect of BATS introduced by mock_docker.bash is the ${BATS_TMPDIR} environment variable, which BATS sets at the beginning to allow tests and helpers to create and destroy TMP files in a standard location. The mock_docker.bash library will not delete its persisted mocks file if a test fails, but it will print where it is located so it can be viewed and deleted. You may need to periodically clean old mock files out of this directory.

One note of caution regarding mocking/stubbing: The build.bats test consciously violates a dictum of testing that states: Don't mock what you don't own! This dictum demands that calls to commands that the test's developer didn't write, like docker , cat , sed , etc., should be wrapped in their own libraries, which should be mocked in tests of scripts that use them. The wrapper libraries should then be tested without mocking the external commands.

This is good advice and ignoring it comes with a cost. If the Docker CLI API changes, the test scripts will not detect this change, resulting in a false positive that won't manifest until the tested build.sh script runs in a production setting with the new version of Docker. Test developers must decide how stringently they want to adhere to this standard, but they should understand the tradeoffs involved with their decision.
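
As a sketch of the wrapper approach the dictum calls for (the file and function names below are hypothetical), the script would call tiny wrapper functions instead of docker directly, and tests of the script would mock the wrappers:

# lib/docker_wrapper.bash -- hypothetical thin wrapper around the docker CLI
docker_build() {
  docker build "$@"
}

docker_push() {
  docker push "$@"
}

The wrapper itself would then get its own test that runs against a real Docker installation without mocks, so a change in the Docker CLI surfaces in that test rather than in production.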

Conclusion

Introducing a testing regime to any software development project creates a tradeoff between a) the increase in time and organization required to develop and maintain code and tests and b) the increased confidence developers have in the integrity of the application over its lifetime. Testing regimes may not be appropriate for all scripts and libraries.

In general, scripts and libraries that meet one or more of the following should be tested with BATS:

Once the decision is made to apply a testing discipline to one or more Bash scripts or libraries, BATS provides the comprehensive testing features that are available in other software development environments.

Acknowledgment: I am indebted to Darrin Mann for introducing me to BATS testing.

[Jul 12, 2020] 6 handy Bash scripts for Git - Opensource.com

Jul 12, 2020 | opensource.com

6 handy Bash scripts for Git

These six Bash scripts will make your life easier when you're working with Git repositories.

15 Jan 2020 Bob Peterson (Red Hat)


I wrote a bunch of Bash scripts that make my life easier when I'm working with Git repositories. Many of my colleagues say there's no need; that everything I need to do can be done with Git commands. While that may be true, I find the scripts infinitely more convenient than trying to figure out the appropriate Git command to do what I want.

1. gitlog

gitlog prints an abbreviated list of current patches against the master version. It prints them from oldest to newest and shows the author and description, with H for HEAD, ^ for HEAD^, 2 for HEAD~2, and so forth. For example:

$ gitlog
-----------------------[ recovery25 ]-----------------------
(snip)
11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in dq
H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time

If I want to see what patches are on a different branch, I can specify an alternate branch:

$ gitlog recovery24
2. gitlog.id

gitlog.id just prints the patch SHA1 IDs:

$ gitlog.id
-----------------------[ recovery25 ]-----------------------
56908eeb6940 2ca4a6b628a1 fc64ad5d99fe 02031a00a251 f6f38da7dd18 d8546e8f0023 fc3cc1f98f6b 12c3e0cb3523 76cce178b134 6fc1dce3ab9c 1b681ab074ca 26fed8de719b 802ff51a5670 49f67a512d8c f04f20193bbb 5f6afe809d23 2030521dc70e dada79b3be94 9b19a1e08161 78a035041d3e f03da011cae2 0d2b2e068fcd 2449976aa133 57dfb5e12ccd 53abedfdcf72 6fbdda3474b3 49544a547188 187032f7a63c 6f75dae23d93 95fc2a261b00 ebfb14ded191 f653ee9e414a 0e2911cb8111 73968b76e2e3 8a3e4cb5e92c a5f2da803b5b 7c9ef68388ed 71ca19d0cba8 340d27a33895 9b3c4e6efb10 d2e8c22be39b 9563e31f8bfd ebac7a38036c f703a3c27874 a3e86d2ef30e da3c604755b0 4525c2f5b46f a06a5b7dea02 8ba93c796d5c e8b5ff851bb9

Again, it assumes the current branch, but I can specify a different branch if I want.

3. gitlog.id2

gitlog.id2 is the same as gitlog.id but without the branch line at the top. This is handy for cherry-picking all patches from one branch to the current branch:

$ # create a new branch
$ git branch --track recovery26 origin/master
$ # check out the new branch I just created
$ git checkout recovery26
$ # cherry-pick all patches from the old branch to the new one
$ for i in `gitlog.id2 recovery25`; do git cherry-pick $i; done

4. gitlog.grep

gitlog.grep greps for a string within that collection of patches. For example, if I find a bug and want to fix the patch that has a reference to function inode_go_sync , I simply do:

$ gitlog.grep inode_go_sync
-----------------------[ recovery25 - 50 patches ]-----------------------
(snip)
11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
152:-static void inode_go_sync(struct gfs2_glock *gl)
153:+static int inode_go_sync(struct gfs2_glock *gl)
163:@@ -296,6 +302,7 @@ static void inode_go_sync(struct gfs2_glock *gl)
8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in dq
H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time

So, now I know that patch HEAD~9 is the one that needs fixing. I use git rebase -i HEAD~10 to edit patch 9, git commit -a --amend , then git rebase --continue to make the necessary adjustments.

5. gitbranchcmp3

gitbranchcmp3 lets me compare my current branch to another branch, so I can compare older versions of patches to my newer versions and quickly see what's changed and what hasn't. It generates a compare script (that uses the KDE tool Kompare, which also works on GNOME 3) to compare the patches that aren't quite the same. If there are no differences other than line numbers, it prints [SAME]. If there are only comment differences, it prints [same] (in lower case). For example:

$ gitbranchcmp3 recovery24
Branch recovery24 has 47 patches
Branch recovery25 has 50 patches

(snip)
38 87eb6901607a 340d27a33895 [same] gfs2: drain the ail2 list after io errors
39 90fefb577a26 9b3c4e6efb10 [same] gfs2: clean up iopen glock mess in gfs2_create_inode
40 ba3ae06b8b0e d2e8c22be39b [same] gfs2: Do proper error checking for go_sync family of glops
41 2ab662294329 9563e31f8bfd [SAME] gfs2: use page_offset in gfs2_page_mkwrite
42 0adc6d817b7a ebac7a38036c [SAME] gfs2: don't use buffer_heads in gfs2_allocate_page_backing
43 55ef1f8d0be8 f703a3c27874 [SAME] gfs2: Improve mmap write vs. punch_hole consistency
44 de57c2f72570 a3e86d2ef30e [SAME] gfs2: Multi-block allocations in gfs2_page_mkwrite
45 7c5305fbd68a da3c604755b0 [SAME] gfs2: Fix end-of-file handling in gfs2_page_mkwrite
46 162524005151 4525c2f5b46f [SAME] Rafael Aquini's slab instrumentation
47 a06a5b7dea02 [ ] GFS2: Add go_get_holdtime to gl_ops
48 8ba93c796d5c [ ] gfs2: introduce new function remaining_hold_time and use it in dq
49 e8b5ff851bb9 [ ] gfs2: Allow rgrps to have a minimum hold time

Missing from recovery25:
The missing:
Compare script generated at: /tmp/compare_mismatches.sh

6. gitlog.find

Finally, I have gitlog.find , a script to help me identify where the upstream versions of my patches are and each patch's current status. It does this by matching the patch description. It also generates a compare script (again, using Kompare) to compare the current patch to the upstream counterpart:

$ gitlog.find
-----------------------[ recovery25 - 50 patches ]-----------------------
(snip)
11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
lo 5bcb9be74b2a Bob Peterson gfs2: drain the ail2 list after io errors
10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
fn 2c47c1be51fb Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
lo feb7ea639472 Bob Peterson gfs2: Do proper error checking for go_sync family of glops
8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
ms f3915f83e84c Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
ms 35af80aef99b Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
fn 39c3a948ecf6 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
fn f53056c43063 Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
fn 184b4e60853d Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
Not found upstream
2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
Not found upstream
^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in dq
Not found upstream
H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time
Not found upstream
Compare script generated: /tmp/compare_upstream.sh

The patches are shown on two lines: the first is your current patch, and the second is the corresponding upstream patch, prefixed with a two-character abbreviation that indicates its upstream status (taken from the script's own legend):

  ms  the patch is in Linus's master branch
  fn  the patch is in for-next for the next merge window
  lo  the patch is only in my local for-next branch

Patches with no upstream counterpart are marked "Not found upstream".

Some of my scripts make assumptions based on how I normally work with Git. For example, when searching for upstream patches, it uses my well-known Git tree's location. So, you will need to adjust or improve them to suit your conditions. The gitlog.find script is designed to locate GFS2 and DLM patches only, so unless you're a GFS2 developer, you will want to customize it to the components that interest you.

Source code

Here is the source for these scripts.

1. gitlog

#!/bin/bash
branch=$1

if test "x$branch" = x; then
  branch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi

patches=0
tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

LIST=`git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 | paste -s -d ' '`
for i in $LIST; do patches=$(echo $patches + 1 | bc); done

if [[ $branch =~ .*for-next.* ]]
then
  start=HEAD
  #start=origin/for-next
else
  start=origin/master
fi

tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

/usr/bin/echo "-----------------------[" $branch "]-----------------------"
patches=$(echo $patches - 1 | bc);
for i in $LIST; do
  if [ $patches -eq 1 ]; then
    cnt="  ^"
  elif [ $patches -eq 0 ]; then
    cnt="  H"
  else
    if [ $patches -lt 10 ]; then
      cnt="  $patches"
    else
      cnt=" $patches"
    fi
  fi
  /usr/bin/git show --abbrev-commit -s --pretty=format:"$cnt %h %<|(32)%an %s %n" $i
  patches=$(echo $patches - 1 | bc)
done
#git log --reverse --abbrev-commit --pretty=format:"%h %<|(32)%an %s" $tracking..$branch
#git log --reverse --abbrev-commit --pretty=format:"%h %<|(32)%an %s" ^origin/master ^linux-gfs2/for-next $branch

2. gitlog.id

#!/bin/bash
branch=$1

if test "x$branch" = x; then
  branch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi

tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

/usr/bin/echo "-----------------------[" $branch "]-----------------------"
git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 | paste -s -d ' '

3. gitlog.id2

#!/bin/bash
branch=$1

if test "x$branch" = x; then
  branch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi

tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`
git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 | paste -s -d ' '

4. gitlog.grep

#!/bin/bash
param1=$1
param2=$2

if test "x$param2" = x; then
  branch=`git branch -a | grep "*" | cut -d ' ' -f2`
  string=$param1
else
  branch=$param1
  string=$param2
fi

patches=0
tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

LIST=`git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 | paste -s -d ' '`
for i in $LIST; do patches=$(echo $patches + 1 | bc); done
/usr/bin/echo "-----------------------[" $branch "-" $patches "patches ]-----------------------"
patches=$(echo $patches - 1 | bc);
for i in $LIST; do
  if [ $patches -eq 1 ]; then
    cnt="  ^"
  elif [ $patches -eq 0 ]; then
    cnt="  H"
  else
    if [ $patches -lt 10 ]; then
      cnt="  $patches"
    else
      cnt=" $patches"
    fi
  fi
  /usr/bin/git show --abbrev-commit -s --pretty=format:"$cnt %h %<|(32)%an %s" $i
  /usr/bin/git show --pretty=email --patch-with-stat $i | grep -n "$string"
  patches=$(echo $patches - 1 | bc)
done

5. gitbranchcmp3

#!/bin/bash
#
# gitbranchcmp3 <old branch> [<new_branch>]
#
oldbranch=$1
newbranch=$2
script=/tmp/compare_mismatches.sh

/usr/bin/rm -f $script
echo "#!/bin/bash" > $script
/usr/bin/chmod 755 $script
echo "# Generated by gitbranchcmp3.sh" >> $script
echo "# Run this script to compare the mismatched patches" >> $script
echo " " >> $script
echo "function compare_them()" >> $script
echo "{" >> $script
echo "  git show --pretty=email --patch-with-stat \$1 > /tmp/gronk1" >> $script
echo "  git show --pretty=email --patch-with-stat \$2 > /tmp/gronk2" >> $script
echo "  kompare /tmp/gronk1 /tmp/gronk2" >> $script
echo "}" >> $script
echo " " >> $script

if test "x$newbranch" = x; then
  newbranch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi

tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

declare -a oldsha1s=(`git log --reverse --abbrev-commit --pretty=oneline $tracking..$oldbranch | cut -d ' ' -f1 | paste -s -d ' '`)
declare -a newsha1s=(`git log --reverse --abbrev-commit --pretty=oneline $tracking..$newbranch | cut -d ' ' -f1 | paste -s -d ' '`)

#echo "old: " $oldsha1s
oldcount=${#oldsha1s[@]}
echo "Branch $oldbranch has $oldcount patches"
oldcount=$(echo $oldcount - 1 | bc)
#for o in `seq 0 ${#oldsha1s[@]}`; do
#    echo -n ${oldsha1s[$o]} " "
#    desc=`git show $i | head -5 | tail -1|cut -b5-`
#done

#echo "new: " $newsha1s
newcount=${#newsha1s[@]}
echo "Branch $newbranch has $newcount patches"
newcount=$(echo $newcount - 1 | bc)
#for o in `seq 0 ${#newsha1s[@]}`; do
#    echo -n ${newsha1s[$o]} " "
#    desc=`git show $i | head -5 | tail -1|cut -b5-`
#done
echo

for new in `seq 0 $newcount`; do
  newsha=${newsha1s[$new]}
  newdesc=`git show $newsha | head -5 | tail -1 | cut -b5-`
  oldsha="            "
  same="[     ]"
  for old in `seq 0 $oldcount`; do
    if test "${oldsha1s[$old]}" = "match"; then
      continue;
    fi
    olddesc=`git show ${oldsha1s[$old]} | head -5 | tail -1 | cut -b5-`
    if test "$olddesc" = "$newdesc"; then
      oldsha=${oldsha1s[$old]}
      #echo $oldsha
      git show $oldsha | tail -n +2 | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk1
      git show $newsha | tail -n +2 | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk2
      diff /tmp/gronk1 /tmp/gronk2 &> /dev/null
      if [ $? -eq 0 ]; then
        # No differences
        same="[SAME]"
        oldsha1s[$old]="match"
        break
      fi
      git show $oldsha | sed -n '/diff/,$p' | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk1
      git show $newsha | sed -n '/diff/,$p' | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk2
      diff /tmp/gronk1 /tmp/gronk2 &> /dev/null
      if [ $? -eq 0 ]; then
        # Differences in comments only
        same="[same]"
        oldsha1s[$old]="match"
        break
      fi
      oldsha1s[$old]="match"
      echo "compare_them $oldsha $newsha" >> $script
    fi
  done
  echo "$new $oldsha $newsha $same $newdesc"
done

echo
echo "Missing from $newbranch:"
the_missing=""
# Now run through the olds we haven't matched up
for old in `seq 0 $oldcount`; do
  if test ${oldsha1s[$old]} != "match"; then
    olddesc=`git show ${oldsha1s[$old]} | head -5 | tail -1 | cut -b5-`
    echo "${oldsha1s[$old]} $olddesc"
    the_missing=`echo "$the_missing ${oldsha1s[$old]}"`
  fi
done

echo "The missing: " $the_missing
echo "Compare script generated at: $script"
#git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 |paste -s -d ' '

6. gitlog.find

#!/bin/bash
#
# Find the upstream equivalent patch
#
# gitlog.find
#
cwd=$PWD
param1=$1
ubranch=$2
patches=0
script=/tmp/compare_upstream.sh
echo "#!/bin/bash" > $script
/usr/bin/chmod 755 $script
echo "# Generated by gitbranchcmp3.sh" >> $script
echo "# Run this script to compare the mismatched patches" >> $script
echo " " >> $script
echo "function compare_them()" >> $script
echo "{" >> $script
echo "  cwd=$PWD" >> $script
echo "  git show --pretty=email --patch-with-stat \$2 > /tmp/gronk2" >> $script
echo "  cd ~/linux.git/fs/gfs2" >> $script
echo "  git show --pretty=email --patch-with-stat \$1 > /tmp/gronk1" >> $script
echo "  cd $cwd" >> $script
echo "  kompare /tmp/gronk1 /tmp/gronk2" >> $script
echo "}" >> $script
echo " " >> $script

#echo "Gathering upstream patch info. Please wait."
branch=`git branch -a | grep "*" | cut -d ' ' -f2`
tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

cd ~/linux.git
if test "X${ubranch}" = "X"; then
  ubranch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi
utracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`
#
# gather a list of gfs2 patches from master just in case we can't find it
#
#git log --abbrev-commit --pretty=format:" %h %<|(32)%an %s" master |grep -i -e "gfs2" -e "dlm" > /tmp/gronk
git log --reverse --abbrev-commit --pretty=format:"ms %h %<|(32)%an %s" master fs/gfs2/ > /tmp/gronk.gfs2
# ms = in Linus's master
git log --reverse --abbrev-commit --pretty=format:"ms %h %<|(32)%an %s" master fs/dlm/ > /tmp/gronk.dlm

cd $cwd
LIST=`git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 | paste -s -d ' '`
for i in $LIST; do patches=$(echo $patches + 1 | bc); done
/usr/bin/echo "-----------------------[" $branch "-" $patches "patches ]-----------------------"
patches=$(echo $patches - 1 | bc);
for i in $LIST; do
  if [ $patches -eq 1 ]; then
    cnt="  ^"
  elif [ $patches -eq 0 ]; then
    cnt="  H"
  else
    if [ $patches -lt 10 ]; then
      cnt="  $patches"
    else
      cnt=" $patches"
    fi
  fi
  /usr/bin/git show --abbrev-commit -s --pretty=format:"$cnt %h %<|(32)%an %s" $i
  desc=`/usr/bin/git show --abbrev-commit -s --pretty=format:"%s" $i`
  cd ~/linux.git
  cmp=1
  up_eq=`git log --reverse --abbrev-commit --pretty=format:"lo %h %<|(32)%an %s" $utracking..$ubranch | grep "$desc"`
  # lo = in local for-next
  if test "X$up_eq" = "X"; then
    up_eq=`git log --reverse --abbrev-commit --pretty=format:"fn %h %<|(32)%an %s" master..$utracking | grep "$desc"`
    # fn = in for-next for next merge window
    if test "X$up_eq" = "X"; then
      up_eq=`grep "$desc" /tmp/gronk.gfs2`
      if test "X$up_eq" = "X"; then
        up_eq=`grep "$desc" /tmp/gronk.dlm`
        if test "X$up_eq" = "X"; then
          up_eq="   Not found upstream"
          cmp=0
        fi
      fi
    fi
  fi
  echo "$up_eq"
  if [ $cmp -eq 1 ]; then
    UP_SHA1=`echo $up_eq | cut -d ' ' -f2`
    echo "compare_them $UP_SHA1 $i" >> $script
  fi
  cd $cwd
  patches=$(echo $patches - 1 | bc)
done
echo "Compare script generated: $script"

[Jul 12, 2020] How to add a Help facility to your Bash program - Opensource.com

Jul 12, 2020 | opensource.com

How to add a Help facility to your Bash program

In the third article in this series, learn about using functions as you create a simple Help facility for your Bash script.

20 Dec 2019 David Both (Correspondent)


In the first article in this series, you created a very small, one-line Bash script and explored the reasons for creating shell scripts and why they are the most efficient option for the system administrator, rather than compiled programs. In the second article , you began the task of creating a fairly simple template that you can use as a starting point for other Bash programs, then explored ways to test it.

This third of the four articles in this series explains how to create and use a simple Help function. While creating your Help facility, you will also learn about using functions and how to handle command-line options such as -h .

Why Help?

Even fairly simple Bash programs should have some sort of Help facility, even if it is fairly rudimentary. Many of the Bash shell programs I write are used so infrequently that I forget the exact syntax of the command I need. Others are so complex that I need to review the options and arguments even when I use them frequently.

Having a built-in Help function allows you to view those things without having to inspect the code itself. A good and complete Help facility is also a part of program documentation.

About functions

Shell functions are lists of Bash program statements that are stored in the shell's environment and can be executed, like any other command, by typing their name at the command line. Shell functions may also be known as procedures or subroutines, depending upon which other programming language you are using.

Functions are called in scripts or from the command-line interface (CLI) by using their names, just as you would for any other command. In a CLI program or a script, the commands in the function execute when they are called, then the program flow sequence returns to the calling entity, and the next series of program statements in that entity executes.

The syntax of a function is:

FunctionName(){program statements}

Explore this by creating a simple function at the CLI. (The function is stored in the shell environment for the shell instance in which it is created.) You are going to create a function called hw , which stands for "hello world." Enter the following code at the CLI and press Enter . Then enter hw as you would any other shell command:

[student@testvm1 ~]$ hw(){ echo "Hi there kiddo"; }
[student@testvm1 ~]$ hw
Hi there kiddo
[student@testvm1 ~]$

OK, so I am a little tired of the standard "Hello world" starter. Now, list all of the currently defined functions. There are a lot of them, so I am showing just the new hw function. When it is called from the command line or within a program, a function performs its programmed task and then exits and returns control to the calling entity, the command line, or the next Bash program statement in a script after the calling statement:

[student@testvm1 ~]$ declare -f | less
<snip>
hw ()
{
    echo "Hi there kiddo"
}
<snip>

Remove that function because you do not need it anymore. You can do that with the unset command:

[student@testvm1 ~]$ unset -f hw ; hw
bash: hw: command not found
[student@testvm1 ~]$

Creating the Help function

Open the hello program in an editor and add the Help function below to the hello program code after the copyright statement but before the echo "Hello world!" statement. This Help function will display a short description of the program, a syntax diagram, and short descriptions of the available options. Add a call to the Help function to test it and some comment lines that provide a visual demarcation between the functions and the main portion of the program:

################################################################################
# Help #
################################################################################
Help ()
{
# Display Help
echo "Add description of the script functions here."
echo
echo "Syntax: scriptTemplate [-g|h|v|V]"
echo "options:"
echo "g Print the GPL license notification."
echo "h Print this Help."
echo "v Verbose mode."
echo "V Print software version and exit."
echo
}

################################################################################
################################################################################
# Main program #
################################################################################
################################################################################

Help
echo "Hello world!"

The options described in this Help function are typical for the programs I write, although none are in the code yet. Run the program to test it:

[student@testvm1 ~]$ ./hello
Add description of the script functions here.

Syntax: scriptTemplate [-g|h|v|V]
options:
g     Print the GPL license notification.
h     Print this Help.
v     Verbose mode.
V     Print software version and exit.

Hello world!
[student@testvm1 ~]$

Because you have not added any logic to display Help only when you need it, the program will always display the Help. Since the function is working correctly, read on to add some logic to display the Help only when the -h option is used when you invoke the program at the command line.

Handling options

A Bash script's ability to handle command-line options such as -h gives some powerful capabilities to direct the program and modify what it does. In the case of the -h option, you want the program to print the Help text to the terminal session and then quit without running the rest of the program. The ability to process options entered at the command line can be added to the Bash script using the while command (see How to program with Bash: Loops to learn more about while) in conjunction with the getopts and case commands.

The getopts command reads any and all options specified at the command line and creates a list of those options. In the code below, the while command loops through the list of options by setting the variable $option for each. The case statement is used to evaluate each option in turn and execute the statements in the corresponding stanza. The while statement will continue to evaluate the list of options until they have all been processed or it encounters an exit statement, which terminates the program.

Be sure to delete the Help function call just before the echo "Hello world!" statement so that the main body of the program now looks like this:

################################################################################
################################################################################
# Main program #
################################################################################
################################################################################
################################################################################
# Process the input options. Add options as needed. #
################################################################################
# Get the options
while getopts ":h" option; do
case $option in
h ) # display Help
Help
exit ;;
esac
done

echo "Hello world!"

Notice the double semicolon at the end of the exit statement in the case option for -h . This is required for each option added to this case statement to delineate the end of each option.

Testing

Testing is now a little more complex. You need to test your program with a number of different options -- and no options -- to see how it responds. First, test with no options to ensure that it prints "Hello world!" as it should:

[student@testvm1 ~]$ ./hello
Hello world!

That works, so now test the logic that displays the Help text:

[student@testvm1 ~]$ ./hello -h
Add description of the script functions here.

Syntax: scriptTemplate [-g|h|t|v|V]
options:
g     Print the GPL license notification.
h     Print this Help.
v     Verbose mode.
V     Print software version and exit.
That works as expected, so try some testing to see what happens when you enter some unexpected options:

[student@testvm1 ~]$ ./hello -x
Hello world!
[student@testvm1 ~]$ ./hello -q
Hello world!
[student@testvm1 ~]$ ./hello -lkjsahdf
Add description of the script functions here.

Syntax: scriptTemplate [-g|h|t|v|V]
options:
g     Print the GPL license notification.
h     Print this Help.
v     Verbose mode.
V     Print software version and exit.

[student@testvm1 ~]$

The program simply ignores any option for which there is no specific response, without generating any errors. But notice the last entry (with -lkjsahdf for options): because there is an h in the list of options, the program recognizes it and prints the Help text. This testing shows that the program does not yet have the ability to detect incorrect input and terminate if any is found.

You can add another case stanza to the case statement to match any option that doesn't have an explicit match. This general case will match anything you have not provided a specific match for. The case statement now looks like this, with the catch-all match of \? as the last case. Any additional specific cases must precede this final one:

while getopts ":h" option; do
case $option in
h ) # display Help
Help
exit ;;
\? ) # incorrect option
echo "Error: Invalid option"
exit ;;
esac
done

Test the program again using the same options as before and see how it works now.

Where you are

You have accomplished a good amount in this article by adding the capability to process command-line options and a Help procedure. Your Bash script now looks like this:

#!/usr/bin/bash
################################################################################
# scriptTemplate #
# #
# Use this template as the beginning of a new program. Place a short #
# description of the script here. #
# #
# Change History #
# 11/11/2019 David Both Original code. This is a template for creating #
# new Bash shell scripts. #
# Add new history entries as needed. #
# #
# #
################################################################################
################################################################################
################################################################################
# #
# Copyright (C) 2007, 2019 David Both #
# LinuxGeek46@both.org #
# #
# This program is free software; you can redistribute it and/or modify #
# it under the terms of the GNU General Public License as published by #
# the Free Software Foundation; either version 2 of the License, or #
# (at your option) any later version. #
# #
# This program is distributed in the hope that it will be useful, #
# but WITHOUT ANY WARRANTY; without even the implied warranty of #
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the #
# GNU General Public License for more details. #
# #
# You should have received a copy of the GNU General Public License #
# along with this program; if not, write to the Free Software #
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA #
# #
################################################################################
################################################################################
################################################################################

################################################################################
# Help #
################################################################################
Help ()
{
# Display Help
echo "Add description of the script functions here."
echo
echo "Syntax: scriptTemplate [-g|h|t|v|V]"
echo "options:"
echo "g Print the GPL license notification."
echo "h Print this Help."
echo "v Verbose mode."
echo "V Print software version and exit."
echo
}

################################################################################
################################################################################
# Main program #
################################################################################
################################################################################
################################################################################
# Process the input options. Add options as needed. #
################################################################################
# Get the options
while getopts ":h" option; do
case $option in
h ) # display Help
Help
exit ;;
\? ) # incorrect option
echo "Error: Invalid option"
exit ;;
esac
done

echo "Hello world!"

Be sure to test this version of the program very thoroughly. Use random inputs and see what happens. You should also try testing valid and invalid options without using the dash ( - ) in front.

Next time

In this article, you added a Help function as well as the ability to process command-line options to display it selectively. The program is getting a little more complex, so testing is becoming more important and requires more test paths in order to be complete.

The next article will look at initializing variables and doing a bit of sanity checking to ensure that the program will run under the correct set of conditions.

[Jul 12, 2020] Navigating the Bash shell with pushd and popd - Opensource.com

Notable quotes:
"... directory stack ..."
Jul 12, 2020 | opensource.com

Navigating the Bash shell with pushd and popd

Pushd and popd are the fastest navigational commands you've never heard of.

07 Aug 2019 Seth Kenlon (Red Hat)


The pushd and popd commands are built-in features of the Bash shell to help you "bookmark" directories for quick navigation between locations on your hard drive. You might already feel that the terminal is an impossibly fast way to navigate your computer; in just a few key presses, you can go anywhere on your hard drive, attached storage, or network share. But that speed can break down when you find yourself going back and forth between directories, or when you get "lost" within your filesystem. Those are precisely the problems pushd and popd can help you solve.

pushd

At its most basic, pushd is a lot like cd . It takes you from one directory to another. Assume you have a directory called one , which contains a subdirectory called two , which contains a subdirectory called three , and so on. If your current working directory is one , then you can move to two or three or anywhere with the cd command:

$ pwd
one
$ cd two/three
$ pwd
three

You can do the same with pushd :

$ pwd
one
$ pushd two/three
~/one/two/three ~/one
$ pwd
three

The end result of pushd is the same as cd , but there's an additional intermediate result: pushd echos your destination directory and your point of origin. This is your directory stack , and it is what makes pushd unique.

Stacks

A stack, in computer terminology, refers to a collection of elements. In the context of this command, the elements are directories you have recently visited by using the pushd command. You can think of it as a history or a breadcrumb trail.

You can move all over your filesystem with pushd ; each time, your previous and new locations are added to the stack:

$ pushd four
~/one/two/three/four ~/one/two/three ~/one
$ pushd five
~/one/two/three/four/five ~/one/two/three/four ~/one/two/three ~/one

Navigating the stack

Once you've built up a stack, you can use it as a collection of bookmarks or fast-travel waypoints. For instance, assume that during a session you're doing a lot of work within the ~/one/two/three/four/five directory structure of this example. You know you've been to one recently, but you can't remember where it's located in your pushd stack. You can view your stack with the +0 (that's a plus sign followed by a zero) argument, which tells pushd not to change to any directory in your stack, but also prompts pushd to echo your current stack:

$ pushd +0
~/one/two/three/four ~/one/two/three ~/one ~/one/two/three/four/five

Alternatively, you can view the stack with the dirs command, and you can see the index number for each directory by using the -v option:

$ dirs -v
0  ~/one/two/three/four
1  ~/one/two/three
2  ~/one
3  ~/one/two/three/four/five

The first entry in your stack is your current location. You can confirm that with pwd as usual:

$ pwd
~/one/two/three/four

Starting at 0 (your current location and the first entry of your stack), the element at index 2 in your stack is ~/one, which is your desired destination. You can move forward in your stack using the +2 option:

$ pushd +2
~/one ~/one/two/three/four/five ~/one/two/three/four ~/one/two/three
$ pwd
~/one

This changes your working directory to ~/one and also shifts the stack so that your new location is at the front.

You can also move backward in your stack. For instance, to quickly get to ~/one/two/three given the example output, you can move back by one, keeping in mind that pushd starts with 0:

$ pushd -0
~/one/two/three ~/one ~/one/two/three/four/five ~/one/two/three/four

Adding to the stack

You can continue to navigate your stack in this way, and it will remain a static listing of your recently visited directories. If you want to add a directory, just provide the directory's path. If a directory is new to the stack, it's added to the list just as you'd expect:

$ pushd /tmp
/tmp ~/one/two/three ~/one ~/one/two/three/four/five ~/one/two/three/four

But if it already exists in the stack, it's added a second time:

$ pushd ~/one
~/one /tmp ~/one/two/three ~/one ~/one/two/three/four/five ~/one/two/three/four

While the stack is often used as a list of directories you want quick access to, it is really a true history of where you've been. If you don't want a directory added to the stack a second time, navigate with the +N and -N notation rather than pushing the path again.

Removing directories from the stack

Your stack is, obviously, not immutable. You can add to it with pushd or remove items from it with popd .

For instance, assume you have just used pushd to add ~/one to your stack, making ~/one your current working directory. To remove the first (or "zeroeth," if you prefer) element:

$ pwd
~/one
$ popd +0
/tmp ~/one/two/three ~/one ~/one/two/three/four/five ~/one/two/three/four
$ pwd
~/one

Of course, you can remove any element, starting your count at 0:

$ pwd
~/one
$ popd +2
/tmp ~/one/two/three ~/one/two/three/four/five ~/one/two/three/four
$ pwd
~/one

You can also use popd from the back of your stack, again starting with 0. For example, to remove the final directory from your stack:

$ popd -0
/tmp ~/one/two/three ~/one/two/three/four/five

When used like this, popd does not change your working directory. It only manipulates your stack.

Navigating with popd

The default behavior of popd, given no arguments, is to remove the first (zeroeth) item from your stack and make the next item your current working directory.

This is most useful as a quick-change command, when you are, for instance, working in two different directories and just need to duck away for a moment to some other location. You don't have to think about your directory stack if you don't need an elaborate history:

$ pwd
~/one
$ pushd ~/one/two/three/four/five
$ popd
$ pwd
~/one

You're also not required to use pushd and popd in rapid succession. If you use pushd to visit a different location, then get distracted for three hours chasing down a bug or doing research, you'll find your directory stack patiently waiting (unless you've ended your terminal session):

$ pwd
~/one
$ pushd /tmp
$ cd {/etc,/var,/usr}; sleep 2001
[ ... ]
$ popd
$ pwd
~/one

Pushd and popd in the real world

The pushd and popd commands are surprisingly useful. Once you learn them, you'll find excuses to put them to good use, and you'll get familiar with the concept of the directory stack. Getting comfortable with pushd was what helped me understand git stash , which is entirely unrelated to pushd but similar in conceptual intangibility.

Using pushd and popd in shell scripts can be tempting, but generally, it's probably best to avoid them. They aren't portable outside of Bash and Zsh, and they can be obtuse when you're re-reading a script (pushd +3 is less clear than cd $HOME/$DIR/$TMP or similar).
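If a script really only needs the "go somewhere, do work, come back" pattern, a subshell gives the same effect in plain POSIX sh, and if you do use pushd and popd in Bash, silencing the stack output keeps logs readable. A minimal sketch, with /tmp/backup standing in for whatever directory you actually care about:

# Portable: the cd only affects the subshell, so the caller's directory is untouched
(
    cd /tmp/backup || exit 1
    tar -czf /tmp/backup.tar.gz .
)

# Bash/Zsh only: the same idea with pushd/popd, hiding the stack chatter
pushd /tmp/backup > /dev/null || exit 1
tar -czf /tmp/backup.tar.gz .
popd > /dev/null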

Aside from these warnings, if you're a regular Bash or Zsh user, then you can and should try pushd and popd.

7 Comments


matt on 07 Aug 2019

Thank you for the write up for pushd and popd. I gotta remember to use these when I'm jumping around directories a lot. I got hung up on a pushd example because my development work using arrays differentiates between the index and the count. In my experience, with a zero-based array of A, B, C; C has an index of 2 and is also the third element. C would not be considered the second element because that would be confusing its index and its count.

Seth Kenlon on 07 Aug 2019

Interesting point, Matt. The difference between count and index had not occurred to me, but I'll try to internalise it. It's a great distinction, so thanks for bringing it up!

Greg Pittman on 07 Aug 2019

This looks like a recipe for confusing myself.

Seth Kenlon on 07 Aug 2019

It can be, but start out simple: use pushd to change to one directory, and then use popd to go back to the original. Sort of a single-use bookmark system.

Then, once you're comfortable with pushd and popd, branch out and delve into the stack.

A tcsh shell I used at an old job didn't have pushd and popd, so I used to have functions in my .cshrc to mimic just the back-and-forth use.

Jake on 07 Aug 2019

"dirs" can be also used to view the stack. "dirs -v" helpfully numbers each directory with its index.

Seth Kenlon on 07 Aug 2019

Thanks for that tip, Jake. I arguably should have included that in the article, but I wanted to try to stay focused on just the two {push,pop}d commands. Didn't occur to me to casually mention one use of dirs as you have here, so I've added it for posterity.

There's so much in the Bash man and info pages to talk about!

other_Stu on 11 Aug 2019

I use "pushd ." (dot for current directory) quite often. Like a working directory bookmark when you are several subdirectories deep somewhere, and need to cd to couple of other places to do some work or check something.
And you can use the cd command with your DIRSTACK as well, thanks to tilde expansion.
cd ~+3 will take you to the same directory as pushd +3 would.

[Jul 12, 2020] An introduction to parameter expansion in Bash - Opensource.com

Jul 12, 2020 | opensource.com

An introduction to parameter expansion in Bash. Get started with this quick how-to guide on expansion modifiers that transform Bash variables and other parameters into powerful tools beyond simple value stores. 13 Jun 2017, James Pannacciulli


In Bash, entities that store values are known as parameters. Their values can be strings or arrays with regular syntax, or they can be integers or associative arrays when special attributes are set with the declare built-in. There are three types of parameters: positional parameters, special parameters, and variables.


For the sake of brevity, this article will focus on a few classes of expansion methods available for string variables, though these methods apply equally to other types of parameters.

Variable assignment and unadulterated expansion

When assigning a variable, its name must consist solely of alphanumeric and underscore characters, and it may not begin with a numeral. There may be no spaces around the equal sign; the name must immediately precede it and the value immediately follow:

$ variable_1="my content"

Storing a value in a variable is only useful if we recall that value later; in Bash, substituting a parameter reference with its value is called expansion. To expand a parameter, simply precede the name with the $ character, optionally enclosing the name in braces:

$ echo $variable_1 ${variable_1}
my content my content

Crucially, as shown in the above example, expansion occurs before the command is called, so the command never sees the variable name, only the text passed to it as an argument that resulted from the expansion. Furthermore, parameter expansion occurs before word splitting; if the result of expansion contains spaces, the expansion should be quoted to preserve parameter integrity, if desired:

$ printf "%s\n" ${variable_1} my content $ printf "%s\n" "${variable_1}" my content
Parameter expansion modifiers

Parameter expansion goes well beyond simple interpolation, however. Inside the braces of a parameter expansion, certain operators, along with their arguments, may be placed after the name, before the closing brace. These operators may invoke conditional, subset, substring, substitution, indirection, prefix listing, element counting, and case modification expansion methods, modifying the result of the expansion. With the exception of the reassignment operators ( = and := ), these operators only affect the expansion of the parameter without modifying the parameter's value for subsequent expansions.

About conditional, substring, and substitution parameter expansion operators

Conditional parameter expansion

Conditional parameter expansion allows branching on whether the parameter is unset, empty, or has content. Based on these conditions, the parameter can be expanded to its value, a default value, or an alternate value; throw a customizable error; or reassign the parameter to a default value. The following table shows the conditional parameter expansions -- each row shows a parameter expansion using an operator to potentially modify the expansion, with the columns showing the result of that expansion given the parameter's status as indicated in the column headers. Operators with the ':' prefix treat parameters with empty values as if they were unset.

parameter expansion    unset var    var=""       var="gnu"
${var-default}         default      --           gnu
${var:-default}        default      default      gnu
${var+alternate}       --           alternate    alternate
${var:+alternate}      --           --           alternate
${var?error}           error        --           gnu
${var:?error}          error        error        gnu

The = and := operators in the table function identically to - and :-, respectively, except that the = variants rebind the variable to the result of the expansion.
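A quick interactive sketch of that difference, assuming var starts out unset (the word "default" is arbitrary):

$ unset var
$ echo "${var:-default}"
default
$ echo "$var"            # prints an empty line: :- did not assign anything
$ echo "${var:=default}"
default
$ echo "$var"            # := also assigned the value to var
default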

As an example, let's try opening a user's editor on a file specified by the OUT_FILE variable. If either the EDITOR environment variable or our OUT_FILE variable is not specified, we will have a problem. Using a conditional expansion, we can ensure that when the EDITOR variable is expanded, we get the specified value or at least a sane default:

$ echo ${EDITOR}
/usr/bin/vi
$ echo ${EDITOR:-$(which nano)}
/usr/bin/vi
$ unset EDITOR
$ echo ${EDITOR:-$(which nano)}
/usr/bin/nano

Building on the above, we can run the editor command and abort with a helpful error at runtime if there's no filename specified:

$ ${EDITOR:-$(which nano)} ${OUT_FILE:?Missing filename}
bash: OUT_FILE: Missing filename
Substring parameter expansion

Parameters can be expanded to just part of their contents, either by offset or by removing content matching a pattern. When specifying a substring offset, a length may optionally be specified. If running Bash version 4.2 or greater, negative numbers may be used as offsets from the end of the string. Note the parentheses used around the negative offset, which ensure that Bash does not parse the expansion as having the conditional default expansion operator from above:

$ location="CA 90095" $ echo "Zip Code: ${location:3}" Zip Code: 90095 $ echo "Zip Code: ${location:(-5)}" Zip Code: 90095 $ echo "State: ${location:0:2}" State: CA

Another way to take a substring is to remove characters from the string matching a pattern, either from the left edge with the # and ## operators or from the right edge with the % and %% operators. A useful mnemonic is that # appears left of a comment and % appears right of a number. When the operator is doubled, it matches greedily, as opposed to the single version, which removes the most minimal set of characters matching the pattern.

var="open source"
parameter expansion offset of 5
length of 4
${var:offset} source
${var:offset:length} sour
pattern of *o?
${var#pattern} en source
${var##pattern} rce
pattern of ?e*
${var%pattern} open sour
${var%pattern} o

The pattern-matching used is the same as with filename globbing: * matches zero or more of any character, ? matches exactly one of any character, [...] brackets introduce a character class match against a single character, supporting negation ( ^ ), as well as the posix character classes, e.g. [[:alnum:]] . By excising characters from our string in this manner, we can take a substring without first knowing the offset of the data we need:

$ echo $PATH
/usr/local/bin:/usr/bin:/bin
$ echo "Lowest priority in PATH: ${PATH##*:}"
Lowest priority in PATH: /bin
$ echo "Everything except lowest priority: ${PATH%:*}"
Everything except lowest priority: /usr/local/bin:/usr/bin
$ echo "Highest priority in PATH: ${PATH%%:*}"
Highest priority in PATH: /usr/local/bin
Substitution in parameter expansion

The same types of patterns are used for substitution in parameter expansion. Substitution is introduced with the / or // operators, followed by two arguments separated by another / representing the pattern and the string to substitute. The pattern matching is always greedy, so the doubled version of the operator, in this case, causes all matches of the pattern to be replaced in the variable's expansion, while the singleton version replaces only the leftmost.

var="free and open"
parameter expansion pattern of [[:space:]]
string of _
${var/pattern/string} free_and open
${var//pattern/string} free_and_open
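Run interactively, the same substitutions from the table look like this:

$ var="free and open"
$ echo "${var/[[:space:]]/_}"
free_and open
$ echo "${var//[[:space:]]/_}"
free_and_open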

The wealth of parameter expansion modifiers transforms Bash variables and other parameters into powerful tools beyond simple value stores. At the very least, it is important to understand how parameter expansion works when reading Bash scripts, but I suspect that not unlike myself, many of you will enjoy the conciseness and expressiveness that these expansion modifiers bring to your scripts as well as your interactive sessions.

[Jul 12, 2020] A sysadmin's guide to Bash by Maxim Burgerhout

Jul 12, 2020 | opensource.com

Use aliases

... ... ...

Make your root prompt stand out

... ... ...

Control your history

You probably know that when you press the Up arrow key in Bash, you can see and reuse all (well, many) of your previous commands. That is because those commands have been saved to a file called .bash_history in your home directory. That history file comes with a bunch of settings and commands that can be very useful.

First, you can view your entire recent command history by typing history , or you can limit it to your last 30 commands by typing history 30 . But that's pretty vanilla. You have more control over what Bash saves and how it saves it.

For example, if you add the following to your .bashrc, any commands that start with a space will not be saved to the history list:

HISTCONTROL=ignorespace

This can be useful if you need to pass a password to a command in plaintext. (Yes, that is horrible, but it still happens.)

If you don't want a frequently executed command to clutter your history with duplicate entries, use:

HISTCONTROL=ignorespace:erasedups

With this, every time you use a command, all its previous occurrences are removed from the history file, and only the last invocation is saved to your history list.

A history setting I particularly like is the HISTTIMEFORMAT setting. This will prepend all entries in your history file with a timestamp. For example, I use:

HISTTIMEFORMAT="%F %T  "

When I type history 5 , I get nice, complete information, like this:

1009  2018-06-11 22:34:38  cat /etc/hosts
1010  2018-06-11 22:34:40  echo $foo
1011  2018-06-11 22:34:42  echo $bar
1012  2018-06-11 22:34:44  ssh myhost
1013  2018-06-11 22:34:55  vim .bashrc

That makes it a lot easier to browse my command history and find the one I used two days ago to set up an SSH tunnel to my home lab (which I forget again, and again, and again).
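Collected in one place, the history settings discussed above make a small ~/.bashrc block. The HISTSIZE and HISTFILESIZE lines are extra, commonly used settings rather than something covered in this article, and the numbers are arbitrary:

# History settings from the sections above
HISTCONTROL=ignorespace:erasedups   # skip space-prefixed commands, keep only the last copy of duplicates
HISTTIMEFORMAT="%F %T  "            # timestamp entries shown by 'history'

# Extra (not covered above): how much history to keep
HISTSIZE=10000                      # entries kept in memory for this session
HISTFILESIZE=20000                  # entries kept in ~/.bash_history on disk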

Best Bash practices

I'll wrap this up with my top 11 list of the best (or good, at least; I don't claim omniscience) practices when writing Bash scripts. A short script pulling several of them together appears after the list.

  1. Bash scripts can become complicated and comments are cheap. If you wonder whether to add a comment, add a comment. If you return after the weekend and have to spend time figuring out what you were trying to do last Friday, you forgot to add a comment.

  2. Wrap all your variable names in curly braces, like ${myvariable}. Making this a habit makes things like ${variable}_suffix possible and improves consistency throughout your scripts.
  3. Do not use backticks when evaluating an expression; use the $() syntax instead. So use:

    for file in $(ls); do

    not

    for file in `ls`; do

    The former option is nestable, more easily readable, and keeps the general sysadmin population happy. Do not use backticks.
  4. Consistency is good. Pick one style of doing things and stick with it throughout your script. Obviously, I would prefer if people picked the $() syntax over backticks and wrapped their variables in curly braces. I would prefer it if people used two or four spaces -- not tabs -- to indent, but even if you choose to do it wrong, do it wrong consistently.
  5. Use the proper shebang for a Bash script. As I'm writing Bash scripts with the intention of only executing them with Bash, I most often use #!/usr/bin/bash as my shebang. Do not use #!/bin/sh or #!/usr/bin/sh. Your script will execute, but it'll run in compatibility mode -- potentially with lots of unintended side effects. (Unless, of course, compatibility mode is what you want.)
  6. When comparing strings, it's a good idea to quote your variables in if-statements, because if your variable is empty, Bash will throw an error for lines like these:

    if [ ${myvar} == "foo" ]; then
        echo "bar"
    fi

    Whereas the quoted version simply evaluates to false when the variable is empty:

    if [ "${myvar}" == "foo" ]; then
        echo "bar"
    fi

    Also, if you are unsure about the contents of a variable (e.g., when you are parsing user input), quote your variables to prevent interpretation of some special characters and make sure the variable is considered a single word, even if it contains whitespace.
  7. This is a matter of taste, I guess, but I prefer using the double equals sign ( == ) even when comparing strings in Bash. It's a matter of consistency, and even though -- for string comparisons only -- a single equals sign will work, my mind immediately goes "single equals is an assignment operator!"
  8. Use proper exit codes. Make sure that if your script fails to do something, you present the user with a written failure message (preferably with a way to fix the problem) and send a non-zero exit code:

    # we have failed
    echo "Process has failed to complete, you need to manually restart the whatchamacallit"
    exit 1

    This makes it easier to programmatically call your script from yet another script and verify its successful completion.
  9. Use Bash's built-in mechanisms to provide sane defaults for your variables or throw errors if variables you expect to be defined are not defined:

    # this sets the value of $myvar to redhat, and prints 'redhat'
    echo ${myvar:=redhat}

    # this throws an error reading 'The variable myvar is undefined, dear reader' if $myvar is undefined
    ${myvar:?The variable myvar is undefined, dear reader}
  10. Especially if you are writing a large script, and especially if you work on that large script with others, consider using the local keyword when defining variables inside functions. The local keyword creates a local variable, that is, one that is visible only within that function. This limits the possibility of clashing variables.
  11. Every sysadmin must do it sometimes: debug something on a console, either a real one in a data center or a virtual one through a virtualization platform. If you have to debug a script that way, you will thank yourself for remembering this: do not make the lines in your scripts too long!

    On many systems, the default width of a console is still 80 characters. If you need to debug a script on a console and that script has very long lines, you'll be a sad panda. Besides, a script with shorter lines -- the default is still 80 characters -- is a lot easier to read and understand in a normal editor, too!
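To make several of these points concrete, here is a small sketch that pulls practices 3, 5, 6, 7, 8, 9, and 10 together. The variable names, paths, and messages are invented for illustration; they are not taken from the article.

#!/usr/bin/bash
# Sketch only: demonstrates several of the practices above with made-up names.

# Practice 9: sane default for an optional variable, hard error for a required one
: "${WORK_DIR:=/tmp/work}"
: "${REPORT_NAME:?REPORT_NAME must be set before running this script}"

count_lines() {
    # Practice 10: keep variables local inside functions
    local input_file="${1}"

    # Practices 6 and 7: quote the variable in the test, use == for string comparison
    if [ "${input_file}" == "" ] || [ ! -f "${input_file}" ]; then
        echo "count_lines: missing or unreadable input file: ${input_file}"
        return 1
    fi

    # Practice 3: $() instead of backticks
    local line_count
    line_count="$(wc -l < "${input_file}")"
    echo "${input_file} has ${line_count} lines"
}

# Practice 8: a written failure message and a non-zero exit code on failure
if ! count_lines "${WORK_DIR}/${REPORT_NAME}"; then
    echo "Processing failed; check that ${WORK_DIR}/${REPORT_NAME} exists and is readable"
    exit 1
fi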


I truly love Bash. I can spend hours writing about it or exchanging nice tricks with fellow enthusiasts. Make sure you drop your favorites in the comments!

[Jul 12, 2020] My favorite Bash hacks

Jan 09, 2020 | opensource.com


When you work with computers all day, it's fantastic to find repeatable commands and tag them for easy use later on. They all sit there, tucked away in ~/.bashrc (or ~/.zshrc for Zsh users ), waiting to help improve your day!

In this article, I share some of my favorite of these helper commands for things I forget a lot, in hopes that they will save you, too, some heartache over time.

Say when it's over

When I'm using longer-running commands, I often multitask and then have to go back and check if the action has completed. But not anymore, with this helpful invocation of say (this is on MacOS; change for your local equivalent):

function looooooooong {
START=$(date +%s.%N)
$*
EXIT_CODE=$?
END=$(date +%s.%N)
DIFF=$(echo "$END - $START" | bc)
RES=$(python -c "diff = $DIFF; min = int(diff / 60); print('%s min' % min)")
result="$1 completed in $RES, exit code $EXIT_CODE."
echo -e "\n⏰ $result"
( say -r 250 $result 2>&1 > /dev/null & )
}

This command marks the start and end time of a command, calculates the minutes it takes, and speaks the command invoked, the time taken, and the exit code. I find this super helpful when a simple console bell just won't do.
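A hypothetical run might look like the following; the command being timed and the three-minute duration are invented, but the final line is the shape the function above prints:

$ looooooooong make build
[ ... normal make output ... ]

⏰ make completed in 3 min, exit code 0.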

... ... ...

There are many Docker commands, but there are even more docker compose commands. I used to forget the --rm flags, but not anymore with these useful aliases:

alias dc="docker-compose"
alias dcr="docker-compose run --rm"
alias dcb="docker-compose run --rm --build"

gcurl helper for Google Cloud

This one is relatively new to me, but it's heavily documented . gcurl is an alias to ensure you get all the correct flags when using local curl commands with authentication headers when working with Google Cloud APIs.

Git and ~/.gitignore

I work a lot in Git, so I have a special section dedicated to Git helpers.

One of my most useful helpers is one I use to clone GitHub repos. Instead of having to run:

git clone git@github.com:org/repo /Users/glasnt/git/org/repo

I set up a clone function:

clone(){
echo Cloning $1 to ~/git/$1
cd ~/git
git clone git@github.com:$1 $1
cd $1
}
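With that function in place, cloning and entering a repository becomes one short command. The organization/repository name and the /home/you home directory below are placeholders:

$ clone myorg/myrepo
Cloning myorg/myrepo to /home/you/git/myorg/myrepo
[ ... git clone output ... ]
$ pwd
/home/you/git/myorg/myrepo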

... ... ...

[Jul 11, 2020] Own your own content Vallard's Blog

Jul 11, 2020 | benincosa.com

Posted on December 31, 2019 by Vallard

Reading this morning on Hacker News was this article on how the old Internet has died because we trusted all our content to Facebook and Google. While hyperbole abounds in the headline and there are plenty of internet things out there that aren't owned by Google or Facebook (including this AWS free blog), it is true that much of the information and content is in the hands of a giant ad-serving service and a social echo chamber (well, that is probably too harsh).

I heard this advice many years ago that you should own your own content. While there isn't much value in my trivial or obscure blog that nobody reads, it matters to me and is the reason I've run it on my own software, my own servers, for 10+ years. This blog, for example, runs on open source WordPress, a Linux server hosted by a friend, and managed by me as I login and make changes.

But of course, that is silly! Why not publish on Medium like everyone else? Or publish on someone else's service? Isn't that the point of the internet? Maybe. But in another sense, to me, the point is freedom. Freedom to express, do what I want, say what I will with no restrictions. The ability to own what I say and freedom from others monetizing me directly. There's no walled garden and anyone can access the content I write in my own little funzone.

While that may seem like ridiculousness, to me it's part of my hobby, and something I enjoy. In the next decade, whether this blog remains up or is shut down, is not dependent upon the fates of Google, Facebook, Amazon, nor Apple. It's dependent upon me, whether I want it up or not. If I change my views, I can delete it. It won't just sit on the Internet because someone else's terms of service agreement changed. I am in control, I am in charge. That to me is important and the reason I run this blog, don't use other people's services, and why I advocate for owning your own content.

[Jul 10, 2020] I-O reporting from the Linux command line by Tyler Carrigan

Jul 10, 2020 | www.redhat.com

I/O reporting from the Linux command line Learn the iostat tool, its common command-line flags and options, and how to use it to better understand input/output performance in Linux.

Posted: July 9, 2020 | by Tyler Carrigan (Red Hat)



If you have followed my posts here at Enable Sysadmin, you know that I previously worked as a storage support engineer. One of my many tasks in that role was to help customers replicate backups from their production environments to dedicated backup storage arrays. Many times, customers would contact me concerned about the speed of the data transfer from production to storage.

Now, if you have ever worked in support, you know that there can be many causes for a symptom. However, the throughput of a system can have huge implications for massive data transfers. If all is well, we are talking hours, if not... I have seen a single replication job take months.

We know that Linux is loaded full of helpful tools for all manner of issues. For input/output monitoring, we use the iostat command. iostat is a part of the sysstat package and is not loaded on all distributions by default.

Installation and base run

I am using Red Hat Enterprise Linux 8 here and have included the install output below.

[ Want to try out Red Hat Enterprise Linux? Download it now for free. ]

NOTE: the command runs automatically after installation.

[root@rhel ~]# iostat
bash: iostat: command not found...
Install package 'sysstat' to provide command 'iostat'? [N/y] y
    
    
 * Waiting in queue... 
The following packages have to be installed:
lm_sensors-libs-3.4.0-21.20180522git70f7e08.el8.x86_64    Lm_sensors core libraries
sysstat-11.7.3-2.el8.x86_64    Collection of performance monitoring tools for Linux
Proceed with changes? [N/y] y
    
    
 * Waiting in queue... 
 * Waiting for authentication... 
 * Waiting in queue... 
 * Downloading packages... 
 * Requesting data... 
 * Testing changes... 
 * Installing packages... 
Linux 4.18.0-193.1.2.el8_2.x86_64 (rhel.test)     06/17/2020     _x86_64_    (4 CPU)
    
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.17    0.05    4.09    0.65    0.00   83.03
    
Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda             206.70      8014.01      1411.92    1224862     215798
sdc               0.69        20.39         0.00       3116          0
sdb               0.69        20.39         0.00       3116          0
dm-0            215.54      7917.78      1449.15    1210154     221488
dm-1              0.64        14.52         0.00       2220          0

If you run the base command without options, iostat displays CPU usage information. It also displays I/O stats for each partition on the system. The output includes totals, as well as per second values for both read and write operations. Also, note that the tps field is the total number of Transfers per second issued to a specific device.

The practical application is this: if you know what hardware is used, then you know what parameters it should be operating within. Once you combine this knowledge with the output of iostat , you can make changes to your system accordingly.

Interval runs

It can be useful in troubleshooting or data gathering phases to have a report run at a given interval. To do this, run the command with the interval (in seconds) at the end:

[root@rhel ~]# iostat -m 10
Linux 4.18.0-193.1.2.el8_2.x86_64 (rhel.test)     06/17/2020     _x86_64_    (4 CPU)
    
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.94    0.05    0.35    0.04    0.00   98.62
    
Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda              12.18         0.44         0.12       1212        323
sdc               0.04         0.00         0.00          3          0
sdb               0.04         0.00         0.00          3          0
dm-0             12.79         0.43         0.12       1197        329
dm-1              0.04         0.00         0.00          2          0
    
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.24    0.00    0.15    0.00    0.00   99.61
    
Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda               0.00         0.00         0.00          0          0
sdc               0.00         0.00         0.00          0          0
sdb               0.00         0.00         0.00          0          0
dm-0              0.00         0.00         0.00          0          0
dm-1              0.00         0.00         0.00          0          0
    
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.20    0.00    0.18    0.00    0.00   99.62
    
Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda               0.50         0.00         0.00          0          0
sdc               0.00         0.00         0.00          0          0
sdb               0.00         0.00         0.00          0          0
dm-0              0.50         0.00         0.00          0          0
dm-1              0.00         0.00         0.00          0          0

The above output is from a 30-second run.

You must use Ctrl + C to exit the run.
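If you would rather not interrupt it by hand, iostat also accepts a count after the interval; it prints that many reports and then exits on its own. For example, the following produces three megabyte-based reports, ten seconds apart, and then returns to the prompt:

[root@rhel ~]# iostat -m 10 3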

Easy reading

To clean up the output and make it easier to digest, use the following options:

-m changes the output to megabytes, which is a bit easier to read and is usually better understood by customers or managers.

[root@rhel ~]# iostat -m
Linux 4.18.0-193.1.2.el8_2.x86_64 (rhel.test)     06/17/2020     _x86_64_    (4 CPU)
    
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.51    0.09    0.55    0.07    0.00   97.77
    
Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda              22.23         0.81         0.21       1211        322
sdc               0.07         0.00         0.00          3          0
sdb               0.07         0.00         0.00          3          0
dm-0             23.34         0.80         0.22       1197        328
dm-1              0.07         0.00         0.00          2          0

-p allows you to specify a particular device to focus in on. You can combine this option with the -m for a nice and tidy look at a particularly concerning device and its partitions.

[root@rhel ~]# iostat -m -p sda
Linux 4.18.0-193.1.2.el8_2.x86_64 (rhel.test)     06/17/2020     _x86_64_    (4 CPU)
    
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.19    0.07    0.45    0.06    0.00   98.24
    
Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda              17.27         0.63         0.17       1211        322
sda2             16.83         0.62         0.17       1202        320
sda1              0.10         0.00         0.00          7          2
Advanced stats

If the default values just aren't getting you the information you need, you can use the -x flag to view extended statistics:

[root@rhel ~]# iostat -m -p sda -x 
Linux 4.18.0-193.1.2.el8_2.x86_64 (rhel.test)     06/17/2020     _x86_64_    (4 CPU)
    
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.06    0.06    0.40    0.05    0.00   98.43
    
Device            r/s     w/s     rMB/s     wMB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda             12.20    2.83      0.54      0.14     0.02     0.92   0.16  24.64    0.55    0.50   0.00    45.58    52.37   0.46   0.69
sda2            12.10    2.54      0.54      0.14     0.02     0.92   0.16  26.64    0.55    0.47   0.00    45.60    57.88   0.47   0.68
sda1             0.08    0.01      0.00      0.00     0.00     0.00   0.00  23.53    0.44    1.00   0.00    43.00   161.08   0.57   0.00

Some of the options to pay attention to here are:

r/s and w/s -- the number of read and write requests completed per second for the device
r_await and w_await -- the average time, in milliseconds, that read and write requests spend being served, including time spent waiting in the queue
aqu-sz -- the average length of the request queue issued to the device
%util -- the percentage of elapsed time the device was busy servicing requests (values approaching 100% suggest saturation)

There are other values present, but these are the ones to look out for.

Shutting down

This article covers just about everything you need to get started with iostat . If you have other questions or need further explanations of options, be sure to check out the man page or your preferred search engine. For other Linux tips and tricks, keep an eye on Enable Sysadmin!

[Jul 09, 2020] Bash Shortcuts Gem by Ian Miell

Jul 09, 2020 | zwischenzugs.com

TL;DR

These commands can tell you what key bindings you have in your bash shell by default.

bind -P | grep 'can be'
stty -a | grep ' = ..;'
Background

I'd always wondered what keystrokes did what in bash – I'd picked up some well-known ones (CTRL-r, CTRL-v, CTRL-d etc) from bugging people when I saw them being used, but always wondered whether there was a list of these I could easily get and comprehend. I found some, but always forgot where it was when I needed them, and couldn't remember many of them anyway.

Then, while debugging a problem with tab completion in 'here' documents, I stumbled across bind.

bind and stty

'bind' is a bash builtin, which means it's not a program like awk or grep, but is picked up and handled by the bash program itself.

It manages the various key bindings in the bash shell, covering everything from autocomplete to transposing two characters on the command line. You can read all about it in the bash man page (in the builtins section, near the end).

Bind is not responsible for all the key bindings in your shell – running the stty command will show the ones that apply to the terminal:

stty -a | grep ' = ..;'

These take precedence and can be confusing if you've tried to bind the same thing in your shell! Further confusion is caused by the fact that '^D' means 'CTRL and d pressed together', whereas in bind output it would be 'C-d'.

edit: am indebted to joepvd from hackernews for this beauty

    $ stty -a | awk 'BEGIN{RS="[;\n]+ ?"}; /= ..$/'
    intr = ^C
    quit = ^\
    erase = ^?
    kill = ^U
    eof = ^D
    swtch = ^Z
    susp = ^Z
    rprnt = ^R
    werase = ^W
    lnext = ^V
    flush = ^O
Breaking Down the Command
bind -P | grep can

Can be considered (almost) equivalent to a more instructive command:

bind -l | sed 's/.*/bind -q &/' | /bin/bash 2>&1 | grep -v warning: | grep can

'bind -l' lists all the available keystroke functions. For example, 'complete' is the auto-complete function normally triggered by hitting 'tab' twice. The output of this is passed to a sed command which passes each function name to 'bind -q', which queries the bindings.

sed 's/.*/bind -q &/'

The output of this is passed for running into /bin/bash.

/bin/bash 2>&1 | grep -v warning: | grep 'can be'

Note that this invocation of bash means that locally-set bindings will revert to the default bash ones for the output.

The '2>&1' puts the error output (the warnings) to the same output channel, filtering out warnings with a 'grep -v' and then filtering on output that describes how to trigger the function.

In the output of bind -q, 'C-' means 'the Ctrl key and'. So 'C-c' is the normal Ctrl and c. Similarly, '\e' means 'escape', so '\e\e' means 'escape pressed twice':

$ bind -q complete
complete can be invoked via "C-i", "\e\e".

So autocomplete can be invoked with escape pressed twice, and is also bound to 'C-i' (though on my machine I appear to need to press it twice – not sure why).

Add to bashrc

I added this alias as 'binds' in my bashrc so I could easily get hold of this list in the future.

alias binds="bind -P | grep 'can be'"

Now whenever I forget a binding, I type 'binds', and have a read :)


The Zinger

Browsing through the bash manual, I noticed that an option to bind enables binding to

-x keyseq:shell-command

So now all I need to remember is one shortcut to get my list (CTRL-x, then CTRL-o):

bind -x '"C-xC-o":bind -P | grep can'

Of course, you can bind to a single key if you want, and any command you want. You could also use this for practical jokes on your colleagues.

Now I'm going to sort through my history to see what I type most often :)


[Jul 09, 2020] My Favourite Secret Weapon strace

Jul 09, 2020 | zwischenzugs.com

Why strace ?

I'm often asked in my technical troubleshooting job to solve problems that development teams can't solve. Usually these do not involve knowledge of API calls or syntax, rather some kind of insight into what the right tool to use is, and why and how to use it. Probably because they're not taught in college, developers are often unaware that these tools exist, which is a shame, as playing with them can give a much deeper understanding of what's going on and ultimately lead to better code.

My favourite secret weapon in this path to understanding is strace.

strace (or its Solaris equivalents, truss and dtruss) is a tool that tells you which operating system (OS) calls your program is making.

An OS call (or just "system call") is your program asking the OS to provide some service for it. Since this covers a lot of the things that cause problems not directly to do with the domain of your application development (I/O, finding files, permissions etc) its use has a very high hit rate in resolving problems out of developers' normal problem space.

Usage Patterns

strace is useful in all sorts of contexts. Here's a couple of examples garnered from my experience.

My Netcat Server Won't Start!

Imagine you're trying to start an executable, but it's failing silently (no log file, no output at all). You don't have the source, and even if you did, the source code is neither readily available, nor ready to compile, nor readily comprehensible.

Simply running through strace will likely give you clues as to what's gone on.

$  nc -l localhost 80
nc: Permission denied

Let's say someone's trying to run this and doesn't understand why it's not working (let's assume manuals are unavailable).

Simply put strace at the front of your command. Note that the following output has been heavily edited for space reasons (deep breath):

 $ strace nc -l localhost 80
 execve("/bin/nc", ["nc", "-l", "localhost", "80"], [/* 54 vars */]) = 0
 brk(0)                                  = 0x1e7a000
 access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
 mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f751c9c0000
 access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
 open("/usr/local/lib/tls/x86_64/libglib-2.0.so.0", O_RDONLY) = -1 ENOENT (No such file or directory)
 stat("/usr/local/lib/tls/x86_64", 0x7fff5686c240) = -1 ENOENT (No such file or directory)
 [...]
 open("libglib-2.0.so.0", O_RDONLY)      = -1 ENOENT (No such file or directory)
 open("/etc/ld.so.cache", O_RDONLY)      = 3
 fstat(3, {st_mode=S_IFREG|0644, st_size=179820, ...}) = 0
 mmap(NULL, 179820, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f751c994000
 close(3)                                = 0
 access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
 open("/lib/x86_64-linux-gnu/libglib-2.0.so.0", O_RDONLY) = 3
 read(3, "\177ELF\2\1\1\3>\1\320k\1"..., 832) = 832
 fstat(3, {st_mode=S_IFREG|0644, st_size=975080, ...}) = 0
 mmap(NULL, 3072520, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f751c4b3000
 mprotect(0x7f751c5a0000, 2093056, PROT_NONE) = 0
 mmap(0x7f751c79f000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xec000) = 0x7f751c79f000
 mmap(0x7f751c7a1000, 520, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f751c7a1000
 close(3)                                = 0
 open("/usr/local/lib/libc.so.6", O_RDONLY) = -1 ENOENT (No such file or directory)
[...]
 mmap(NULL, 179820, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f751c994000
 close(3)                                = 0
 access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
 open("/lib/x86_64-linux-gnu/libnss_files.so.2", O_RDONLY) = 3
 read(3, "\177ELF\2\1\1\3>\1\20\""..., 832) = 832
 fstat(3, {st_mode=S_IFREG|0644, st_size=51728, ...}) = 0
 mmap(NULL, 2148104, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f751b8b0000
 mprotect(0x7f751b8bc000, 2093056, PROT_NONE) = 0
 mmap(0x7f751babb000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xb000) = 0x7f751babb000
 close(3)                                = 0
 mprotect(0x7f751babb000, 4096, PROT_READ) = 0
 munmap(0x7f751c994000, 179820)          = 0
 open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 3
 fcntl(3, F_GETFD)                       = 0x1 (flags FD_CLOEXEC)
 fstat(3, {st_mode=S_IFREG|0644, st_size=315, ...}) = 0
 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f751c9bf000
 read(3, "127.0.0.1\tlocalhost\n127.0.1.1\tal"..., 4096) = 315
 read(3, "", 4096)                       = 0
 close(3)                                = 0
 munmap(0x7f751c9bf000, 4096)            = 0
 open("/etc/gai.conf", O_RDONLY)         = 3
 fstat(3, {st_mode=S_IFREG|0644, st_size=3343, ...}) = 0
 fstat(3, {st_mode=S_IFREG|0644, st_size=3343, ...}) = 0
 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f751c9bf000
 read(3, "# Configuration for getaddrinfo("..., 4096) = 3343
 read(3, "", 4096)                       = 0
 close(3)                                = 0
 munmap(0x7f751c9bf000, 4096)            = 0
 futex(0x7f751c4af460, FUTEX_WAKE_PRIVATE, 2147483647) = 0
 socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 3
 connect(3, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
 getsockname(3, {sa_family=AF_INET, sin_port=htons(58567), sin_addr=inet_addr("127.0.0.1")}, [16]) = 0
 close(3)                                = 0
 socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 3
 connect(3, {sa_family=AF_INET6, sin6_port=htons(80), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = 0
 getsockname(3, {sa_family=AF_INET6, sin6_port=htons(42803), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, [28]) = 0
 close(3)                                = 0
 socket(PF_INET6, SOCK_STREAM, IPPROTO_TCP) = 3
 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
 bind(3, {sa_family=AF_INET6, sin6_port=htons(80), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 EACCES (Permission denied)
 close(3)                                = 0
 socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 3
 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
 bind(3, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EACCES (Permission denied)
 close(3)                                = 0
 write(2, "nc: ", 4nc: )                     = 4
 write(2, "Permission denied\n", 18Permission denied
 )     = 18
 exit_group(1)                           = ?

To most people that see this flying up their terminal this initially looks like gobbledygook, but it's really quite easy to parse when a few things are explained.

For each line, the name at the start is the system call being made, the items inside the parentheses are the arguments to that call, and the value after the equals sign is what the call returned. Take this one as an example:

open("/etc/gai.conf", O_RDONLY)         = 3

Therefore for this particular line, the system call is open , the arguments are the string /etc/gai.conf and the constant O_RDONLY , and the return value was 3 .

How to make sense of this?

Some of these system calls can be guessed or enough can be inferred from context. Most readers will figure out that the above line is the attempt to open a file with read-only permission.

In the case of the above failure, we can see that before the program calls exit_group, there is a couple of calls to bind that return "Permission denied":

 bind(3, {sa_family=AF_INET6, sin6_port=htons(80), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 EACCES (Permission denied)
 close(3)                                = 0
 socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 3
 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
 bind(3, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EACCES (Permission denied)
 close(3)                                = 0
 write(2, "nc: ", 4nc: )                     = 4
 write(2, "Permission denied\n", 18Permission denied
 )     = 18
 exit_group(1)                           = ?

We might therefore want to understand what "bind" is and why it might be failing.

You need to get a copy of the system call's documentation. On Ubuntu and related distributions of Linux, the documentation is in the manpages-dev package, and can be invoked by e.g. man 2 bind (I just used strace to determine which file man 2 bind opened and then did a dpkg -S to determine from which package it came!). You can also look it up online if you have access, but if you can auto-install via a package manager you're more likely to get docs that match your installation.

Right there in my man 2 bind page it says:

ERRORS
EACCES The address is protected, and the user is not the superuser.

So there is the answer – we're trying to bind to a port that can only be bound to if you are the super-user.
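Depending on what you actually need, there are two obvious ways out of that particular failure (port 8080 below is an arbitrary unprivileged choice):

# bind to an unprivileged port instead (anything above 1024 does not need root)
$ nc -l localhost 8080

# or keep port 80 and run the command as the superuser
$ sudo nc -l localhost 80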

My Library Is Not Loading!

Imagine a situation where developer A's perl script is working fine, but developer B's identical one is not (again, the output has been edited).
In this case, we strace the run on developer A's machine to see how it is working:

$ strace perl a.pl
execve("/usr/bin/perl", ["perl", "a.pl"], [/* 57 vars */]) = 0
brk(0)                                  = 0xa8f000
[...]fcntl(3, F_SETFD, FD_CLOEXEC)           = 0
fstat(3, {st_mode=S_IFREG|0664, st_size=14, ...}) = 0
rt_sigaction(SIGCHLD, NULL, {SIG_DFL, [], 0}, 8) = 0
brk(0xad1000)                           = 0xad1000
read(3, "use blahlib;\n\n", 4096)       = 14
stat("/space/myperllib/blahlib.pmc", 0x7fffbaf7f3d0) = -1 ENOENT (No such file or directory)
stat("/space/myperllib/blahlib.pm", {st_mode=S_IFREG|0644, st_size=7692, ...}) = 0
open("/space/myperllib/blahlib.pm", O_RDONLY) = 4
ioctl(4, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7fffbaf7f090) = -1 ENOTTY (Inappropriate ioctl for device)
[...]mmap(0x7f4c45ea8000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 5, 0x4000) = 0x7f4c45ea8000
close(5)                                = 0
mprotect(0x7f4c45ea8000, 4096, PROT_READ) = 0
brk(0xb55000)                           = 0xb55000
read(4, "swrite($_[0], $_[1], $_[2], $_[3"..., 4096) = 3596
brk(0xb77000)                           = 0xb77000
read(4, "", 4096)                       = 0
close(4)                                = 0
read(3, "", 4096)                       = 0
close(3)                                = 0
exit_group(0)                           = ?

We observe that the file is found in what looks like an unusual place.

open("/space/myperllib/blahlib.pm", O_RDONLY) = 4

Inspecting the environment, we see that:

$ env | grep myperl
PERL5LIB=/space/myperllib

So the solution is to set the same env variable before running:

export PERL5LIB=/space/myperllib
Get to know the internals bit by bit

If you do this a lot, or idly run strace on various commands and peruse the output, you can learn all sorts of things about the internals of your OS. If you're like me, this is a great way to learn how things work. For example, just now I've had a look at the file /etc/gai.conf , which I'd never come across before writing this.

Once your interest has been piqued, I recommend getting a copy of "Advanced Programming in the Unix Environment" by Stevens & Rago and reading it cover to cover. Not all of it will go in, but as you use strace more and more, and (hopefully) browse C code more and more, your understanding will grow.

Gotchas

If you're running a program that calls other programs, it's important to run with the -f flag, which "follows" child processes and straces them. -ff creates a separate file with the pid suffixed to the name.

If you're on solaris, this program doesn't exist – you need to use truss instead.

Many production environments will not have this program installed for security reasons. strace doesn't have many library dependencies (on my machine it has the same dependencies as 'echo'), so if you have permission, (or are feeling sneaky) you can just copy the executable up.

Other useful tidbits

You can attach to running processes (can be handy if your program appears to hang or the issue is not readily reproducible) with -p .

If you're looking at performance issues, then the time flags ( -t , -tt , -ttt , and -T ) can help significantly.
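Putting those flags together, a typical troubleshooting invocation might look like the sketch below; the program name and PID are placeholders, and -o simply writes the (often huge) trace to a file instead of the terminal:

# follow child processes, timestamp each call, show time spent inside each call,
# and write everything to a file
$ strace -f -tt -T -o /tmp/trace.out ./my-flaky-program

# or attach to an already-running process that appears to hang
$ strace -f -tt -T -p 12345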

vasudevram February 11, 2018 at 5:29 pm

Interesting post. One point: the errors start earlier than what you said. There is a call to access() near the top of the strace output, which fails:

access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)

vasudevram February 11, 2018 at 5:29 pm

I guess that could trigger the other errors.

Benji Wiebe February 11, 2018 at 7:30 pm

A failed access or open system call is not usually an error in the context of launching a program. Generally it is merely checking if a config file exists.

vasudevram February 11, 2018 at 8:24 pm

>A failed access or open system call is not usually an error in the context of launching a program.

Yes, good point, that could be so, if the programmer meant to ignore the error, and if it was not an issue to do so.

>Generally it is merely checking if a config file exists.

The file name being access'ed is "/etc/ld.so.nohwcap" – not sure if it is a config file or not.

[Jul 08, 2020] Exit Codes With Special Meanings

Jul 08, 2020 | www.tldp.org

Appendix E. Exit Codes With Special Meanings

Table E-1. Reserved Exit Codes

Exit Code Number | Meaning | Example | Comments
1      | Catchall for general errors | let "var1 = 1/0" | Miscellaneous errors, such as "divide by zero" and other impermissible operations
2      | Misuse of shell builtins (according to Bash documentation) | empty_function() {} | Missing keyword or command, or permission problem (and diff return code on a failed binary file comparison)
126    | Command invoked cannot execute | /dev/null | Permission problem or command is not an executable
127    | "command not found" | illegal_command | Possible problem with $PATH or a typo
128    | Invalid argument to exit | exit 3.14159 | exit takes only integer args in the range 0 - 255 (see first footnote)
128+n  | Fatal error signal "n" | kill -9 $PPID of script | $? returns 137 (128 + 9)
130    | Script terminated by Control-C | Ctl-C | Control-C is fatal error signal 2 (130 = 128 + 2, see above)
255*   | Exit status out of range | exit -1 | exit takes only integer args in the range 0 - 255

According to the above table, exit codes 1 - 2, 126 - 165, and 255 [1] have special meanings, and should therefore be avoided for user-specified exit parameters. Ending a script with exit 127 would certainly cause confusion when troubleshooting (is the error code a "command not found" or a user-defined one?). However, many scripts use an exit 1 as a general bailout-upon-error. Since exit code 1 signifies so many possible errors, it is not particularly useful in debugging.

There has been an attempt to systematize exit status numbers (see /usr/include/sysexits.h ), but this is intended for C and C++ programmers. A similar standard for scripting might be appropriate. The author of this document proposes restricting user-defined exit codes to the range 64 - 113 (in addition to 0 , for success), to conform with the C/C++ standard. This would allot 50 valid codes, and make troubleshooting scripts more straightforward. [2] All user-defined exit codes in the accompanying examples to this document conform to this standard, except where overriding circumstances exist, as in Example 9-2 .

Issuing a $? from the command-line after a shell script exits gives results consistent with the table above only from the Bash or sh prompt. Running the C-shell or tcsh may give different values in some cases.
Notes
[1] Out of range exit values can result in unexpected exit codes. An exit value greater than 255 returns an exit code modulo 256 . For example, exit 3809 gives an exit code of 225 (3809 % 256 = 225).
[2] An update of /usr/include/sysexits.h allocates previously unused exit codes from 64 - 78 . It may be anticipated that the range of unallotted exit codes will be further restricted in the future. The author of this document will not do fixups on the scripting examples to conform to the changing standard. This should not cause any problems, since there is no overlap or conflict in usage of exit codes between compiled C/C++ binaries and shell scripts.

[Jul 08, 2020] Exit Codes

From bash manual: The exit status of an executed command is the value returned by the waitpid system call or equivalent function. Exit statuses fall between 0 and 255, though, as explained below, the shell may use values above 125 specially. Exit statuses from shell builtins and compound commands are also limited to this range. Under certain circumstances, the shell will use special values to indicate specific failure modes.
For the shell’s purposes, a command which exits with a zero exit status has succeeded. A non-zero exit status indicates failure. This seemingly counter-intuitive scheme is used so there is one well-defined way to indicate success and a variety of ways to indicate various failure modes. When a command terminates on a fatal signal whose number is N, Bash uses the value 128+N as the exit status.
If a command is not found, the child process created to execute it returns a status of 127. If a command is found but is not executable, the return status is 126.
If a command fails because of an error during expansion or redirection, the exit status is greater than zero.
The exit status is used by the Bash conditional commands (see Conditional Constructs) and some of the list constructs (see Lists).
All of the Bash builtins return an exit status of zero if they succeed and a non-zero status on failure, so they may be used by the conditional and list constructs. All builtins return an exit status of 2 to indicate incorrect usage, generally invalid options or missing arguments.
Jul 08, 2020 | zwischenzugs.com

Not everyone knows that every time you run a shell command in bash, an 'exit code' is returned to bash.

Generally, if a command 'succeeds' you get an exit code of 0. If it doesn't succeed, you get a non-zero code.

1 is a 'general error', and others can give you more information (e.g., which signal killed it). 255 is the upper limit and means 'exit status out of range'.

grep joeuser /etc/passwd # in case of success returns 0, otherwise 1

or

grep not_there /dev/null
echo $?

$? is a special bash variable that's set to the exit code of each command after it runs.

Grep uses exit codes to indicate whether it matched or not. I have to look up every time which way round it goes: does finding a match or not return 0 ?
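For the record: grep returns 0 when it finds a match, 1 when it does not, and 2 on an error such as an unreadable file, which makes it convenient in conditionals. A minimal example:

if grep -q joeuser /etc/passwd; then
    echo "joeuser has an account"
else
    echo "no such user"
fi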

[Jul 08, 2020] Returning Values from Bash Functions by Mitch Frazier

Sep 11, 2009 | www.linuxjournal.com

Bash functions, unlike functions in most programming languages, do not allow you to return a value to the caller. When a bash function ends, its return value is its status: zero for success, non-zero for failure. To return values, you can set a global variable with the result, or use command substitution, or you can pass in the name of a variable to use as the result variable. The examples below describe these different mechanisms.

Although bash has a return statement, the only thing you can specify with it is the function's status, which is a numeric value like the value specified in an exit statement. The status value is stored in the $? variable. If a function does not contain a return statement, its status is set based on the status of the last statement executed in the function. To actually return arbitrary values to the caller you must use other mechanisms.

The simplest way to return a value from a bash function is to just set a global variable to the result. Since all variables in bash are global by default this is easy:

function myfunc()
{
    myresult='some value'
}

myfunc
echo $myresult

The code above sets the global variable myresult to the function result. Reasonably simple, but as we all know, using global variables, particularly in large programs, can lead to difficult-to-find bugs.

A better approach is to use local variables in your functions. The problem then becomes how to get the result to the caller. One mechanism is to use command substitution:

function myfunc()
{
    local  myresult='some value'
    echo "$myresult"
}

result=$(myfunc)   # or result=`myfunc`
echo $result

Here the result is output to stdout and the caller uses command substitution to capture the value in a variable. The variable can then be used as needed.

The other way to return a value is to write your function so that it accepts a variable name as part of its command line and then set that variable to the result of the function:

function myfunc()
{
    local  __resultvar=$1
    local  myresult='some value'
    eval $__resultvar="'$myresult'"
}

myfunc result
echo $result

Since we have the name of the variable to set stored in a variable, we can't set the variable directly; we have to use eval to actually do the setting. The eval statement basically tells bash to interpret the line twice: the first interpretation above results in the string result='some value', which is then interpreted once more and ends up setting the caller's variable.

When you store the name of the variable passed on the command line, make sure you store it in a local variable with a name that is unlikely to be used by the caller (which is why I used __resultvar rather than just resultvar ). If you don't, and the caller happens to choose the same name for their result variable as you use for storing the name, the result variable will not get set. For example, the following does not work:

function myfunc()
{
    local  result=$1
    local  myresult='some value'
    eval $result="'$myresult'"
}

myfunc result
echo $result

The reason it doesn't work is that when eval does the second interpretation and evaluates result='some value', result is now a local variable in the function, so it gets set rather than the caller's result variable.
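
As an aside (not from the article): on bash 4.3 and later you can avoid eval entirely with a nameref, which makes the passed-in name an alias for the caller's variable. A minimal sketch, assuming bash >= 4.3:

function myfunc()
{
    local -n __resultref=$1    # nameref: alias for the caller's variable
    local  myresult='some value'
    __resultref=$myresult      # assigns straight into the caller's variable
}

myfunc result
echo $result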

For more flexibility, you may want to write your functions so that they combine both result variables and command substitution:

function myfunc()
{
    local  __resultvar=$1
    local  myresult='some value'
    if [[ "$__resultvar" ]]; then
        eval $__resultvar="'$myresult'"
    else
        echo "$myresult"
    fi
}

myfunc result
echo $result
result2=$(myfunc)
echo $result2

Here, if no variable name is passed to the function, the value is output to the standard output.

Mitch Frazier is an embedded systems programmer at Emerson Electric Co. Mitch has been a contributor to and a friend of Linux Journal since the early 2000s.


David Krmpotic, 6 years ago (edited):

This is the best way: http://stackoverflow.com/a/... return by reference:

function pass_back_a_string() {
eval "$1='foo bar rab oof'"
}

return_var=''
pass_back_a_string return_var
echo $return_var

lxw (replying to David Krmpotic), 6 years ago:

I agree. After reading this passage, the same idea occurred to me.

phil, 6 years ago:

Since this page is a top hit on google:

The only real issue I see with returning via echo is that command substitution forks a new process, so the function no longer has access to set 'global' variables in the calling shell. They are still global in the sense that you can retrieve and set them within the new forked process, but as soon as that process is done, you will not see any of those changes.

e.g.
#!/bin/bash

myGlobal="very global"

call1() {
myGlobal="not so global"
echo "${myGlobal}"
}

tmp=$(call1) # keep in mind '$()' starts a new process

echo "${tmp}" # prints "not so global"
echo "${myGlobal}" # prints "very global"

lxw, 6 years ago:

Hello everyone,

In the 3rd method, I don't think the local variable __resultvar is necessary to use. Any problems with the following code?

function myfunc()
{
local myresult='some value'
eval "$1"="'$myresult'"
}

myfunc result
echo $result

code_monk, 6 years ago (edited):

I would caution against returning integers with "return $int". My code was working fine until it came across a -2 (negative two) and treated it as if it were 254, which tells me that bash functions return 8-bit unsigned ints that are not protected from overflow.

Emil Vikström (replying to code_monk), 5 years ago:

A function behaves as any other Bash command, and indeed POSIX processes. That is, they can write to stdout, read from stdin and have a return code. The return code is, as you have already noticed, a value between 0 and 255. By convention 0 means success while any other return code means failure.

This is also why Bash "if" statements treat 0 as success and non-zero as failure (most other programming languages do the opposite).

[Jul 07, 2020] The Missing Readline Primer by Ian Miell

Highly recommended!
This is from the book Learn Bash the Hard Way, available for $6.99.
Jul 07, 2020 | zwischenzugs.com

The Missing Readline Primer (April 23, 2019)

Readline is one of those technologies that is so commonly used many users don't realise it's there.

I went looking for a good primer on it so I could understand it better, but failed to find one. This is an attempt to write a primer that may help users get to grips with it, based on what I've managed to glean as I've tried to research and experiment with it over the years.

Bash Without Readline

First you're going to see what bash looks like without readline.

In your 'normal' bash shell, hit the TAB key twice. You should see something like this:

    Display all 2335 possibilities? (y or n)

That's because bash normally has an 'autocomplete' function that allows you to see what commands are available to you if you tap tab twice.

Hit n to get out of that autocomplete.

Another useful function that's commonly used is that if you hit the up arrow key a few times, then the previously-run commands should be brought back to the command line.

Now type:

$ bash --noediting

The --noediting flag starts up bash without the readline library enabled.

If you hit TAB twice now you will see something different: the shell no longer 'sees' your tab and just sends a tab direct to the screen, moving your cursor along. Autocomplete has gone.

Autocomplete is just one of the things that the readline library gives you in the terminal. You might want to try hitting the up or down arrows as you did above to see that that no longer works as well.

Hit return to get a fresh command line, and exit your non-readline-enabled bash shell:

$ exit
Other Shortcuts

There are a great many shortcuts like autocomplete available to you if readline is enabled. I'll quickly outline four of the most commonly-used of these before explaining how you can find out more.

$ echo 'some command'

There should not be many surprises there. Now if you hit the 'up' arrow, you will see you can get the last command back on your line. If you like, you can re-run the command, but there are other things you can do with readline before you hit return.

If you hold down the ctrl key and then hit a at the same time, your cursor will return to the start of the line. One conventional way to write this 'multi-key' input is \C-a : the \C represents the control key, and the -a means the a key is pressed at the same time.

Now if you hit \C-e ( ctrl and e ), your cursor moves to the end of the line. I use these two shortcuts dozens of times a day.

Another frequently useful one is \C-l , which clears the screen, but leaves your command line intact.

The last one I'll show you allows you to search your history to find matching commands while you type. Hit \C-r , and then type ec . You should see the echo command you just ran like this:

    (reverse-i-search)`ec': echo echo

Then do it again, but keep hitting \C-r over and over. You should see all the commands that have `ec` in them that you've input before (if you've only got one echo command in your history then you will only see one). As you see them you are placed at that point in your history and you can move up and down from there or just hit return to re-run if you want.

There are many more shortcuts that readline gives you. Next I'll show you how to view these.

Using `bind` to Show Readline Shortcuts

If you type:

$ bind -p

You will see a list of bindings that readline is capable of. There's a lot of them!

Have a read through if you're interested, but don't worry about understanding them all yet.

If you type:

$ bind -p | grep C-a

you'll pick out the 'beginning-of-line' binding you used before, and see the \C-a notation I showed you before.

As an exercise at this point, you might want to look for the \C-e and \C-r bindings we used previously.

If you want to look through the entirety of the bind -p output, then you will want to know that \M refers to the Meta key (which you might also know as the Alt key), and \e refers to the Esc key on your keyboard. The 'escape' key bindings are different in that you don't hit it and another key at the same time, rather you hit it, and then hit another key afterwards. So, for example, typing the Esc key, and then the ? key also tries to auto-complete the command you are typing. This is documented as:

    "\e?": possible-completions

in the bind -p output.
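
As an aside (my addition, not part of the primer), bind can also query and change bindings for the current shell session; the particular keys and functions below are just examples, and the grep output may vary by system:

$ bind -p | grep end-of-line          # show which keys run a given readline function
"\C-e": end-of-line
$ bind '"\C-g": kill-whole-line'      # bind ctrl-g to a readline function
$ bind '"\C-t": "date\n"'             # bind ctrl-t to a macro that types 'date' and return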

Readline and Terminal Options

If you've looked over the possibilities that readline offers you, you might have seen the \C-r binding we looked at earlier:

    "\C-r": reverse-search-history

You might also have seen that there is another binding that allows you to search forward through your history too:

    "\C-s": forward-search-history

What often happens to me is that I hit \C-r over and over again, and then go too fast through the history and fly past the command I was looking for. In these cases I might try to hit \C-s to search forward and get to the one I missed.

Watch out though! Hitting \C-s to search forward through the history might well not work for you.

Why is this, if the binding is there and readline is switched on?

It's because something picked up the \C-s before it got to the readline library: the terminal settings.

The terminal program you are running in may have standard settings that do other things on hitting some of these shortcuts before readline gets to see it.

If you type:

$ stty -e

you should get output similar to this:

speed 9600 baud; 47 rows; 202 columns;
lflags: icanon isig iexten echo echoe -echok echoke -echonl echoctl -echoprt -altwerase -noflsh -tostop -flusho pendin -nokerninfo -extproc
iflags: -istrip icrnl -inlcr -igncr ixon -ixoff ixany imaxbel -iutf8 -ignbrk brkint -inpck -ignpar -parmrk
oflags: opost onlcr -oxtabs -onocr -onlret
cflags: cread cs8 -parenb -parodd hupcl -clocal -cstopb -crtscts -dsrflow -dtrflow -mdmbuf
discard dsusp   eof     eol     eol2    erase   intr    kill    lnext
^O      ^Y      ^D      <undef> <undef> ^?      ^C      ^U      ^V
min     quit    reprint start   status  stop    susp    time    werase
1       ^\      ^R      ^Q      ^T      ^S      ^Z      0       ^W

You can see on the last four lines ( discard dsusp [...] ) there is a table of key bindings that your terminal will pick up before readline sees them. The ^ character (known as the 'caret') here represents the ctrl key that we previously represented with a \C .

If you think this is confusing I won't disagree. Unfortunately in the history of Unix and Linux documenters did not stick to one way of describing these key combinations.

If you encounter a problem where the terminal options seem to catch a shortcut key binding before it gets to readline, then you can use the stty program to unset that binding. In this case, we want to unset the 'stop' binding.

If you are in the same situation, type:

$ stty stop undef

Now, if you re-run stty -e , the last two lines might look like this:

[...]
min     quit    reprint start   status  stop    susp    time    werase
1       ^\      ^R      ^Q      ^T      <undef> ^Z      0       ^W

where the stop entry now has <undef> underneath it.

Strangely, for me C-r is also bound to 'reprint' above ( ^R ).

But (on my terminals at least) that gets to readline without issue as I search up the history. Why this is the case I haven't been able to figure out. I suspect that reprint is ignored by modern terminals that don't need to 'reprint' the current line.

While we are looking at this table:

discard dsusp   eof     eol     eol2    erase   intr    kill    lnext
^O      ^Y      ^D      <undef> <undef> ^?      ^C      ^U      ^V
min     quit    reprint start   status  stop    susp    time    werase
1       ^\      ^R      ^Q      ^T      <undef> ^Z      0       ^W

it's worth noting a few other key bindings that are used regularly.

First, one you may well already be familiar with is \C-c , which interrupts a program, terminating it:

$ sleep 99
[[Hit \C-c]]
^C
$

Similarly, \C-z suspends a program, allowing you to 'foreground' it again and continue with the fg builtin.

$ sleep 10
[[ Hit \C-z]]
^Z
[1]+  Stopped                 sleep 10
$ fg
sleep 10

\C-d sends an 'end of file' character. It's often used to indicate to a program that input is over. If you type it on a bash shell, the bash shell you are in will close.

Finally, \C-w deletes the word before the cursor.

These are the most commonly-used shortcuts that are picked up by the terminal before they get to the readline library.

Daz April 29, 2019 at 11:15 pm

Hi Ian,

What OS are you running because stty -e gives the following on Centos 6.x and Ubuntu 18.04.2

stty -e
stty: invalid argument '-e'
Try 'stty --help' for more information.

Leon May 14, 2019 at 5:12 am

`stty -a` works for me (Ubuntu 14)

yachris May 16, 2019 at 4:40 pm

You might want to check out the 'rlwrap' program. It allows you to have readline behavior on programs that don't natively support readline, but which have a 'type in a command' interface. For instance, we use Oracle here (alas :-) ), and the 'sqlplus' program, which lets you type SQL commands to an Oracle instance, does not have anything like readline built in, so you can't go back to edit previous commands. But running 'rlwrap sqlplus' gives me readline behavior in sqlplus! It's fantastic to have.

AriSweedler May 17, 2019 at 4:50 am

I was told to use this in a class, and I didn't understand what I did. One rabbit hole later, I was shocked and amazed at how advanced the readline library is. One thing I'd like to add is that you can write a '~/.inputrc' file and have those readline commands sourced at startup!

I do not know exactly when or how the inputrc is read.

Most of what I learned about inputrc stuff is from https://www.topbug.net/blog/2017/07/31/inputrc-for-humans/ .

Here is my inputrc, if anyone wants: https://github.com/AriSweedler/dotfiles/blob/master/.inputrc .
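
To make that concrete (an illustration, not taken from the comment; the particular settings are just examples of standard readline options), a minimal ~/.inputrc could look like this:

# ~/.inputrc -- read by readline-enabled programs at startup
set completion-ignore-case on        # case-insensitive tab completion
set show-all-if-ambiguous on         # list matches on the first TAB, not the second
"\e[A": history-search-backward      # Up arrow: search history for the typed prefix
"\e[B": history-search-forward       # Down arrow: search forward

In a running bash you can reload it with the re-read-init-file binding (\C-x \C-r by default).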

[Jul 07, 2020] More stupid Bash tricks- Variables, find, file descriptors, and remote operations - Enable Sysadmin by Valentin Bajrami

The first part is at Stupid Bash tricks- History, reusing arguments, files and directories, functions, and more - Enable Sysadmin
Jul 02, 2020 | www.redhat.com
These tips and tricks will make your Linux command line experience easier and more efficient.

This blog post is the second of two covering some practical tips and tricks to get the most out of the Bash shell. In part one , I covered history, last argument, working with files and directories, reading files, and Bash functions. In this segment, I cover shell variables, find, file descriptors, and remote operations.

Use shell variables

Bash sets a number of shell variables automatically when it is invoked. Why would I run hostname when I can use $HOSTNAME, or run whoami when I can use $USER? Bash variables are very fast and do not require external applications.

These are a few frequently-used variables:

$PATH
$HOME
$USER
$HOSTNAME
$PS1
..
$PS4

Use the echo command to expand variables. For example, the $PATH shell variable can be expanded by running:

$> echo $PATH


Use the find command

The find command is probably one of the most used tools within the Linux operating system. It is extremely useful in interactive shells. It is also used in scripts. With find I can list files older or newer than a specific date, delete them based on that date, change permissions of files or directories, and so on.

Let's get more familiar with this command.

To list files older than 30 days, I simply run:

$> find /tmp -type f -mtime +30

To delete files older than 30 days, run:

$> find /tmp -type f -mtime +30 -exec rm -rf {} \;

or

$> find /tmp -type f -mtime +30 -exec rm -rf {} +

While the above commands delete files older than 30 days, note that the first form (ending in \;) forks rm once for every file found, while the + form already batches arguments. Another way to batch deletions, here removing stale .tmp files, is to pipe NUL-separated names to xargs:

$> find /tmp -name '*.tmp' -print0 | xargs -0 rm

I can use find to list sha256sum files only by running:

$> find . -type f -exec sha256sum {} +

And now, to track down duplicate .jpg files, checksum them all and keep only one line per unique checksum:

$> find . -type f -name '*.jpg' -exec sha256sum {} + | sort -uk1,1
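
The command above lists only one entry per unique checksum; to actually see the duplicate groups you could (assuming GNU coreutils, since -w and --all-repeated are GNU uniq options) compare on the 64-character SHA-256 field:

$> find . -type f -name '*.jpg' -exec sha256sum {} + | sort | uniq -w64 --all-repeated=separate
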
Reference file descriptors

In the Bash shell, file descriptors (FDs) are important in managing the input and output of commands. Many people have issues understanding file descriptors correctly. Each process has three default file descriptors, namely:

Code  Meaning          Location      Description
0     Standard input   /dev/stdin    Keyboard, file, or some stream
1     Standard output  /dev/stdout   Monitor, terminal, display
2     Standard error   /dev/stderr   Non-zero exit codes are usually sent to FD 2; display

Now that you know what the default FDs do, let's see them in action. I start by creating a directory named foo , which contains file1 .

$> ls foo/ bar/
ls: cannot access 'bar/': No such file or directory
foo/:
file1

The output No such file or directory goes to Standard Error (stderr) and is also displayed on the screen. I will run the same command, but this time use 2> to omit stderr:

$> ls foo/ bar/ 2>/dev/null
foo/:
file1

It is possible to send the output of foo to Standard Output (stdout) and to a file simultaneously, and ignore stderr. For example:

$> { ls foo bar | tee -a ls_out_file ;} 2>/dev/null
foo:
file1

Then:

$> cat ls_out_file
foo:
file1

The following command sends stdout to a file and stderr to /dev/null so that the error won't display on the screen:

$> ls foo/ bar/ >to_stdout 2>/dev/null
$> cat to_stdout
foo/:
file1

The following command sends stdout and stderr to the same file:

$> ls foo/ bar/ >mixed_output 2>&1
$> cat mixed_output
ls: cannot access 'bar/': No such file or directory
foo/:
file1

This is what happened in the last example, where stdout and stderr were redirected to the same file:

    ls foo/ bar/ >mixed_output 2>&1
             |          |
             |          Redirect stderr to where stdout is sent
             |                                                        
             stdout is sent to mixed_output

Another short trick (> Bash 4.4) to send both stdout and stderr to the same file uses the ampersand sign. For example:

$> ls foo/ bar/ &>mixed_output

Here is a more complex redirection:

exec 3>&1 >write_to_file; echo "Hello World"; exec 1>&3 3>&-

This is what occurs: exec 3>&1 duplicates the current stdout onto file descriptor 3; >write_to_file then points stdout at the file, so the echo output lands in write_to_file; finally, exec 1>&3 restores stdout from FD 3 and 3>&- closes the now-unneeded descriptor.

Often it is handy to group commands, and then send the Standard Output to a single file. For example:

$> { ls non_existing_dir; non_existing_command; echo "Hello world"; } 2> to_stderr
Hello world

As you can see, only "Hello world" is printed on the screen, but the output of the failed commands is written to the to_stderr file.

Execute remote operations

I use Telnet, netcat, Nmap, and other tools to test whether a remote service is up and whether I can connect to it. These tools are handy, but they aren't installed by default on all systems.

Fortunately, there is a simple way to test a connection without using external tools. To see if a remote server is running a web, database, SSH, or any other service, run:

$> timeout 3 bash -c '</dev/tcp/remote_server/remote_port' || echo "Failed to connect"

For example, to see if serverA is running the MariaDB service:

$> timeout 3 bash -c '</dev/tcp/serverA/3306' || echo "Failed to connect"

If the connection fails, the Failed to connect message is displayed on your screen.

Assume serverA is behind a firewall/NAT. I want to see if the firewall is configured to allow a database connection to serverA , but I haven't installed a database server yet. To emulate a database port (or any other port), I can use the following:

[serverA ~]# nc -l 3306

On clientA , run:

[clientA ~]# timeout 3 bash -c '</dev/tcp/serverA/3306' || echo "Failed"

While I am discussing remote connections, what about running commands on a remote server over SSH? I can use the following command:

$> ssh remotehost <<EOF  # Press the Enter key here
> ls /etc
EOF

This command runs ls /etc on the remote host.

I can also execute a local script on the remote host without having to copy the script over to the remote server. One way is to enter:

$> ssh remote_host 'bash -s' < local_script
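
If the local script expects arguments, they can be appended after bash -s; a sketch (arg1 and arg2 are placeholders, local_script is the same local script as above):

$> ssh remote_host 'bash -s' -- arg1 arg2 < local_script

On the remote side the script sees arg1 and arg2 as $1 and $2.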

Another example is to pass environment variables locally to the remote server and terminate the session after execution.

$> exec ssh remote_host ARG1=FOO ARG2=BAR 'bash -s' <<'EOF'
> printf %s\\n "$ARG1" "$ARG2"
> EOF
Password:
FOO
BAR
Connection to remote_host closed.

There are many other complex actions I can perform on the remote host.

Wrap up

There is certainly more to Bash than I was able to cover in this two-part blog post. I am sharing what I know and what I deal with daily. The idea is to familiarize you with a few techniques that could make your work less error-prone and more fun.

Valentin Bajrami

Valentin is a system engineer with more than six years of experience in networking, storage, high-performing clusters, and automation. He is involved in different open source projects like bash, Fedora, Ceph, FreeBSD and is a member of Red Hat Accelerators.

[Jul 06, 2020] Take Responsibility for Training

Jul 06, 2020 | zwischenzugs.com

There's a great quote from Andy Grove, founder of Intel about training:

  Training is the manager's job. Training is the highest leverage activity a manager can do to increase the output of an organization. If a manager spends 12 hours preparing training for 10 team members that increases their output by 1% on average, the result is 200 hours of increased output from the 10 employees (each works about 2000 hours a year). Don't leave training to outsiders, do it yourself.

And training isn't just about being in a room and explaining things to people – it's about getting in the field and showing people how to respond to problems, how to think about things, and where they need to go next. The point is: take ownership of it.

I personally trained people in things like Git and Docker and basic programming whenever I got the chance to. This can demystify these skills and empower your staff to go further. It also sends a message about what's important – if the boss spends time on triage, training and hiring, then they must be important.

[Jul 06, 2020] The Runbooks Project

Jul 06, 2020 | zwischenzugs.com

I already talked about this in the previous post , but every subsequent attempt I made to get a practice of writing runbooks going was hard going. No-one ever argues with the logic of efficiency and saved time, but when it comes to putting the barn up, pretty much everyone is too busy with something else to help.

Looking at the history of these kind of efforts , it seems that people need to be forced – against their own natures – into following these best practices that invest current effort for future operational benefit.

Examples from The Checklist Manifesto included:

In the case of my previous post, it was frustration for me at being on-call that led me to spend months writing up runbooks. The main motivation that kept me going was that it would be (as a minimal positive outcome) for my own benefit . This intrinsic motivation got the ball rolling, and the effort was then sustained and developed by the other three more process-oriented factors.

There's a commonly-seen pattern here:

If you crack how to do that reliably, then you're going to be pretty good at building businesses.

It Doesn't Always Help

That wasn't the only experience I had trying to spread what I thought was good practice. In other contexts, I learned, the application of these methods was unhelpful.

In my next job, I worked on a new and centralised fast-changing system in a large org, and tried to write helpful docs to avoid repeating solving the same issues over and over. Aside from the authority and 'critical mass' problems outlined above, I hit a further one: the system was changing too fast for the learnings to be that useful. Bugs were being fixed quickly (putting my docs out of date similarly quickly) and new functionality was being added, leading to substantial wasted effort and reduced benefit.

Discussing this with a friend, I was pointed at a framework that already existed called Cynefin that had already thought about classifying these differences of context, and what was an appropriate response to them. Through that lens, my mistake had been to try and impose what might be best practice in a 'Complicated'/'Clear' context to a context that was 'Chaotic'/'Complex'. 'Chaotic' situations are too novel or under-explored to be susceptible to standard processes. Fast action and equally fast evaluation of system response is required to build up practical experience and prepare the way for later stabilisation.

'Why Don't You Just Automate It?'

I get this a lot. It's an argument that gets my goat, for several reasons.

Runbooks are a useful first step to an automated solution

If a runbook is mature and covers its ground well, it serves as an almost perfect design document for any subsequent automation solution. So it's in itself a useful precursor to automation for any non-trivial problem.

Automation is difficult and expensive

It is never free. It requires maintenance. There are always corner cases that you may not have considered. It's much easier to write: 'go upstairs' than build a robot that climbs stairs .

Automation tends to be context-specific

If you have a wide-ranging set of contexts for your problem space, then a runbook provides the flexibility to be applied in any of these contexts when paired with a human mind. For example: your shell script solution will need to reliably cater for all these contexts to be useful; not every org can use your Ansible recipe; not every network can access the internet.

Automation is not always practicable

In many situations, changing or releasing software to automate a solution is outside your control or influence .

A Public Runbooks Project

All my thoughts on this subject so far have been predicated on writing proprietary runbooks that are consumed and maintained within an organisation.

What I never considered was gaining the critical mass needed by open sourcing runbooks, and asking others to donate theirs so we can all benefit from each others' experiences.

So we at Container Solutions have decided to open source the runbooks we have built up that are generally applicable to the community. They are growing all the time, and we will continue to add to them.

Call for Runbooks

We can't do this alone, so are asking for your help!

However you want to help, you can either raise a PR or an issue, or contact me directly.

[Jul 06, 2020] Things I Learned Managing Site Reliability (2017) - Hacker News

Jul 06, 2020 | news.ycombinator.com
Things I Learned Managing Site Reliability (2017) ( zwischenzugs.com )
109 points by bshanks on Feb 26, 2018 | 13 comments
foo101 on Feb 26, 2018
I really like the point about runbooks/playbooks.

> We ended up embedding these dashboard within Confluence runbooks/playbooks followed by diagnosing/triaging, resolving, and escalation information. We also ended up associating these runbooks/playbooks with the alerts and had the links outputted into the operational chat along with the alert in question so people could easily follow it back.

When I used to work for Amazon, as a developer, I was required to write a playbook for every microservice I developed. The playbook had to be so detailed that, in theory, any site reliability engineer, who has no knowledge of the service should be able to read the playbook and perform the following activities:

- Understand what the service does.

- Learn all the curl commands to run to test each service component in isolation and see which ones are not behaving as expected.

- Learn how to connect to the actual physical/virtual/cloud systems that keep the service running.

- Learn which log files to check for evidence of problems.

- Learn which configuration files to edit.

- Learn how to restart the service.

- Learn how to rollback the service to an earlier known good version.

- Learn resolution to common issues seen earlier.

- Perform a checklist of activities to be performed to ensure all components are in good health.

- Find out which development team of ours to page if the issue remains unresolved.

It took a lot of documentation and excellent organization of such documentation to keep the services up and running.

twic on Feb 26, 2018
A far-out old employer of mine decided that their standard format for alerts, sent by applications to the central monitoring system, would include a field for a URL pointing to some relevant documentation.

I think this was mostly pushed through by sysadmins annoyed at getting alerts from new applications that didn't mean anything to them.

peterwwillis on Feb 26, 2018
When you get an alert, you have to first understand the alert, and then you have to figure out what to do about it. The majority of alerts, when people don't craft them according to a standard/policy, look like this:
  Subject: Disk usage high
  Priority: High
  Message: 
    There is a problem in cluster ABC.
    Disk utilization above 90%.
    Host 1.2.3.4.
It's a pain in the ass to go figure out what is actually affected, why it's happening, and track down some kind of runbook that describes how to fix this specific case (because it may vary from customer to customer, not to mention project to project). This is usually the state of alerts until a single person (who isn't a manager; managers hate cleaning up inefficiencies) gets so sick and tired of it that they take the weekend to overhaul one alert at a time to provide better insight as to what is going on and how to fix it. Any attempt to improve docs for those alerts are never updated by anyone but this lone individual.

Providing a link to a runbook makes resolving issues a lot faster. It's even better if the link is to a Wiki page, so you can edit it if the runbook isn't up to date.

adrianratnapala on Feb 26, 2018
So did the system work, and how did it work?

Basically you are saying you were required to be really diligent about the playbooks and put effort in to get them right.

Did people really put that effort in? Was it worth it? If so, what elments of the culture/organisation/process made people do the right thing when it is so much easier for busy people to get sloppy?

foo101 on Feb 27, 2018
The answer is "Yes" to all of your questions.

Regarding the question about culture, yes, busy people often get sloppy. But when a P1 alert comes because a site reliability engineer could not resolve the issue by following the playbook, it looks bad on the team and a lot of questions are asked by all affected stakeholders (when a service goes down in Amazon it may affect multiple other teams) about why the playbook was deficient. Nobody wants to be in a situation like this. In fact, no developer wants to be woken up at 2 a.m. because a service went down and the issue could not be fixed by the on-call SRE. So it is in their interest to write good and detailed playbooks.

zwischenzug on Feb 26, 2018
That sounds like a great process there. It staggers me how much people a) underestimate the investment required to maintain that kind of documentation, and b) underestimate how much value it brings. It's like brushing your teeth.
peterwwillis on Feb 26, 2018
> It's far more important to have a ticketing system that functions reliably and supports your processes than the other way round.

The most efficient ticketing systems I have ever seen were heavily customized in-house. When they moved to a completely different product, productivity in addressing tickets plummeted. They stopped generating tickets to deal with it.

> After process, documentation is the most important thing, and the two are intimately related.

If you have two people who are constantly on call to address issues because nobody else knows how to deal with it, you are a victim of a lack of documentation. Even a monkey can repair a space shuttle if they have a good manual.

I partly rely on incident reports and issues as part of my documentation. Sometimes you will get an issue like "disk filling up", and maybe someone will troubleshoot it and resolve it with a summary comment of "cleaned up free space in X process". Instead of making that the end of it, create a new issue which describes the problem and steps to resolve in detail. Update the issue over time as necessary. Add a tag to the issue called 'runbook'. Then mark related issues as duplicates of this one issue. It's kind of horrible, but it seamlessly integrates runbooks with your issue tracking.

mdaniel on Feb 26, 2018
Even a monkey can repair a space shuttle if they have a good manual

I would like to point out that the dependency chain for repairing the space shuttle (or worse: microservices) can turn the need for understanding (or authoring) one document into understanding 12+ documents, or run the risk of making a document into a "wall of text," copy-paste hell, and/or out-of-date.

Capturing the contextual knowledge required to make an administration task straight-forward can easily turn the forest into the trees.

I would almost rather automate the troubleshooting steps than have to write sufficiently specific English to express what one should do in given situations, with the caveat that such automation takes longer to write than said documentation.

zwischenzug on Feb 26, 2018
Yeah, that's exactly what we found - we created a JIRA project called 'DOCS', which made search trivial:

'docs disk filling up'

tabtab on Feb 26, 2018
It's pretty much organizing 101: study situation, plan, track well, document well but in a practical sense (write docs that people will actually read), get feedback from everybody, learn from your mistakes, admit your mistakes, and make the system and process better going forward.
willejs on Feb 26, 2018
This was posted a while back, and you can see the original thread with more comments here https://news.ycombinator.com/item?id=14031180
stareatgoats on Feb 26, 2018
I may be out of touch with current affairs, but I don't think I've encountered a single workplace where documentation has worked. Sometimes because people were only hired to put out fires, sometimes because there was no sufficiently customized ticketing system, sometimes because they simply didn't know how to abstract their tasks into well written documents.

And in many cases because people thought they might be out of a job if they put their solutions in print. I'm guessing managers still need to counter those tendencies actively if they want documentation to happen. Plenty of good pointers in this article, I found.

commandlinefan on Feb 26, 2018
> I was trusted (culture, again!)

This is the kicker - and the rarity. I don't think it's all trust, though. When your boss already knows going into Q1 that he's going to be fired in Q2 if he doesn't get 10 specific (and myopically short-term) agenda items addressed, it doesn't matter how much he trusts you, you're going to be focusing on only the things that have the appearance of ROI after a few hours of work, no matter how inefficient they are in the long term.


[Jul 06, 2020] BASH Shell Redirect stderr To stdout ( redirect stderr to a File ) by Vivek Gite

Jun 06, 2020 | www.cyberciti.biz

... ... ...

Redirecting the standard error stream to a file

The following will redirect program error message to a file called error.log:
$ program-name 2> error.log
$ command1 2> error.log

For example, use the grep command for a recursive search in the $HOME directory and redirect all errors (stderr) to a file named grep-errors.txt as follows:
$ grep -R 'MASTER' $HOME 2> /tmp/grep-errors.txt
$ cat /tmp/grep-errors.txt

Sample outputs:

grep: /home/vivek/.config/google-chrome/SingletonSocket: No such device or address
grep: /home/vivek/.config/google-chrome/SingletonCookie: No such file or directory
grep: /home/vivek/.config/google-chrome/SingletonLock: No such file or directory
grep: /home/vivek/.byobu/.ssh-agent: No such device or address
Redirecting the standard error (stderr) and stdout to file

Use the following syntax:
$ command-name &>file
We can also use the following syntax:
$ command > file-name 2>&1
We can write both stderr and stdout to two different files too. Let us try out our previous grep command example:
$ grep -R 'MASTER' $HOME 2> /tmp/grep-errors.txt 1> /tmp/grep-outputs.txt
$ cat /tmp/grep-outputs.txt

Redirecting stderr to stdout to a file or another command

Here is another useful example where both stderr and stdout sent to the more command instead of a file:
# find /usr/home -name .profile 2>&1 | more

Redirect stderr to stdout

Use the command as follows:
$ command-name 2>&1
$ command-name > file.txt 2>&1
## bash only ##
$ command2 &> filename
$ sudo find / -type f -iname ".env" &> /tmp/search.txt

Redirections are processed from left to right. Hence, order matters. For example:
command-name 2>&1 > file.txt ## wrong ##
command-name > file.txt 2>&1 ## correct ##

How to redirect stderr to stdout in Bash script

A sample shell script used to update VM when created in the AWS/Linode server:

#!/usr/bin/env bash
# Author - nixCraft under GPL v2.x+
# Debian/Ubuntu Linux script for EC2 automation on first boot
# ------------------------------------------------------------
# My log file - Save stdout to $LOGFILE
LOGFILE="/root/logs.txt"
 
# My error file - Save stderr to $ERRFILE
ERRFILE="/root/errors.txt"
 
# Start it 
printf "Starting update process ... \n" 1>"${LOGFILE}"
 
# All errors should go to error file 
apt-get -y update 2>"${ERRFILE}"
apt-get -y upgrade 2>>"${ERRFILE}"
printf "Rebooting cloudserver ... \n" 1>>"${LOGFILE}"
shutdown -r now 2>>"${ERRFILE}"

Our last example uses the exec command and FDs along with trap and custom bash functions:

#!/bin/bash
# Send both stdout/stderr to a /root/aws-ec2-debian.log file
# Works with Ubuntu Linux too.
# Use exec for FD and trap it using the trap
# See bash man page for more info
# Author:  nixCraft under GPL v2.x+
# ---------------------------------------------
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>/root/aws-ec2-debian.log 2>&1
 
# log message
log(){
        local m="$@"
        echo ""
        echo "*** ${m} ***"
        echo ""
}
 
log "$(date) @ $(hostname)"
## Install stuff ##
log "Updating up all packages"
export DEBIAN_FRONTEND=noninteractive
apt-get -y clean
apt-get -y update
apt-get -y upgrade
apt-get -y --purge autoremove
 
## Update sshd config ##
log "Configuring sshd_config"
sed -i'.BAK' -e 's/PermitRootLogin yes/PermitRootLogin no/g' -e 's/#PasswordAuthentication yes/PasswordAuthentication no/g'  /etc/ssh/sshd_config
 
## Hide process from other users ##
log "Update /etc/fstab to hide processes from each other"
echo 'proc    /proc    proc    defaults,nosuid,nodev,noexec,relatime,hidepid=2     0     0' >> /etc/fstab
 
## Install LXD and stuff ##
log "Installing LXD/wireguard/vnstat and other packages on this box"
apt-get -y install lxd wireguard vnstat expect mariadb-server 
 
log "Configuring mysql with mysql_secure_installation"
SECURE_MYSQL_EXEC=$(expect -c "
set timeout 10
spawn mysql_secure_installation
expect \"Enter current password for root (enter for none):\"
send \"$MYSQL\r\"
expect \"Change the root password?\"
send \"n\r\"
expect \"Remove anonymous users?\"
send \"y\r\"
expect \"Disallow root login remotely?\"
send \"y\r\"
expect \"Remove test database and access to it?\"
send \"y\r\"
expect \"Reload privilege tables now?\"
send \"y\r\"
expect eof
")
 
# log to file #
echo "   $SECURE_MYSQL_EXEC   "
# We no longer need expect 
apt-get -y remove expect
 
# Reboot the EC2 VM
log "END: Rebooting requested @ $(date) by $(hostname)"
reboot
WANT BOTH STDERR AND STDOUT TO THE TERMINAL AND A LOG FILE TOO?

Try the tee command as follows:
command1 2>&1 | tee filename
Here is how to use it insider shell script too:

#!/usr/bin/env bash
{
   command1
   command2 | do_something
} 2>&1 | tee /tmp/outputs.log
Conclusion

In this quick tutorial, you learned about three file descriptors, stdin, stdout, and stderr. We can use these Bash descriptors to redirect stdout/stderr to a file or vice versa. See bash man page here :

Operator                                 Description                                              Example
command > filename                       Redirect stdout to file "filename."                      date > output.txt
command >> filename                      Redirect and append stdout to file "filename."           ls -l >> dirs.txt
command 2> filename                      Redirect stderr to file "filename."                      du -ch /snaps/ 2> space.txt
command 2>> filename                     Redirect and append stderr to file "filename."           awk '{ print $4}' input.txt 2>> data.txt
command &> filename
  (same as: command > filename 2>&1)     Redirect both stdout and stderr to file "filename."      grep -R foo /etc/ &> out.txt
command &>> filename
  (same as: command >> filename 2>&1)    Append both stdout and stderr to file "filename."        whois domain &>> log.txt

Vivek Gite is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting.

  1. Matt Kukowski says: January 29, 2014 at 6:33 pm

    In pre-bash4 days you HAD to do it this way:

    cat file > file.txt 2>&1

    now with bash 4 and greater versions you can still do it the old way but

    cat file &> file.txt

    The above is bash4+. Some OLD distros may use pre-bash4, but I think they are all long gone by now. Just something to keep in mind.

  2. iamfrankenstein says: June 12, 2014 at 8:35 pm

    I really love: " command 2>&1 | tee logfile.txt "

    because tee logs everything and prints to stdout. So you still get to see everything! You can even combine sudo to downgrade to a log user account and add date's subject and store it in a default log directory :)

[Jul 05, 2020] Learn Bash the Hard Way by Ian Miell [Leanpub PDF-iPad-Kindle]

Highly recommended!
Jul 05, 2020 | leanpub.com


skeptic
5.0 out of 5 stars Reviewed in the United States on July 2, 2020

A short (160 pages) book that covers some difficult aspects of bash needed to customize bash env.

Whether we want it or not, bash is the shell you face in Linux, and unfortunately, it is often misunderstood and misused. Issues related to creating your bash environment are not well addressed in existing books. This book fills the gap.

Few authors understand that bash is a complex, non-orthogonal language operating in a complex Linux environment. To make things worse, bash is an evolution of Unix shell and is a rather old language with warts and all. Using it properly as a programming language requires a serious study, not just an introduction to the basic concepts. Even issues related to customization of dotfiles are far from trivial, and you need to know quite a bit to do it properly.

At the same time, proper customization of bash environment does increase your productivity (or at least lessens the frustration of using Linux on the command line ;-)

The author covered the most important concepts related to this task, such as bash history, functions, variables, environment inheritance, etc. It is really sad to watch like the majorly of Linux users do not use these opportunities and forever remain on the "level zero" using default dotfiles with bare minimum customization.

This book contains some valuable tips even for a seasoned sysadmin (for example, the use of !& in pipes), and as such, is worth at least double the suggested price. It allows you to intelligently customize your bash environment after reading just 160 pages and doing the suggested exercises.

Contents:

[Jul 04, 2020] Eleven bash Tips You Might Want to Know by Ian Miell

Highly recommended!
Notable quotes:
"... Material here based on material from my book Learn Bash the Hard Way . Free preview available here . ..."
"... natively in bash ..."
Jul 04, 2020 | zwischenzugs.com

Here are some tips that might help you be more productive with bash.

1) ^x^y^

A gem I use all the time.

Ever typed anything like this?

$ grp somestring somefile
-bash: grp: command not found

Sigh. Hit 'up', 'left' until at the 'p' and type 'e' and return.

Or do this:

$ ^rp^rep^
grep 'somestring' somefile
$

One subtlety you may want to note though is:

$ grp rp somefile
$ ^rp^rep^
$ grep rp somefile

If you wanted rep to be searched for, then you'll need to dig into the man page and use a more powerful history command:

$ grp rp somefile
$ !!:gs/rp/rep
grep rep somefile
$

... ... ...


Material here based on material from my book
Learn Bash the Hard Way .
Free preview available here .


3) shopt vs set

This one bothered me for a while.

What's the difference between set and shopt ?

We saw set before, but shopt looks very similar. Just inputting shopt shows a bunch of options:

$ shopt
cdable_vars    off
cdspell        on
checkhash      off
checkwinsize   on
cmdhist        on
compat31       off
dotglob        off

I found a set of answers here . Essentially, it looks like it's a consequence of bash (and other shells) being built on sh, with shopt added later as another way to set extra shell options. But I'm still unsure; if you know the answer, let me know.
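
To see the two mechanisms side by side (a small illustration added here; both are standard bash builtins, and the options chosen are just examples):

# set -o toggles the older, sh-inherited options
$ set -o noclobber     # refuse to overwrite existing files with >
$ set +o noclobber     # switch it back off

# shopt toggles bash-specific options
$ shopt -s dotglob     # make * match dotfiles too
$ shopt -u dotglob     # unset it again
$ shopt dotglob        # query a single option
dotglob        off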

4) Here Docs and Here Strings

'Here docs' are files created inline in the shell.

The 'trick' is simple. Define a closing word, and the lines between that word and when it appears alone on a line become a file.

Type this:

$ cat > afile << SOMEENDSTRING
> here is a doc
> it has three lines
> SOMEENDSTRING alone on a line will save the doc
> SOMEENDSTRING
$ cat afile
here is a doc
it has three lines
SOMEENDSTRING alone on a line will save the doc

Notice that:

Lesser known is the 'here string':

$ cat > asd <<< 'This file has one line'
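
A common use of the here string (my addition, not from the article) is feeding a variable or literal to read without a pipeline:

$ read -r first rest <<< 'This file has one line'
$ echo $first
This
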
5) String Variable Manipulation

You may have written code like this before, where you use tools like sed to manipulate strings:

$ VAR='HEADERMy voice is my passwordFOOTER'
$ PASS="$(echo $VAR | sed 's/^HEADER\(.*\)FOOTER/\1/')"
$ echo $PASS

But you may not be aware that this is possible natively in bash .

This means that you can dispense with lots of sed and awk shenanigans.

One way to rewrite the above is:

$ VAR='HEADERMy voice is my passwordFOOTER'
$ PASS="${VAR#HEADER}"
$ PASS="${PASS%FOOTER}"
$ echo $PASS

The second method is twice as fast as the first on my machine. And (to my surprise), it was roughly the same speed as a similar python script .

If you want to use glob patterns that are greedy (see globbing here ) then you double up:

$ VAR='HEADERMy voice is my passwordFOOTER'
$ echo ${VAR##HEADER*}
$ echo ${VAR%%*FOOTER}
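
In the same family (not covered in the excerpt above) is bash's pattern substitution, ${VAR/pattern/replacement}; a small sketch:

$ VAR='HEADERMy voice is my passwordFOOTER'
$ echo ${VAR/HEADER/}        # remove the first match
My voice is my passwordFOOTER
$ echo ${VAR//o/0}           # double slash replaces every match
HEADERMy v0ice is my passw0rdFOOTER
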
6) Variable Defaults

These are very handy when you're knocking up scripts quickly.

If you have a variable that's not set, you can 'default' them by using this. Create a file called default.sh with these contents

#!/bin/bash
FIRST_ARG="${1:-no_first_arg}"
SECOND_ARG="${2:-no_second_arg}"
THIRD_ARG="${3:-no_third_arg}"
echo ${FIRST_ARG}
echo ${SECOND_ARG}
echo ${THIRD_ARG}

Now run chmod +x default.sh and run the script with ./default.sh first second .

Observe how the third argument's default has been assigned, but not the first two.

You can also assign directly with ${VAR:=defaultval} (equals sign, not dash), but note that this won't work with positional variables in scripts or functions. Try changing the above script to see how it fails.
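
For example, with an ordinary (non-positional) variable (an added illustration):

$ unset MYVAR
$ echo ${MYVAR:=fallback}    # expands to the default AND assigns it
fallback
$ echo $MYVAR
fallback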

7) Traps

The trap built-in can be used to 'catch' when a signal is sent to your script.

Here's an example I use in my own cheapci script:

function cleanup() {
    rm -rf "${BUILD_DIR}"
    rm -f "${LOCK_FILE}"
    # get rid of /tmp detritus, leaving anything accessed 2 days ago+
    find "${BUILD_DIR_BASE}"/* -type d -atime +1 -exec rm -rf {} +
    echo "cleanup done"                                                                                                                          
} 
trap cleanup TERM INT QUIT

Any attempt to CTRL-C , CTRL-\ , or terminate the program using the TERM signal will result in cleanup being called first.

Be aware:

  • Trap logic can get very tricky (eg handling signal race conditions)
  • The KILL signal can't be trapped in this way

But mostly I've used this for 'cleanups' like the above, which serve their purpose.

8) Shell Variables

It's well worth getting to know the standard shell variables available to you . Here are some of my favourites:

RANDOM

Don't rely on this for your cryptography stack, but you can generate random numbers eg to create temporary files in scripts:

$ echo ${RANDOM}
16313
$ # Not enough digits?
$ echo ${RANDOM}${RANDOM}
113610703
$ NEWFILE=/tmp/newfile_${RANDOM}
$ touch $NEWFILE
REPLY

No need to give a variable name for read

$ read
my input
$ echo ${REPLY}
LINENO and SECONDS

Handy for debugging

$ echo ${LINENO}
115
$ echo ${SECONDS}; sleep 1; echo ${SECONDS}; echo $LINENO
174380
174381
116

Note that there are two 'lines' above, even though you used ; to separate the commands.

TMOUT

You can timeout reads, which can be really handy in some scripts

#!/bin/bash
TMOUT=5
echo You have 5 seconds to respond...
read
echo ${REPLY:-noreply}

... ... ...

10) Associative Arrays

Talking of moving to other languages, a rule of thumb I use is that if I need arrays then I drop bash to go to python (I even created a Docker container for a tool to help with this here ).

What I didn't know until I read up on it was that you can have associative arrays in bash.

Type this out for a demo:

$ declare -A MYAA=([one]=1 [two]=2 [three]=3)
$ MYAA[one]="1"
$ MYAA[two]="2"
$ echo $MYAA
$ echo ${MYAA[one]}
$ MYAA[one]="1"
$ WANT=two
$ echo ${MYAA[$WANT]}

Note that this is only available in bashes 4.x+.
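
To loop over the keys and values (my addition; the same bash 4.x+ requirement applies, and iteration order is not guaranteed):

$ for key in "${!MYAA[@]}"; do echo "$key -> ${MYAA[$key]}"; done
one -> 1
two -> 2
three -> 3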

... ... ...

[Jul 02, 2020] 7 Bash history shortcuts you will actually use by Ian Miell

Highly recommended!
Notable quotes:
"... The "last argument" one: !$ ..."
"... The " n th argument" one: !:2 ..."
"... The "all the arguments": !* ..."
"... The "last but n " : !-2:$ ..."
"... The "get me the folder" one: !$:h ..."
"... I use "!*" for "all arguments". It doesn't have the flexibility of your approach but it's faster for my most common need. ..."
"... Provided that your shell is readline-enabled, I find it much easier to use the arrow keys and modifiers to navigate through history than type !:1 (or having to remeber what it means). ..."
Oct 02, 2019 | opensource.com

7 Bash history shortcuts you will actually use. Save time on the command line with these essential Bash shortcuts.

Most guides to Bash history shortcuts exhaustively list every single one available. The problem with that is I would use a shortcut once, then glaze over as I tried out all the possibilities. Then I'd move onto my working day and completely forget them, retaining only the well-known !! trick I learned when I first started using Bash.

So most of them were never committed to memory.

This article outlines the shortcuts I actually use every day. It is based on some of the contents of my book, Learn Bash the Hard Way (you can read a preview of it to learn more).

When people see me use these shortcuts, they often ask me, "What did you do there!?" There's minimal effort or intelligence required, but to really learn them, I recommend using one each day for a week, then moving to the next one. It's worth taking your time to get them under your fingers, as the time you save will be significant in the long run.

1. The "last argument" one: !$

If you only take one shortcut from this article, make it this one. It substitutes in the last argument of the last command into your line.

Consider this scenario:

$ mv /path/to/wrongfile /some/other/place
mv: cannot stat '/path/to/wrongfile': No such file or directory

Ach, I put the wrongfile filename in my command. I should have put rightfile instead.

You might decide to retype the last command and replace wrongfile with rightfile completely. Instead, you can type:

$ mv /path/to/rightfile !$
mv /path/to/rightfile /some/other/place

and the command will work.

There are other ways to achieve the same thing in Bash with shortcuts, but this trick of reusing the last argument of the last command is one I use the most.

2. The " n th argument" one: !:2

Ever done anything like this?

$ tar -cvf afolder afolder.tar
tar: failed to open

Like many others, I get the arguments to tar (and ln ) wrong more often than I would like to admit.


When you mix up arguments like that, you can run:

$ !:0 !:1 !:3 !:2
tar -cvf afolder.tar afolder

and your reputation will be saved.

The last command's items are zero-indexed and can be substituted in with the number after the !: .

Obviously, you can also use this to reuse specific arguments from the last command rather than all of them.

3. The "all the arguments": !*

Imagine I run a command like:

$ grep '(ping|pong)' afile

The arguments are correct; however, I want to match ping or pong in a file, but I used grep rather than egrep .

I start typing egrep, but I don't want to retype the other arguments. So I can use the !:1-$ shortcut to ask for all the arguments to the previous command from the second one (remember they're zero-indexed) to the last one (represented by the $ sign).

$ egrep !:1-$
egrep '(ping|pong)' afile
ping

You don't need to pick 1-$ ; you can pick a subset like 1-2 or 3-9 (if you had that many arguments in the previous command).
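
For completeness, the !* form in this section's title grabs all of the previous command's arguments in one go; it behaves like !:1-$. Reusing the grep example above, the fix could also be typed like this:

$ egrep !*
egrep '(ping|pong)' afile
ping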

4. The "last but n " : !-2:$

The shortcuts above are great when I know immediately how to correct my last command, but often I run commands after the original one, which means that the last command is no longer the one I want to reference.

For example, using the mv example from before, if I follow up my mistake with an ls check of the folder's contents:

$ mv /path/to/wrongfile /some/other/place
mv: cannot stat '/path/to/wrongfile': No such file or directory
$ ls /path/to/
rightfile

I can no longer use the !$ shortcut.

In these cases, I can insert -n (where n is the number of commands to go back in the history) between the ! and the :$ to grab the last argument from an older command:

$ mv /path/to/rightfile !-2:$
mv /path/to/rightfile /some/other/place

Again, once you learn it, you may be surprised at how often you need it.

5. The "get me the folder" one: !$:h

This one looks less promising on the face of it, but I use it dozens of times daily.

Imagine I run a command like this:

$ tar -cvf system.tar /etc/system
tar: /etc/system: Cannot stat: No such file or directory
tar: Error exit delayed from previous errors.

The first thing I might want to do is go to the /etc folder to see what's in there and work out what I've done wrong.

I can do this at a stroke with:

$ cd !$:h
cd /etc

This one says: "Get the last argument to the last command ( /etc/system ) and take off its last filename component, leaving only the /etc ."

6. The "the current line" one: !#:1

For years, I occasionally wondered if I could reference an argument on the current line before finally looking it up and learning it. I wish I'd done so a long time ago. I most commonly use it to make backup files:

$ cp /path/to/some/file !#:1.bak
cp /path/to/some/file /path/to/some/file.bak

but once it's under the fingers, it can be a very quick alternative to retyping the full path.

7. The "search and replace" one: !!:gs

This one searches across the referenced command and replaces what's between the first pair of / characters with what's between the second pair.

Say I want to tell the world that my s key does not work and outputs f instead:

$ echo my f key doef not work
my f key doef not work

Then I realize that I was just hitting the f key by accident. To replace all the f s with s es, I can type:

$ !!:gs/f/s/
echo my s key does not work
my s key does not work

It doesn't work only on single characters; I can replace words or sentences, too:

$ !!:gs/does/did/
echo my s key did not work
my s key did not work

Test them out

Just to show you how these shortcuts can be combined, can you work out what these toenail clippings will output?

$ ping !#:0:gs/i/o
$ vi /tmp/!:0.txt
$ ls !$:h
$ cd !-2:h
$ touch !$!-3:$ !! !$.txt
$ cat !:1-$

Conclusion

Bash can be an elegant source of shortcuts for the day-to-day command-line user. While there are thousands of tips and tricks to learn, these are my favorites that I frequently put to use.

If you want to dive even deeper into all that Bash can teach you, pick up my book, Learn Bash the hard way or check out my online course, Master the Bash shell .


This article was originally posted on Ian's blog, Zwischenzugs.com , and is reused with permission.

Orr, August 25, 2019 at 10:39 pm

BTW – you inspired me to try and understand how to repeat the nth command entered on command line. For example I type 'ls' and then accidentally type 'clear'. !! will retype clear again but I wanted to retype ls instead using a shortcut.
Bash doesn't accept ':' so !:2 didn't work. !-2 did however, thank you!

Dima August 26, 2019 at 7:40 am

Nice article! Just one more cool and often-used trick: !vi re-runs the last vi command with its arguments.

cbarrick on 03 Oct 2019

Your "current line" example is too contrived. Your example is copying to a backup like this:

$ cp /path/to/some/file !#:1.bak

But a better way to write that is with filename generation:

$ cp /path/to/some/file{,.bak}

That's not a history expansion though... I'm not sure I can come up with a good reason to use `!#:1`.

Darryl Martin August 26, 2019 at 4:41 pm

I seldom get anything out of these "bash commands you didn't know" articles, but you've got some great tips here. I'm writing several down and sticking them on my terminal for reference.

A couple additions I'm sure you know.

  1. I use "!*" for "all arguments". It doesn't have the flexibility of your approach but it's faster for my most common need.
  2. I recently started using Alt-. as a substitute for "!$" to get the last argument. It expands the argument on the line, allowing me to modify it if necessary.

Ricardo J. Barberis on 06 Oct 2019

The problem with bash's history shorcuts for me is... that I never had the need to learn them.

Provided that your shell is readline-enabled, I find it much easier to use the arrow keys and modifiers to navigate through history than type !:1 (or having to remember what it means).

Examples:

Ctrl+R for a Reverse search
Ctrl+A to move to the beginning of the line (Home key also)
Ctrl+E to move to the End of the line (End key also)
Ctrl+K to Kill (delete) text from the cursor to the end of the line
Ctrl+U to kill text from the cursor to the beginning of the line
Alt+F to move Forward one word (Ctrl+Right arrow also)
Alt+B to move Backward one word (Ctrl+Left arrow also)
etc.

YMMV of course.

[Jul 02, 2020] Some Relatively Obscure Bash Tips zwischenzugs

Jul 02, 2020 | zwischenzugs.com

2) |&

You may already be familiar with 2>&1 , which redirects standard error to standard output, but until I stumbled on it in the manual, I had no idea that you can pipe both standard output and standard error into the next stage of the pipeline like this:

if doesnotexist |& grep 'command not found' >/dev/null
then
  echo oops
fi
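
A common everyday use of |& (it is shorthand for 2>&1 | and needs bash 4 or later) is capturing everything a noisy command prints, errors included; make here is just an example of such a command:

make |& tee build.log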
3) $''

This construct allows you to specify specific bytes in scripts without fear of triggering some kind of encoding problem. Here's a command that will recursively grep through files for the UK currency sign ('£'), specified by its UTF-8 bytes in hexadecimal:

grep -r $'\xc2\xa3' *

You can also use octal:

grep -r $'\302\243' *
4) HISTIGNORE

If you are concerned about security, and ever type in commands that might have sensitive data in them, then this one may be of use.

This environment variable prevents commands matching the patterns you list from being saved to your history when you type them in. The patterns are separated by colons:

HISTIGNORE="ls *:man *:history:clear:AWS_KEY*"

You have to specify the whole line, so a glob character may be needed if you want to exclude commands and their arguments or flags.
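
A quick way to convince yourself it works is to set the variable in an interactive shell and then check what actually lands in the history, along these lines:

$ HISTIGNORE="ls *:history*"
$ ls /tmp            # matches "ls *", so it is not saved
$ echo hello         # saved as normal
$ history | tail -3  # "ls /tmp" should be missing from the list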

5) fc

If readline key bindings aren't under your fingers, then this one may come in handy.

It calls up the last command you ran, and places it into your preferred editor (specified by the EDITOR variable). Once edited, it re-runs the command.
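
The same builtin can also list and re-run history entries without opening an editor; a few forms worth trying (fc -l and fc -s are standard bash options):

$ fc             # open the last command in your editor; it runs when you save and exit
$ fc -l -5       # list roughly the last five history entries without editing them
$ fc -s foo=bar  # re-run the last command, replacing "foo" with "bar"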

6) ((i++))

If you can't be bothered with faffing around with variables in bash with the $[] construct, you can use the C-style compound command.

So, instead of:

A=1
A=$[$A+1]
echo $A

you can do:

A=1
((A++))
echo $A

which, especially with more complex calculations, might be easier on the eye.

7) caller

Another bash builtin, caller reports where the currently executing function or sourced script was called from: with a frame number as argument it prints the line number, function name, and source file of that frame in the call stack.

SHLVL is a related shell variable, which gives the nesting depth of the current shell (it is incremented each time a new shell instance starts).

This can be used to create stack traces for more complex bash scripts.

Here's a die function, adapted from the bash hackers' wiki that gives a stack trace up through the calling frames:

#!/bin/bash
die() {
  local frame=0
  ((FRAMELEVEL=SHLVL - frame))
  echo -n "${FRAMELEVEL}: "
  while caller $frame; do
    ((frame++));
    ((FRAMELEVEL=SHLVL - frame))
    if [[ ${FRAMELEVEL} -gt -1 ]]
    then
      echo -n "${FRAMELEVEL}: "
    fi
  done
  echo "$*"
  exit 1
}

which outputs:

3: 17 f1 ./caller.sh
2: 18 f2 ./caller.sh
1: 19 f3 ./caller.sh
0: 20 main ./caller.sh
*** an error occurred ***
8) /dev/tcp/host/port

This one can be particularly handy if you find yourself on a container running within a Kubernetes cluster service mesh without any network tools (a frustratingly common experience).

Bash provides you with some virtual files which, when referenced, can create socket connections to other servers.

This snippet, for example, makes a web request to a site and returns the output.

exec 9<>/dev/tcp/brvtsdflnxhkzcmw.neverssl.com/80
echo -e "GET /online HTTP/1.1\r\nHost: brvtsdflnxhkzcmw.neverssl.com\r\n\r\n" >&9
cat <&9

The first line opens up file descriptor 9 to the host brvtsdflnxhkzcmw.neverssl.com on port 80 for reading and writing. Line two sends the raw HTTP request to that socket connection's file descriptor. The final line retrieves the response.

Obviously, this doesn't handle SSL for you, so its use is limited now that pretty much everyone is running on https, but when running from application containers within a service mesh it can still prove invaluable, as requests there are often initiated over plain HTTP.
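
Even when HTTPS rules out fetching pages this way, /dev/tcp is still handy as a poor man's port check when nc or telnet aren't installed. A minimal sketch (myhost and 5432 are just placeholders):

# Pure-bash TCP probe: the subshell tries to open fd 3 to myhost:5432;
# if the connection fails, the redirection fails and the subshell exits non-zero.
if (exec 3<>/dev/tcp/myhost/5432) 2>/dev/null
then
  echo "port 5432 is open"
else
  echo "could not connect to myhost:5432"
fi

Note there is no timeout here, so a silently dropped connection can hang for a while; wrapping the probe in timeout (from coreutils) fixes that where it is available.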

9) Co-processes

Since version 4 of bash it has offered the capability to run named coprocesses.

It seems to be particularly well-suited to managing the inputs and outputs to other processes in a fine-grained way. Here's an annotated and trivial example:

coproc testproc (
  i=1
  while true
  do
    echo "iteration:${i}"
    ((i++))
    read -r aline
    echo "${aline}"
  done
)

This sets up the coprocess as a subshell with the name testproc .

Within the subshell, there's a never-ending while loop that counts its own iterations with the i variable. It outputs two lines: the iteration number, and a line read in from standard input.

After creating the coprocess, bash sets up an array with that name with the file descriptor numbers for the standard input and standard output. So this:

echo "${testproc[@]}"

in my terminal outputs:

63 60

Bash also sets up a variable with the process identifier for the coprocess, which you can see by echoing it:

echo "${testproc_PID}"

You can now input data to the standard input of this coprocess at will like this:

echo input1 >&"${testproc[1]}"

In this case, the command resolves to: echo input1 >&60 , and the >&[INTEGER] construct ensures the redirection goes to the coprocess's standard input.

Now you can read the output of the coprocess's two lines in a similar way, like this:

read -r output1a <&"${testproc[0]}"
read -r output1b <&"${testproc[0]}"

You might use this to create an expect -like script if you were so inclined, but it could be generally useful if you want to manage inputs and outputs. Named pipes are another way to achieve a similar result.

Here's a complete listing for those who want to cut and paste:

#!/bin/bash
coproc testproc (
  i=1
  while true
  do
    echo "iteration:${i}"
    ((i++))
    read -r aline
    echo "${aline}"
  done
)
echo "${testproc[@]}"
echo "${testproc_PID}"
echo input1 >&"${testproc[1]}"
read -r output1a <&"${testproc[0]}"
read -r output1b <&"${testproc[0]}"
echo "${output1a}"
echo "${output1b}"
echo input2 >&"${testproc[1]}"
read -r output2a <&"${testproc[0]}"
read -r output2b <&"${testproc[0]}"
echo "${output2a}"
echo "${output2b}"

[Jul 02, 2020] Associative arrays in Bash by Seth Kenlon

Apr 02, 2020 | opensource.com

Originally from: Get started with Bash scripting for sysadmins - Opensource.com

Most shells offer the ability to create, manipulate, and query indexed arrays. In plain English, an indexed array is a list of things prefixed with a number. This list of things, along with their assigned number, is conveniently wrapped up in a single variable, which makes it easy to "carry" it around in your code.

Bash, however, includes the ability to create associative arrays and treats these arrays the same as any other array. An associative array lets you create lists of key and value pairs, instead of just numbered values.

The nice thing about associative arrays is that keys can be arbitrary:

$ declare -A userdata
$ userdata[name]=seth
$ userdata[pass]=8eab07eb620533b083f241ec4e6b9724
$ userdata[login]=`date --utc +%s`

Query any key:

$ echo " ${userdata[name]} "
seth
$ echo " ${userdata[login]} "
1583362192

Most of the usual array operations you'd expect from an array are available.
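
For instance, continuing with the userdata array above, you can list all keys with ${!userdata[@]}, all values with ${userdata[@]}, and drop an entry with unset (the order of keys is unspecified, so your output order may differ):

$ echo "${!userdata[@]}"
name pass login
$ echo "${userdata[@]}"
seth 8eab07eb620533b083f241ec4e6b9724 1583362192
$ unset "userdata[pass]"
$ echo "${!userdata[@]}"
name login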


[Jul 02, 2020] DevOps is a Myth Effective Software Delivery Enablement

Jul 02, 2020 | otomato.link

DevOps is a Myth

(Practitioner's Reflections on The DevOps Handbook)

The Holy Wars of DevOps

Yet another argument explodes online around the 'true nature of DevOps', around 'what DevOps really means' or around 'what DevOps is not'. At each conference I attend we talk about DevOps culture, DevOps mindset and DevOps ways. All confirming one single truth: DevOps is a myth.

Now don't get me wrong – in no way is this a negation of its validity or importance. As Y.N. Harari shows so eloquently in his book 'Sapiens' – myths were the forming power in the development of humankind. It is in fact our ability to collectively believe in these non-objective, imagined realities that allows us to collaborate at large scale, to coordinate our actions, to build pyramids, temples, cities and roads.

There's a Handbook!

I am writing this while finishing the exceptionally well written "DevOps Handbook" . If you really want to know what stands behind the all-too-often misinterpreted buzzword – you better read this cover-to-cover. It presents an almost-no-bullshit deep dive into why, how and what in DevOps. And it comes from the folks who invented the term and have been busy developing its main concepts over the last 7 years.


Now notice – I'm only saying you should read the "DevOps Handbook" if you want to understand what DevOps is about. After finishing it I'm pretty sure you won't have any interest in participating in petty arguments along the lines of 'is DevOps about automation or not?'. But I'm not saying you should read the handbook if you want to know how to improve and speed up your software manufacturing and delivery processes. And neither if you want to optimize your IT organization for innovation and continuous improvement.

Because the main realization that you, as a smart reader, will arrive at – is just that there is no such thing as DevOps. DevOps is a myth .

So What's The Story?

It all basically comes down to this: some IT companies achieve better results than others . Better revenues, higher customer and employee satisfaction, faster value delivery, higher quality. There's no one-size-fits-all formula, there is no magic bullet – but we can learn from these high performers and try to apply certain tools and practices in order to improve the way we work and achieve similar or better results. These tools and processes come from a myriad of management theories and practices. Moreover – they are constantly evolving, so we need to always be learning. But at least we have the promise of better life. That is if we get it all right: the people, the architecture, the processes, the mindset, the org structure, etc.

So it's not about certain tools, cause the tools will change. And it's not about certain practices – because we're creative and frameworks come and go. I don't see too many folks using Kanban boards 10 years from now. (In the same way only the laggards use Gantt charts today) And then the speakers at the next fancy conference will tell you it's mainly about culture. And you know what culture is? It's just a story, or rather a collection of stories that a group of people share. Stories that tell us something about the world and about ourselves. Stories that have only a very relative connection to the material world. Stories that can easily be proven as myths by another group of folks who believe them to be wrong.

But Isn't It True?

Anybody who's studied management theories knows how the approaches have changed since the beginning of the last century. From Taylor's scientific management and down to McGregor's X&Y theory they've all had their followers. Managers who've applied them and swore getting great results thanks to them. And yet most of these theories have been proven wrong by their successors.

In the same way we see this happening with DevOps and Agile. Agile was all the buzz since its inception in 2001. Teams were moving to Scrum, then Kanban, now SAFe and LeSS. But Agile didn't deliver on its promise of better life. Or rather – it became so commonplace that it lost its edge. Without the hype, we now realize it has its downsides. And we now hope that maybe this new DevOps thing will make us happy.

You may say that the world is changing fast – that's why we now need new approaches! And I agree – the technology, the globalization, the flow of information – they all change the stories we live in. But this also means that whatever is working for someone else today won't probably work for you tomorrow – because the world will change yet again.

Which means that the DevOps Handbook – while a great overview and historical document and a source of inspiration – should not be taken as a guide to action. It's just another step towards establishing the DevOps myth.

And that takes us back to where we started – myths and stories aren't bad in themselves. They help us collaborate by providing a common semantic system and shared goals. But they only work while we believe in them and until a new myth comes around – one powerful enough to grab our attention.

Your Own DevOps Story

So if we agree that DevOps is just another myth, what are we left with? What do we at Otomato and other DevOps consultants and vendors have to sell? Well, it's the same thing we've been building even before the DevOps buzz: effective software delivery and IT management. Based on tools and processes, automation and effective communication. Relying on common sense and on being experts in whatever myth is currently believed to be true.

As I keep saying – culture is a story you tell. And we make sure to be experts in both the storytelling and the actual tooling and architecture. If you're currently looking at creating a DevOps transformation or simply want to optimize your software delivery – give us a call. We'll help to build your authentic DevOps story, to train your staff and to architect your pipeline based on practice, skills and your organization's actual needs. Not based on myths that other people tell.

[Jul 02, 2020] Import functions and variables into Bash with the source command by Seth Kenlon

Jun 12, 2020 | opensource.com
Source is like a Python import or a Java include. Learn it to expand your Bash prowess.

When you log into a Linux shell, you inherit a specific working environment. An environment , in the context of a shell, means that there are certain variables already set for you, which ensures your commands work as intended. For instance, the PATH environment variable defines where your shell looks for commands. Without it, nearly everything you try to do in Bash would fail with a command not found error. Your environment, while mostly invisible to you as you go about your everyday tasks, is vitally important.

There are many ways to affect your shell environment. You can make modifications in configuration files, such as ~/.bashrc and ~/.profile , you can run services at startup, and you can create your own custom commands or script your own Bash functions .

Add to your environment with source

Bash (along with some other shells) has a built-in command called source . And here's where it can get confusing: source performs the same function as the command . (yes, that's but a single dot), and it's not the same source as the Tcl command (which may come up on your screen if you type man source ). The built-in source command isn't in your PATH at all, in fact. It's a command that comes included as a part of Bash, and to get further information about it, you can type help source .

The . command is POSIX -compliant. The source command is not defined by POSIX but is interchangeable with the . command.

According to Bash's help, the source command executes a file in your current shell. The clause "in your current shell" is significant, because it means it doesn't launch a sub-shell; therefore, whatever you execute with source happens within and affects your current environment.

Before exploring how source can affect your environment, try source on a test file to ensure that it executes code as expected. First, create a simple Bash script and save it as a file called hello.sh :

#!/usr/bin/env bash
echo "hello world"

Using source , you can run this script even without setting the executable bit:

$ source hello.sh
hello world

You can also use the built-in . command for the same results:

$ . hello.sh
hello world

The source and . commands successfully execute the contents of the test file.

Set variables and import functions

You can use source to "import" a file into your shell environment, just as you might use the include keyword in C or C++ to reference a library or the import keyword in Python to bring in a module. This is one of the most common uses for source , and it's a common default inclusion in .bashrc files to source a file called .bash_aliases so that any custom aliases you define get imported into your environment when you log in.
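
The snippet that does this in many stock .bashrc files looks roughly like the following; the guard simply ensures nothing breaks if the file doesn't exist:

# Source user-defined aliases, if the file is present.
if [ -f ~/.bash_aliases ]; then
    . ~/.bash_aliases
fi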

Here's an example of importing a Bash function. First, create a function in a file called myfunctions . This prints your public IP address and your local IP address:

function myip () {
  curl http://icanhazip.com

  ip addr | grep inet $IP | \
    cut -d "/" -f 1 | \
    grep -v 127\.0 | \
    grep -v \:\:1 | \
    awk '{$1=$1};1'
}

Import the function into your shell:

$ source myfunctions

Test your new function:

$ myip
93.184.216.34
inet 192.168.0.23
inet6 fbd4:e85f:49c:2121:ce12:ef79:0e77:59d1
inet 10.8.42.38

Search for source

When you use source in Bash, it searches your current directory for the file you reference. This doesn't happen in all shells, so check your documentation if you're not using Bash.

If Bash can't find the file to execute, it searches your PATH instead. Again, this isn't the default for all shells, so check your documentation if you're not using Bash.

These are both nice convenience features in Bash. This behavior is surprisingly powerful because it allows you to store common functions in a centralized location on your drive and then treat your environment like an integrated development environment (IDE). You don't have to worry about where your functions are stored, because you know they're in your local equivalent of /usr/include , so no matter where you are when you source them, Bash finds them.

For instance, you could create a directory called ~/.local/include as a storage area for common functions and then put this block of code into your .bashrc file:

for i in $HOME/.local/include/*; do
   source $i
done

This "imports" any file containing custom functions in ~/.local/include into your shell environment.

Bash is the only shell that searches both the current directory and your PATH when you use either the source or the . command.

Using source for open source

Using source or . to execute files can be a convenient way to affect your environment while keeping your alterations modular. The next time you're thinking of copying and pasting big blocks of code into your .bashrc file, consider placing related functions or groups of aliases into dedicated files, and then use source to ingest them.


[Jul 01, 2020] Use curl to test an application's endpoint or connectivity to an upstream service endpoint

Notable quotes:
"... The -I option shows the header information and the -s option silences the response body. Checking the endpoint of your database from your local desktop: ..."
Jul 01, 2020 | opensource.com

curl

curl transfers a URL. Use this command to test an application's endpoint or connectivity to an upstream service endpoint. curl can be useful for determining if your application can reach another service, such as a database, or checking if your service is healthy.

As an example, imagine your application throws an HTTP 500 error indicating it can't reach a MongoDB database:

$ curl -I -s myapplication:5000
HTTP/1.0 500 INTERNAL SERVER ERROR

The -I option shows the header information and the -s option silences the response body. Checking the endpoint of your database from your local desktop:

$ curl -I -s database:27017
HTTP/1.0 200 OK

So what could be the problem? Check if your application can get to other places besides the database from the application host:

$ curl -I -s https://opensource.com
HTTP/1.1 200 OK

That seems to be okay. Now try to reach the database from the application host. Your application is using the database's hostname, so try that first:

$ curl database:27017
curl: (6) Couldn't resolve host 'database'

This indicates that your application cannot resolve the database because the URL of the database is unavailable or the host (container or VM) does not have a nameserver it can use to resolve the hostname.
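
Two quick follow-up checks from the same host can narrow this down further; they use getent, which is present on most glibc-based systems, and reuse the database hostname from the example above:

$ cat /etc/resolv.conf      # is any nameserver configured for this host?
$ getent hosts database     # can the resolver actually map the hostname?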

[Jul 01, 2020] Stupid Bash tricks- History, reusing arguments, files and directories, fu