Softpanorama

May the source be with you, but remember the KISS principle ;-)
(slightly skeptical) Educational society promoting "Back to basics" movement against IT overcomplexity and  bastardization of classic Unix

Slightly Skeptical View on Enterprise Unix Administration



The KISS rule can be expanded as: Keep It Simple, Sysadmin ;-)

This page was written as a protest against overcomplexity and the bizarre data center atmosphere typical of "semi-outsourced" or fully outsourced datacenters ;-). Unix/Linux sysadmins are being killed by the overcomplexity of the environment, by new "for profit" technocults like DevOps, and by outsourcing. Large swaths of Linux knowledge (and many excellent books) were made obsolete by Red Hat with the introduction of systemd. Especially affected are the older, most experienced members of the team, who possess a unique store of organizational knowledge and whose careers allowed them to watch the development of Linux almost from version 0.92.

System administration is still a unique area where people with the ability to program can display their creativity with relative ease and can still enjoy the "old style" atmosphere of software development, when you yourself write the specification, implement it, test the program, and then use it in daily work. This is a very exciting, unique opportunity that no DevOps can ever provide.

But the conditions are getting worse and worse. That's why an increasing number of sysadmins are far from excited about working in those positions, or outright want to quit the field (or, at least, work four days a week). And that includes sysadmins who have tremendous speed and capacity to process and learn new information. Even for them, "enough is enough." The answer is different for each individual sysadmin, but it is usually some variation on the following themes:

  1. Too rapid a pace of change, with a lot of "change for the sake of change" often serving as a smokescreen for outsourcing efforts (VMware yesterday, Azure today, Amazon cloud tomorrow, etc.)
  2. Excessive automation can be a problem. It increases the number of layers between the fundamental process and the sysadmin, and thus makes troubleshooting much harder. Moreover, it often does not produce tangible benefits in comparison with simpler tools, while dramatically increasing the complexity of the environment. See Unix Configuration Management Tools for a deeper discussion of this issue.
  3. Job insecurity due to outsourcing/offshoring -- constant pressure to cut headcount in the name of "efficiency," which in reality is more connected with the size of top brass bonuses than with anything related to the functioning of the IT datacenter. Sysadmins over 50 are an especially vulnerable category here, and if laid off they have almost no chance of getting back into the IT workforce at the previous level of salary/benefits. Often the only job they can find is at Home Depot or a similar retail outlet. See Over 50 and unemployed
  4. A back-breaking level of overcomplexity and bizarre tech decisions crippling the data center (aka crapification). A "Potemkin village culture" often prevails in the evaluation of software in large US corporations. The surface shine is more important than the substance. Marketing brochures and manuals are no different from mainstream news media stories in the level of BS they spew. IBM is especially guilty (look at how they marketed IBM Watson; as Oren Etzioni, CEO of the Allen Institute for AI, noted, "the only intelligent thing about Watson was IBM PR department [push]").
  5. Bureaucratization/fossilization of the IT environment of large companies. That includes using "Performance Reviews" (the variant of waterboarding prevalent in IT ;-) for the enforcement of management policies, priorities, whims, etc. See Office Space (1999) - IMDb for a humorous take on IT culture. That creates alienation from the company (as it should). One can think of the modern corporate data center as an organization where the administration has tremendously more power in the decision-making process and eats up more of the corporate budget, while the people who do the actual work are increasingly ignored and their share of the budget gradually shrinks. Purchasing "non-standard" software or hardware is often so complicated that it is never tried, even when the benefits are tangible.
  6. "Neoliberal austerity" (which is essentially another name for the "war on labor") -- drastic cost-cutting measures at the expense of the workforce, such as the elimination of external vendor training, crapification of benefits, limitation of business trips, and forcing the adoption of "new" products that are useless or outright harmful for the business instead of "tried and true" old ones with the same function. These are often accompanied by a new cultural obsession with "character" (as in "he/she has the right character" -- which in "Neoliberal speak" means he/she is a toothless conformist ;-), glorification of groupthink, and the intensification of surveillance.

As Charlie Schluting noted in 2010 (Enterprise Networking Planet, April 7, 2010):

What happened to the old "sysadmin" of just a few years ago? We've split what used to be the sysadmin into application teams, server teams, storage teams, and network teams. There were often at least a few people, the holders of knowledge, who knew how everything worked, and I mean everything. Every application, every piece of network gear, and how every server was configured -- these people could save a business in times of disaster.

Now look at what we've done. Knowledge is so decentralized we must invent new roles to act as liaisons between all the IT groups.

Architects now hold much of the high-level "how it works" knowledge, but without knowing how any one piece actually does work.

In organizations with more than a few hundred IT staff and developers, it becomes nearly impossible for one person to do and know everything. This movement toward specializing in individual areas seems almost natural. That, however, does not provide a free ticket for people to turn a blind eye.

Specialization

You know the story: Company installs new application, nobody understands it yet, so an expert is hired. Often, the person with a certification in using the new application only really knows how to run that application. Perhaps they aren't interested in learning anything else, because their skill is in high demand right now. And besides, everything else in the infrastructure is run by people who specialize in those elements. Everything is taken care of.

Except, how do these teams communicate when changes need to take place? Are the storage administrators teaching the Windows administrators about storage multipathing; or, worse, logging in and setting it up because it's faster for the storage gurus to do it themselves? A fundamental level of knowledge is often lacking, which makes it very difficult for teams to brainstorm about new ways to evolve IT services. The business environment has made it OK for IT staffers to specialize and only learn one thing.

If you hire someone certified in the application, operating system, or network vendor you use, that is precisely what you get. Certifications may be a nice filter to quickly identify who has direct knowledge in the area you're hiring for, but often they indicate specialization or compensation for lack of experience.

Resource Competition

Does your IT department function as a unit? Even 20-person IT shops have turf wars, so the answer is very likely, "no." As teams are split into more and more distinct operating units, grouping occurs. One IT budget gets split between all these groups. Often each group will have a manager who pitches his needs to upper management in hopes they will realize how important the team is.

The "us vs. them" mentality manifests itself at all levels, and it's reinforced by management having to define each team's worth in the form of a budget. One strategy is to illustrate a doomsday scenario. If you paint a bleak enough picture, you may get more funding -- but only if you are careful to illustrate that the failings are due to a lack of capital resources, not management or people. A manager of another group may explain that they are not receiving the correct level of service, so they need to duplicate the efforts of another group and just implement something themselves. On and on, the arguments continue.

Most often, I've seen competition between server groups result in horribly inefficient uses of hardware. For example, what happens in your organization when one team needs more server hardware? Assume that another team has five unused servers sitting in a blade chassis. Does the answer change? No, it does not. Even in test environments, sharing doesn't often happen between IT groups.

With virtualization, some aspects of resource competition get better and some remain the same. When first implemented, most groups will be running their own type of virtualization for their platform. The next step, I've most often seen, is for test servers to get virtualized. If a new group is formed to manage the virtualization infrastructure, virtual machines can be allocated to various application and server teams from a central pool and everyone is now sharing. Or, they begin sharing and then demand their own physical hardware to be isolated from others' resource hungry utilization. This is nonetheless a step in the right direction. Auto migration and guaranteed resource policies can go a long way toward making shared infrastructure, even between competing groups, a viable option.

Blamestorming

The most damaging side effect of splitting into too many distinct IT groups is the reinforcement of an "us versus them" mentality. Aside from the notion that specialization creates a lack of knowledge, blamestorming is what this article is really about. When a project is delayed, it is all too easy to blame another group. The SAN people didn't allocate storage on time, so another team was delayed. That is the timeline of the project, so all work halted until that hiccup was resolved. Having someone else to blame when things get delayed makes it all too easy to simply stop working for a while.

More related to the initial points at the beginning of this article, perhaps, is the blamestorm that happens after a system outage.

Say an ERP system becomes unresponsive a few times throughout the day. The application team says it's just slowing down, and they don't know why. The network team says everything is fine. The server team says the application is "blocking on IO," which means it's a SAN issue. The SAN team says there is nothing wrong, and other applications on the same devices are fine. You've run through nearly every team, but still without an answer. The SAN people don't have access to the application servers to help diagnose the problem. The server team doesn't even know how the application runs.

See the problem? Specialized teams are distinct and by nature adversarial. Specialized staffers often relegate themselves into a niche knowing that as long as they continue working at large enough companies, "someone else" will take care of all the other pieces.

I unfortunately don't have an answer to this problem. Maybe rotating employees between departments will help. They gain knowledge and also get to know other people, which should lessen the propensity to view them as outsiders.

The tragic part of the current environment is that it is like shifting sands. And it is not only due to the "natural process of crapification of operating systems," in which the OS gradually loses its architectural integrity. The pace of change is just too fast for mere humans to adapt to. And most of it represents "change for the sake of change," not some valuable improvement or extension of capabilities.

If you are a sysadmin who writes his own scripts, you write on a sandy beach, spending a lot of time thinking through and debugging your scripts, which raises your productivity and diminishes the number of possible errors. But the next OS version or organizational change wipes out a considerable part of your work, and you need to revise your scripts again. The tale of Sisyphus can now be reinterpreted as a prescient warning about the thankless task of the sysadmin: to learn new stuff and maintain his own script library ;-) Sometimes a lot of work is wiped out because the corporate brass decides to switch to a different flavor of Linux, or "yet another flavor" is added due to a large acquisition. Add to this the inevitable technological changes, and the question arises: can't you find a more respectable profession, in which 66% of your knowledge is not replaced within the next ten years? For a talented and not too old person, staying in the sysadmin profession is probably a mistake, or at least a very questionable decision.
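One partial defense against this treadmill, for what it is worth, is to confine distro- and version-specific assumptions to a small compatibility shim instead of scattering them through every script. Below is a minimal sketch in bash; /etc/os-release is the standard location on systemd-era distros, while the function names here are hypothetical, not any particular tool's API:

```shell
#!/usr/bin/env bash
# Hypothetical compatibility shim: keep distro-specific knowledge in one
# file, so a distro switch or major version bump means revising the shim,
# not the whole script library.

# Print one field (e.g. ID, VERSION_ID) from an os-release style file.
os_release_field() {
    local field=$1 file=${2:-/etc/os-release}
    # os-release entries look like: ID="rhel" or VERSION_ID="7.9"
    sed -n "s/^${field}=//p" "$file" | tr -d '"'
}

# Branch on the init system instead of hardcoding one flavor of Linux.
restart_service() {
    local svc=$1
    if command -v systemctl >/dev/null 2>&1; then
        systemctl restart "$svc"
    else
        service "$svc" restart   # pre-systemd fallback
    fi
}
```

Scripts then call something like `os_release_field ID` once and branch on the result, so the next RHEL major version (or an acquired company's SUSE fleet) costs one edit to the shim rather than a rewrite of every script.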

The Balkanization of Linux is also demonstrated by the Tower of Babel of system programming languages (C, C++, Perl, Python, Ruby, Go, and Java, to name a few) and by systems that supposedly should help you but mostly do quite the opposite (Puppet, Ansible, Chef, etc.). Add to this the monitoring infrastructure (say, Nagios) and you definitely have information overload.

Inadequate training just adds to the stress. First of all, corporations no longer want to pay for it. So you are on your own and need to do it mostly in your free time, as the workload is substantial in most organizations. Of course, the summer "dead season" at least partially exists, but it is rather short. Using free or low-cost courses when they are available, or buying your own books and trying to learn new stuff from them, is of course the mark of any good sysadmin, but it should not be the only source of new knowledge. Communication with colleagues who have a high level of knowledge in selected areas is as important or even more important. But this is very difficult, as the sysadmin often works in isolation. Professional groups like Linux user groups exist mostly in the metropolitan areas of large cities. Coronavirus made those groups even more problematic.

The days when you could travel to a vendor training center for a week and have a chance to communicate with admins from other organizations (which was probably the most valuable part of the whole exercise) are long in the past. I can attest that training by Sun (Solaris) and IBM (AIX) in the late 1990s was of really high quality, delivered by highly qualified instructors from whom you could learn a lot outside the main topic of the course. Unlike "Trump University," Sun courses could probably have been called "Sun University." Most training now is via the Web, and chances for face-to-face communication have disappeared. Also, the stress has shifted from learning "why" to learning "how"; "why" topics are typically reserved for "advanced" courses.

Then there is the necessity to relearn stuff again and again, as new technologies/daemons/versions of the OS are often either the same as, or even inferior to, the previous ones, or represent an open scam in which training is a way to extract money from lemmings (Agile, most of the DevOps hoopla, etc.). This is the typical neoliberal mentality ("greed is good") implemented in education. There is also a tendency to treat virtual machines and cloud infrastructure as separate technologies, requiring separate training and separate sets of certifications (AWS, Azure). This is a kind of infantilization of the profession, in which a person who learned a lot of stuff over the previous 10 years needs to forget it and relearn most of it again and again.

Of course, sysadmins are not the only ones who suffer. Computer scientists also now struggle with the excessive level of complexity and the too quickly shifting sands. Look at the tragedy of Donald Knuth, with his lifelong project of creating a comprehensive monograph for system programmers (The Art of Computer Programming). He was flattened by the shifting sands and probably will not be able to finish even Volume 4 (out of the seven that were planned) in his lifetime.

Of course, much depends on the evolution of hardware and the changes it causes, such as the mass introduction of large SSDs, multi-core CPUs, and large amounts of RAM.

Nobody is now surprised to see a server with 128GB of RAM, a laptop with 16GB of RAM, or a cellphone with 4GB of RAM and a 2GHz CPU. (Note that the IBM PC started with 1MB of address space, of which only 640KB was available for programs, and a 4.77MHz (not GHz) single-core CPU without a floating-point unit.) Hardware evolution, while painful, is inevitable, and it changes the software landscape. Thank God hardware progress has slowed down recently as it has reached the physical limits of the technology (we probably will not see CPUs based on 2-nanometer lithography, or 8GHz CPU clock speeds, in our lifetimes), and progress is now mostly measured by the number of cores packed into the same die.

Then there is another set of significant changes, caused not by the progress of hardware (or software) but mainly by fashion and the desire of certain powerful corporations to entrench their market position. Such changes are more difficult to accept. It is difficult or even impossible to predict which technology will become fashionable tomorrow -- for example, how long DevOps will remain in fashion.

Typically such a techno-fashion lasts around a decade. After that it fades into oblivion, or is even debunked, and the former idols are shattered (the verification craze is a nice example here). For instance, this strange reinvention of the ideas of the "glass-walls datacenter" under the banner of DevOps (and old-timers still remember that IBM datacenters were hated with a passion, and this hate created an additional, non-technological incentive for minicomputers and later for the IBM PC) is characterized by the level of hype usually reserved for women's fashion. Moreover, sometimes it looks to me as if the movie The Devil Wears Prada is a subtle parable on sysadmin work.

Add to this a horrible job market, especially for university graduates and older sysadmins (see Over 50 and unemployed), and one starts to suspect that the life of the modern sysadmin is far from paradise. When you read some job descriptions on sites like Monster, Dice, or Indeed, you ask yourself whether those people really want to hire anybody, or whether such a job posting is just a smokescreen for the labor certification of H-1B candidates. The level of detail is often so precise that it is almost impossible to match the specialization. They do not care about the level of talent, and they do not want to train a suitable candidate. They want a person who fits 100% from day one. Also, positions are often available mostly in places like New York or San Francisco, where both rents and property prices are high and growing while income growth has been stagnant.

The vandalism of Unix performed by Red Hat with RHEL 7 makes the current environment somewhat unhealthy. It is clear that this was done to enhance Red Hat's marketing position, in the interests of the Red Hat and IBM brass, not in the interest of the community. This is a typical Microsoft-style trick, which made dozens of high-quality books written by very talented authors instantly semi-obsolete. And the question arises whether it makes sense to write any book about RHEL administration other than for a solid advance. Of course, systemd generated some backlash, but Red Hat's position as the Microsoft of Linux allows them to shove their inferior technical decisions down the community's throat. In a way it reminds me of the way Microsoft dealt with Windows 7, replacing it with Windows 10: essentially destroying the previous Windows interface ecosystem and putting keyboard users at a disadvantage (while preserving binary compatibility). Red Hat did essentially the same to server sysadmins.

Dr. Nikolai Bezroukov

P.S. See also

P.P.S. Here are my notes/reflections on sysadmin problems that often arise in the rather strange (and sometimes pretty toxic) IT departments of large corporations:


NEWS CONTENTS

Old News ;-)



A highly relevant quote about the life of a sysadmin: "I appreciate Woody Allen's humor because one of my safety valves is an appreciation for life's absurdities. His message is that life isn't a funeral march to the grave. It's a polka."

-- Dennis Kucinich

If you are frustrated read Admin Humor

[Jul 21, 2021] Walmart Brings Automation To Regional Distribution Centers - ZeroHedge

Jul 18, 2021 | www.zerohedge.com

Walmart Brings Automation To Regional Distribution Centers BY TYLER DURDEN SUNDAY, JUL 18, 2021 - 09:00 PM

The progressive press had a field day with "woke" Walmart's highly publicized February decision to hike wages for 425,000 workers to an average above $15 an hour. We doubt the obvious follow-up -- the ongoing stealthy replacement of many of its minimum-wage workers with machines -- will get the same amount of airtime.

As Chain Store Age reports, Walmart is applying artificial intelligence to the palletizing of products in its regional distribution centers. I.e., it is replacing thousands of workers with robots.

Since 2017, the discount giant has worked with Symbotic to optimize an automated technology solution to sort, store, retrieve and pack freight onto pallets in its Brooksville, Fla., distribution center. Under Walmart's existing system, product arrives at one of its RDCs and is either cross-docked or warehoused, while being moved or stored manually. When it's time for the product to go to a store, a 53-foot trailer is manually packed for transit. After the truck arrives at a store, associates unload it manually and place the items in the appropriate places.

Leveraging the Symbotic solution, a complex algorithm determines how to store cases like puzzle pieces using high-speed mobile robots that operate with a precision that speeds the intake process and increases the accuracy of freight being stored for future orders. By using dense modular storage, the solution also expands building capacity.

In addition, by using palletizing robotics to organize and optimize freight, the Symbotic solution creates custom store- and aisle-ready pallets.

Why is Walmart doing this? Simple: According to CSA, "Walmart expects to save time, limit out-of-stocks and increase the speed of stocking and unloading." More importantly, the company hopes to further cut expenses and remove even more unskilled labor from its supply chain.

This solution follows tests of similar automated warehouse solutions at a Walmart consolidation center in Colton, Calif., and perishable grocery distribution center in Shafter, Calif.

Walmart plans to implement this technology in 25 of its 42 RDCs.

"Though very few Walmart customers will ever see into our warehouses, they'll still be able to witness an industry-leading change, each time they find a product on shelves," said Joe Metzger, executive VP of supply chain operations at Walmart U.S. "There may be no way to solve all the complexities of a global supply chain, but we plan to keep changing the game as we use technology to transform the way we work and lead our business into the future."

[Jul 05, 2021] Pandemic Wave of Automation May Be Bad News for Workers

Jul 05, 2021 | www.nytimes.com

But wait: wasn't this recent rise in wages in real terms being propagandized as a new boom for the working class in the USA by the MSM until some days ago?

[Jul 04, 2021] Pandemic Wave of Automation May Be Bad News for Workers by Ben Casselman

Jul 03, 2021 | www.msn.com

And in the drive-through lane at Checkers near Atlanta, requests for Big Buford burgers and Mother Cruncher chicken sandwiches may be fielded not by a cashier in a headset, but by a voice-recognition algorithm.

Sign up for The Morning newsletter from The New York Times

An increase in automation, especially in service industries, may prove to be an economic legacy of the pandemic. Businesses from factories to fast-food outlets to hotels turned to technology last year to keep operations running amid social distancing requirements and contagion fears. Now the outbreak is ebbing in the United States, but the difficulty in hiring workers -- at least at the wages that employers are used to paying -- is providing new momentum for automation.

Technological investments that were made in response to the crisis may contribute to a post-pandemic productivity boom, allowing for higher wages and faster growth. But some economists say the latest wave of automation could eliminate jobs and erode bargaining power, particularly for the lowest-paid workers, in a lasting way.

© Lynsey Weatherspoon for The New York Times -- The artificial intelligence system that feeds information to the kitchen at a Checkers.

"Once a job is automated, it's pretty hard to turn back," said Casey Warman, an economist at Dalhousie University in Nova Scotia who has studied automation in the pandemic.

The trend toward automation predates the pandemic, but it has accelerated at what is proving to be a critical moment. The rapid reopening of the economy has led to a surge in demand for waiters, hotel maids, retail sales clerks and other workers in service industries that had cut their staffs. At the same time, government benefits have allowed many people to be selective in the jobs they take. Together, those forces have given low-wage workers a rare moment of leverage, leading to higher pay, more generous benefits and other perks.

Automation threatens to tip the advantage back toward employers, potentially eroding those gains. A working paper published by the International Monetary Fund this year predicted that pandemic-induced automation would increase inequality in coming years, not just in the United States but around the world.

"Six months ago, all these workers were essential," said Marc Perrone, president of the United Food and Commercial Workers, a union representing grocery workers. "Everyone was calling them heroes. Now, they're trying to figure out how to get rid of them."

Checkers, like many fast-food restaurants, experienced a jump in sales when the pandemic shut down most in-person dining. But finding workers to meet that demand proved difficult -- so much so that Shana Gonzales, a Checkers franchisee in the Atlanta area, found herself back behind the cash register three decades after she started working part time at Taco Bell while in high school.

© Lynsey Weatherspoon for The New York Times -- Technology is easing pressure on workers and speeding up service when restaurants are chronically understaffed, Ms. Gonzales said.

"We really felt like there has to be another solution," she said.

So Ms. Gonzales contacted Valyant AI, a Colorado-based start-up that makes voice recognition systems for restaurants. In December, after weeks of setup and testing, Valyant's technology began taking orders at one of Ms. Gonzales's drive-through lanes. Now customers are greeted by an automated voice designed to understand their orders -- including modifications and special requests -- suggest add-ons like fries or a shake, and feed the information directly to the kitchen and the cashier.

The rollout has been successful enough that Ms. Gonzales is getting ready to expand the system to her three other restaurants.

"We'll look back and say why didn't we do this sooner," she said.

The push toward automation goes far beyond the restaurant sector. Hotels, retailers, manufacturers and other businesses have all accelerated technological investments. In a survey of nearly 300 global companies by the World Economic Forum last year, 43 percent of businesses said they expected to reduce their work forces through new uses of technology.

Some economists see the increased investment as encouraging. For much of the past two decades, the U.S. economy has struggled with weak productivity growth, leaving workers and stockholders to compete over their share of the income -- a game that workers tended to lose. Automation may harm specific workers, but if it makes the economy more productive, that could be good for workers as a whole, said Katy George, a senior partner at McKinsey, the consulting firm.

She cited the example of a client in manufacturing who had been pushing his company for years to embrace augmented-reality technology in its factories. The pandemic finally helped him win the battle: With air travel off limits, the technology was the only way to bring in an expert to help troubleshoot issues at a remote plant.

"For the first time, we're seeing that these technologies are both increasing productivity, lowering cost, but they're also increasing flexibility," she said. "We're starting to see real momentum building, which is great news for the world, frankly."

Other economists are less sanguine. Daron Acemoglu of the Massachusetts Institute of Technology said that many of the technological investments had just replaced human labor without adding much to overall productivity.

In a recent working paper, Professor Acemoglu and a colleague concluded that "a significant portion of the rise in U.S. wage inequality over the last four decades has been driven by automation" -- and he said that trend had almost certainly accelerated in the pandemic.

"If we automated less, we would not actually have generated that much less output but we would have had a very different trajectory for inequality," Professor Acemoglu said.

Ms. Gonzales, the Checkers franchisee, isn't looking to cut jobs. She said she would hire 30 people if she could find them. And she has raised hourly pay to about $10 for entry-level workers, from about $9 before the pandemic. Technology, she said, is easing pressure on workers and speeding up service when restaurants are chronically understaffed.

"Our approach is, this is an assistant for you," she said. "This allows our employee to really focus" on customers.

Ms. Gonzales acknowledged she could fully staff her restaurants if she offered $14 to $15 an hour to attract workers. But doing so, she said, would force her to raise prices so much that she would lose sales -- and automation allows her to take another course.

Rob Carpenter, Valyant's chief executive, noted that at most restaurants, taking drive-through orders is only part of an employee's responsibilities. Automating that task doesn't eliminate a job; it makes the job more manageable.

"We're not talking about automating an entire position," he said. "It's just one task within the restaurant, and it's gnarly, one of the least desirable tasks."

But technology doesn't have to take over all aspects of a job to leave workers worse off. If automation allows a restaurant that used to require 10 employees a shift to operate with eight or nine, that will mean fewer jobs in the long run. And even in the short term, the technology could erode workers' bargaining power.

"Often you displace enough of the tasks in an occupation and suddenly that occupation is no more," Professor Acemoglu said. "It might kick me out of a job, or if I keep my job I'll get lower wages."

At some businesses, automation is already affecting the number and type of jobs available. Meltwich, a restaurant chain that started in Canada and is expanding into the United States, has embraced a range of technologies to cut back on labor costs. Its grills no longer require someone to flip burgers -- they grill both sides at once, and need little more than the press of a button.

"You can pull a less-skilled worker in and have them adapt to our system much easier," said Ryan Hillis, a Meltwich vice president. "It certainly widens the scope of who you can have behind that grill."

With more advanced kitchen equipment, software that allows online orders to flow directly to the restaurant and other technological advances, Meltwich needs only two to three workers on a shift, rather than three or four, Mr. Hillis said.

Such changes, multiplied across thousands of businesses in dozens of industries, could significantly change workers' prospects. Professor Warman, the Canadian economist, said technologies developed for one purpose tend to spread to similar tasks, which could make it hard for workers harmed by automation to shift to another occupation or industry.

"If a whole sector of labor is hit, then where do those workers go?" Professor Warman said. Women, and to a lesser degree people of color, are likely to be disproportionately affected, he added.

The grocery business has long been a source of steady, often unionized jobs for people without a college degree. But technology is changing the sector. Self-checkout lanes have reduced the number of cashiers; many stores have simple robots to patrol aisles for spills and check inventory; and warehouses have become increasingly automated. Kroger in April opened a 375,000-square-foot warehouse with more than 1,000 robots that bag groceries for delivery customers. The company is even experimenting with delivering groceries by drone.

Other companies in the industry are doing the same. Jennifer Brogan, a spokeswoman for Stop & Shop, a grocery chain based in New England, said that technology allowed the company to better serve customers -- and that it was a competitive necessity.

"Competitors and other players in the retail space are developing technologies and partnerships to reduce their costs and offer improved service and value for customers," she said. "Stop & Shop needs to do the same."

In 2011, Patrice Thomas took a part-time job in the deli at a Stop & Shop in Norwich, Conn. A decade later, he manages the store's prepared foods department, earning around $40,000 a year.

Mr. Thomas, 32, said that he wasn't concerned about being replaced by a robot anytime soon, and that he welcomed technologies making him more productive -- like more powerful ovens for rotisserie chickens and blast chillers that quickly cool items that must be stored cold.

But he worries about other technologies -- like automated meat slicers -- that seem to enable grocers to rely on less experienced, lower-paid workers and make it harder to build a career in the industry.

"The business model we seem to be following is we're pushing toward automation and we're not investing equally in the worker," he said. "Today it's, 'We want to get these robots in here to replace you because we feel like you're overpaid and we can get this kid in there and all he has to do is push this button.'"

[Jun 26, 2021] Replace man pages with Tealdeer on Linux - Opensource.com

Jun 22, 2021 | opensource.com

Replace man pages with Tealdeer on Linux. Tealdeer is a Rust implementation of tldr, which provides easy-to-understand information about common commands. 21 Jun 2021, Sudeshna Sur (Red Hat, Correspondent). Image by: Opensource.com


Man pages were my go-to resource when I started exploring Linux. Certainly, man is the most frequently used command when a beginner starts getting familiar with the world of the command line. But man pages, with their extensive lists of options and arguments, can be hard to decipher, which makes it difficult to find the specific information you need. If you want an easier solution with example-based output, I think tldr is the best option.

What's Tealdeer?

Tealdeer is a wonderful implementation of tldr in Rust. It's a community-driven collection of simplified man pages that gives very simple examples of how commands work. The best part about Tealdeer is that it covers virtually every command you would normally use.

Install Tealdeer

On Linux, you can install Tealdeer from your software repository. For example, on Fedora :

$ sudo dnf install tealdeer

On macOS, use MacPorts or Homebrew.

Alternately, you can build and install the tool with Rust's Cargo package manager:

$ cargo install tealdeer
Use Tealdeer

Entering tldr --list returns the list of man pages tldr supports, like touch, tar, dnf, docker, zcat, zgrep, and so on:

$ tldr --list
2to3
7z
7za
7zr
[
a2disconf
a2dismod
a2dissite
a2enconf
a2enmod
a2ensite
a2query
[ ... ]

Using tldr with a specific command (like tar) shows example-based man pages that describe the most common ways to use that command:

$ tldr tar

Archiving utility.
Often combined with a compression method, such as gzip or bzip2.
More information: <https://www.gnu.org/software/tar>.

[c]reate an archive and write it to a [f]ile:

tar cf target.tar file1 file2 file3

[c]reate a g[z]ipped archive and write it to a [f]ile:

tar czf target.tar.gz file1 file2 file3

[c]reate a g[z]ipped archive from a directory using relative paths:

tar czf target.tar.gz --directory=path/to/directory .

E[x]tract a (compressed) archive [f]ile into the current directory [v]erbosely:

tar xvf source.tar[.gz|.bz2|.xz]

E[x]tract a (compressed) archive [f]ile into the target directory:

tar xf source.tar[.gz|.bz2|.xz] --directory=directory

[c]reate a compressed archive and write it to a [f]ile, using [a]rchive suffix to determine the compression program:

tar caf target.tar.xz file1 file2 file3

To control the cache:

$ tldr --update
$ tldr --clear-cache

You can give Tealdeer output some color with the --color option, setting it to always, auto, or never. The default is auto, but I like the added context color provides, so I make mine permanent with this addition to my ~/.bashrc file:

alias tldr='tldr --color always'
Conclusion

The beauty of Tealdeer is you don't need a network connection to use it, except when you're updating the cache. So, even if you are offline, you can still search for and learn about your new favorite command. For more information, consult the tool's documentation .

Would you use Tealdeer? Or are you already using it? Let us know what you think in the comments below.

[Jun 19, 2021] How To Comment Out Multiple Lines At Once In Vim Editor - OSTechNix

Jun 19, 2021 | ostechnix.com

Method 1:

Step 1: Open the file using vim editor with command:

$ vim ostechnix.txt

Step 2: Highlight the lines that you want to comment out. To do so, go to the line you want to comment and move the cursor to the beginning of a line.

Press SHIFT+V to highlight the whole line after the cursor. After highlighting the first line, press UP or DOWN arrow keys or k or j to highlight the other lines one by one.

Here is how the lines will look like after highlighting them.

Highlight lines in Vim editor

Step 3: After highlighting the lines that you want to comment out, press : (Vim automatically prefixes the range '<,'> for the highlighted lines), then type the following and hit the ENTER key:

:s/^/# /

Please mind the space between # and the last forward slash ( / ).

Now you will see that the selected lines are commented out, i.e. a # symbol is added at the beginning of each line.

Comment out multiple lines at once in Vim editor

Here, s stands for "substitution". In our case, we substitute the caret symbol ^ (which anchors the beginning of the line) with # (hash). As we all know, we put # in front of a line to comment it out.

Step 4: After commenting the lines, you can type :w to save the changes or type :wq to save the file and exit.
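Incidentally, the same range substitution can be applied non-interactively from the shell with sed (a hedged sketch; GNU sed's -i flag and the file name are assumptions for illustration):

```shell
# Create a small demo file, then comment out its first four lines
# in place, using the same s/^/# / substitution as in Vim
printf '%s\n' alpha beta gamma delta epsilon > demo.txt
sed -i '1,4s/^/# /' demo.txt
cat demo.txt
```

The cat at the end shows the first four lines prefixed with "# " while the fifth line is untouched.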

Let us move on to the next method.

Method 2:

Step 1: Open the file in vim editor.

$ vim ostechnix.txt

Step 2: Set line numbers by typing the following in vim editor and hit ENTER.

:set number
Set line numbers in Vim

Step 3: Then enter the following command:

:1,4s/^/#

In this case, we are commenting out the lines from 1 to 4 . Check the following screenshot. The lines from 1 to 4 have been commented out.

Comment out multiple lines at once in Vim editor

Step 4: Finally, unset the line numbers.

:set nonumber

Step 5: To save the changes type :w or :wq to save the file and exit.

The same procedure can be used for uncommenting lines in a file. Open the file and set the line numbers as shown in Step 2. Then, at Step 3, type the following command and hit ENTER:

:1,4s/^#//

After uncommenting the lines, simply remove the line numbers by entering the following command:

:set nonumber

Let us go ahead and see the third method.

Method 3:

This one is similar to Method 2 but slightly different.

Step 1: Open the file in vim editor.

$ vim ostechnix.txt

Step 2: Set line numbers by typing:

:set number

Step 3: Type the following to comment out the lines.

:1,4s/^/# /

The above command will comment out lines from 1 to 4.

Comment out multiple lines at once in Vim editor

Step 4: Finally, unset the line numbers by typing the following.

:set nonumber
Method 4:

This method is suggested by one of our readers, Mr. Anand Nande, in the comment section below.

Step 1: Open file in vim editor:

$ vim ostechnix.txt

Step 2: Go to the line you want to comment. Press Ctrl+V to enter into 'Visual block' mode.

Enter into Visual block mode in Vim editor

Step 3: Press the UP or DOWN arrow keys or the letters k or j on your keyboard to select all the lines that you want commented out in your file.

Select the lines to comment in Vim

Step 4: Press Shift+i to enter into INSERT mode. This will place your cursor on the first line.

Step 5: And then insert # (press Shift+3 ) before your first line.

Insert hash symbol before a line in Vim

Step 6: Finally, press ESC key. This will insert # on all other selected lines.

Comment out multiple lines at once in Vim editor

As you see in the above screenshot, all other selected lines including the first line are commented out.

Method 5:

This method is suggested by one of our Twitter followers and friends, Mr. Tim Chase. We can even target lines to comment out by regex. In other words, we can comment out all the lines that contain a specific word.

Step 1: Open the file in vim editor.

$ vim ostechnix.txt

Step 2: Type the following and press ENTER key:

:g/Linux/s/^/# /

The above command will comment out all lines that contain the word "Linux". Replace "Linux" with a word of your choice.

Comment out all lines that contains a specific word in Vim editor

As you see in the above output, all the lines contain the word "Linux", hence all of them are commented out.

And, that's all for now. I hope this was useful. If you know any method other than the ones given here, please let me know in the comment section below. I will check and add them to the guide.


Also, have a look at the comment section below. One of our visitors has shared a good guide about Vim usage.

Related read:

[Jun 12, 2021] Ctrl-R -- Find and run a previous command

Jun 12, 2021 | anto.online

What if you needed to execute a specific command again, one that you used a while back? You can't remember the first character, but you can remember that you used the word "serve".

You can use the up arrow key and keep pressing it until you find your command. (That could take some time.)

Or, you can press CTRL+R and type a few characters from that command. The shell will search your history for a match; press ENTER to run the command once you have found it. The example below shows how you can press CTRL+R and then type "ser" to find the previously run "php artisan serve" command. For sure, this tip will help you speed up your command-line experience.

anto@odin:~$ 
(reverse-i-search)`ser': php artisan serve

You can also use the history command to output all the previously stored commands. The history command gives a list ordered by execution, oldest first.
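The history file itself is plain text, so it can also be searched with grep; a minimal sketch (a sample file stands in for ~/.bash_history, which only interactive shells write):

```shell
# Build a sample history file and search it for a keyword,
# mimicking what CTRL+R does interactively
printf '%s\n' 'ls -la' 'php artisan serve' 'git status' > sample_history
grep -n 'serve' sample_history    # → 2:php artisan serve
```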

[Jun 12, 2021] The use of PS4= LINENO in debugging bash scripts

Jun 10, 2021 | www.redhat.com

Exit status

In Bash scripting, $? holds the exit status of the most recent command. If it is zero, the command succeeded. If it is non-zero, you can conclude that the earlier task had some issue.

A basic example is as follows:

$ cat myscript.sh
#!/bin/bash
mkdir learning
echo $?

If you run the above script once, it will print 0 because the directory does not yet exist, so the script creates it. Naturally, you will get a non-zero value if you run the script a second time, as seen below:

$ sh myscript.sh
mkdir: cannot create directory 'learning': File exists
1
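The exit status can also drive a conditional directly; a minimal sketch (the directory name is just an example):

```shell
# Branch on the exit status ($?) of the previous command
mkdir learning 2>/dev/null
if [ $? -eq 0 ]; then
    echo "directory created"
else
    echo "mkdir failed: the directory probably already exists"
fi
```

In practice, the command can also be tested in the if itself: `if mkdir learning 2>/dev/null; then ...`.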
Best practices

It is always recommended to enable trace mode by adding set -x to your shell script, as below (the misspelled mkdiir is deliberate, to trigger an error):

$ cat test3.sh
#!/bin/bash
set -x
echo "hello World"
mkdiir testing

$ ./test3.sh
+ echo 'hello World'
hello World
+ mkdiir testing
./test3.sh: line 4: mkdiir: command not found

You can write a debug function as below, which you can then call anytime, as in the following example:

$ cat debug.sh
#!/bin/bash
_DEBUG="on"
function DEBUG()
{
 [ "$_DEBUG" == "on" ] && "$@"
}
DEBUG echo 'Testing Debugging'
DEBUG set -x
a=2
b=3
c=$(( $a + $b ))
DEBUG set +x
echo "$a + $b = $c"

Which prints:

$ ./debug.sh
Testing Debugging
+ a=2
+ b=3
+ c=5
+ DEBUG set +x
+ '[' on == on ']'
+ set +x
2 + 3 = 5
Standard error redirection

You can redirect all the system errors to a custom file using standard error, which is denoted by the file descriptor number 2. Execute it with normal Bash commands, as demonstrated below:

$ mkdir users 2> errors.txt
$ cat errors.txt
mkdir: cannot create directory 'users': File exists
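The two streams can also be split or merged in one command; a short sketch (the file names are examples):

```shell
# Send stdout (fd 1) and stderr (fd 2) to separate files...
ls /etc/hosts /nonexistent_path > out.txt 2> err.txt
# ...or merge both streams into a single file
ls /etc/hosts /nonexistent_path > both.txt 2>&1
```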

Most of the time, it is difficult to find the exact line number in scripts. To print the line number with the trace output, set the PS4 variable to include $LINENO (supported with Bash 4.1 or later). Example below:

$ cat test3.sh
#!/bin/bash
PS4='$LINENO: '

set -x
echo "hello World"
mkdiir testing

You can easily see the line number while reading the errors:

$ ./test3.sh
5: echo 'hello World'
hello World
6: mkdiir testing
./test3.sh: line 6: mkdiir: command not found
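PS4 can carry more context than just the line number; for instance, the script name via BASH_SOURCE (a sketch; the script and log file names are examples):

```shell
# Write a small script whose trace prefix shows file:line,
# run it, and capture the trace (which goes to stderr)
cat > trace_demo.sh <<'EOF'
#!/bin/bash
PS4='+${BASH_SOURCE}:${LINENO}: '
set -x
echo "hello World"
EOF
bash trace_demo.sh 2> trace.log
cat trace.log    # → +trace_demo.sh:4: echo 'hello World'
```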

[Jun 12, 2021] 7 'dmesg' Commands for Troubleshooting and Collecting Information of Linux Systems

Jun 09, 2021 | www.tecmint.com

List all Detected Devices

To discover which hard disks have been detected by the kernel, you can search for the keyword "sda" with grep, as shown below.

# dmesg | grep sda

[    1.280971] sd 2:0:0:0: [sda] 488281250 512-byte logical blocks: (250 GB/232 GiB)
[    1.281014] sd 2:0:0:0: [sda] Write Protect is off
[    1.281016] sd 2:0:0:0: [sda] Mode Sense: 00 3a 00 00
[    1.281039] sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    1.359585]  sda: sda1 sda2 < sda5 sda6 sda7 sda8 >
[    1.360052] sd 2:0:0:0: [sda] Attached SCSI disk
[    2.347887] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null)
[   22.928440] Adding 3905532k swap on /dev/sda6.  Priority:-1 extents:1 across:3905532k FS
[   23.950543] EXT4-fs (sda1): re-mounted. Opts: errors=remount-ro
[   24.134016] EXT4-fs (sda5): mounted filesystem with ordered data mode. Opts: (null)
[   24.330762] EXT4-fs (sda7): mounted filesystem with ordered data mode. Opts: (null)
[   24.561015] EXT4-fs (sda8): mounted filesystem with ordered data mode. Opts: (null)

NOTE: 'sda' is the first SATA hard drive, 'sdb' is the second SATA hard drive, and so on. Search for 'hda' or 'hdb' in the case of IDE hard drives.
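Several device names can be matched at once with an extended regex; a sketch (sample lines stand in here for live dmesg output, which may require root on some systems):

```shell
# Filter kernel-log-style lines for two disks in one pass
printf '%s\n' \
  '[ 1.28] sd 2:0:0:0: [sda] Attached SCSI disk' \
  '[ 1.30] sd 3:0:0:0: [sdb] Attached SCSI disk' \
  '[ 2.00] usb 1-1: new high-speed USB device' > kernel.log
grep -E 'sd[ab]' kernel.log    # matches the sda and sdb lines only
```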

[Jun 12, 2021] What is your Linux server hardware decommissioning process

May 20, 2021
Jun 10, 2021 | www.redhat.com

by Ken Hess (Red Hat)

Even small to medium-sized companies have some sort of governance surrounding server decommissioning. They might not call it decommissioning but the process usually goes something like the following:

[Jun 12, 2021] A Big Chunk of the Internet Goes Offline Because of a Faulty CDN Provider

Jun 10, 2021 | tech.slashdot.org

(techcrunch.com) Countless popular websites including Reddit, Spotify, Twitch, Stack Overflow, GitHub, gov.uk, Hulu, HBO Max, Quora, PayPal, Vimeo, Shopify, Stripe, and news outlets CNN, The Guardian, The New York Times, BBC and Financial Times are currently facing an outage. A glitch at Fastly, a popular CDN provider, is thought to be the reason, according to a product manager at Financial Times. Fastly has confirmed it's facing an outage on its status website.

[Jun 12, 2021] 12 Useful Linux date Command Examples

Jun 10, 2021 | vitux.com

Displaying Date From String

We can display a formatted date from a date string provided by the user using the -d or --date option to the command. It will not affect the system date; it only parses the requested date from the string. For example,

$ date -d "Feb 14 1999"

Parsing string to date.

$ date --date="09/10/1960"

Parsing string to date.

Displaying Upcoming Date & Time With -d Option

Aside from parsing the date, we can also display upcoming dates using the -d option with the command. The date command understands words that refer to time or date values, such as next Sun, last Friday, tomorrow, and yesterday. For example:

Displaying Next Monday Date

$ date -d "next Mon"

Displaying upcoming date.

Displaying Past Date & Time With -d Option

Using the -d option, we can also view past dates. For example:

Displaying Last Friday Date
$ date -d "last Fri"

Displaying past date

Parse Date From File

If you have static date strings recorded in a file, we can parse them in the preferred date format using the -f option with the date command. In this way, you can format multiple dates with one command. In the following example, I have created a file that contains a list of date strings and parsed it with the command.

$ date -f datefile.txt

Parse date from the file.

Setting Date & Time on Linux

We can not only view the date but also set the system date according to our preference. For this, you need a user with sudo access, and you can execute the command in the following way:

$ sudo date -s "Sun 30 May 2021 07:35:06 PM PDT"
Display File Last Modification Time

We can check a file's last modification time using the date command; for this, we need to add the -r option. It helps in tracking when files were last modified. For example,

$ date -r /etc/hosts
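A +FORMAT string can be combined with these options to control the output layout (GNU date; the format strings here are examples):

```shell
# Parse a date string and print it in ISO form
date -d "Feb 14 1999" +%Y-%m-%d            # → 1999-02-14
# Print a file's last-modified time in a custom format
date -r /etc/hosts '+%Y-%m-%d %H:%M:%S'
```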

[Jun 12, 2021] Sidewalk Robots are Now Delivering Food in Miami

Notable quotes:
"... Florida Sun-Sentinel ..."
"... [A spokesperson says later in the article "there is always a remote and in-field team looking for the robot."] ..."
"... the Sun-Sentinel reports that "In about six months, at least 16 restaurants came on board making nearly 70,000 deliveries... ..."
Jun 06, 2021 | hardware.slashdot.org

18-inch tall robots on four wheels zipping across city sidewalks "stopped people in their tracks as they whipped out their camera phones," reports the Florida Sun-Sentinel .

"The bots' mission: To deliver restaurant meals cheaply and efficiently, another leap in the way food comes to our doors and our tables." The semiautonomous vehicles were engineered by Kiwibot, a company started in 2017 to game-change the food delivery landscape...

In May, Kiwibot sent a 10-robot fleet to Miami as part of a nationwide pilot program funded by the Knight Foundation. The program is driven to understand how residents and consumers will interact with this type of technology, especially as the trend of robot servers grows around the country.

And though Broward County is of interest to Kiwibot, Miami-Dade County officials jumped on board, agreeing to launch robots around neighborhoods such as Brickell, downtown Miami and several others, in the next couple of weeks...

"Our program is completely focused on the residents of Miami-Dade County and the way they interact with this new technology. Whether it's interacting directly or just sharing the space with the delivery bots,"

said Carlos Cruz-Casas, with the county's Department of Transportation...

Remote supervisors use real-time GPS tracking to monitor the robots. Four cameras are placed on the front, back and sides of the vehicle, which the supervisors can view on a computer screen. [A spokesperson says later in the article "there is always a remote and in-field team looking for the robot."] If crossing the street is necessary, the robot will need a person nearby to ensure there is no harm to cars or pedestrians. The plan is to allow deliveries up to a mile and a half away so robots can make it to their destinations in 30 minutes or less.

Earlier Kiwi tested its sidewalk-travelling robots around the University of California at Berkeley, where at least one of its robots burst into flames. But the Sun-Sentinel reports that "In about six months, at least 16 restaurants came on board making nearly 70,000 deliveries...

"Kiwibot now offers their robotic delivery services in other markets such as Los Angeles and Santa Monica by working with the Shopify app to connect businesses that want to employ their robots." But while delivery fees are normally $3, this new Knight Foundation grant "is making it possible for Miami-Dade County restaurants to sign on for free."

A video shows the reactions the sidewalk robots are getting from pedestrians on a sidewalk, a dog on a leash, and at least one potential restaurant customer looking forward to no longer having to tip human food-delivery workers.

... ... ...

[Jun 08, 2021] Technical Evaluations- 6 questions to ask yourself

Average but still useful enumeration of factors that should be considered. One question stands out: "Is that SaaS app really cheaper than more headcount?" :-)
Notable quotes:
"... You may decide that this is not a feasible project for the organization at this time due to a lack of organizational knowledge around containers, but conscientiously accepting this tradeoff allows you to put containers on a roadmap for the next quarter. ..."
"... Bells and whistles can be nice, but the tool must resolve the core issues you identified in the first question. ..."
"... Granted, not everything has to be a cost-saving proposition. Maybe it won't be cost-neutral if you save the dev team a couple of hours a day, but you're removing a huge blocker in their daily workflow, and they would be much happier for it. That happiness is likely worth the financial cost. Onboarding new developers is costly, so don't underestimate the value of increased retention when making these calculations. ..."
Apr 21, 2021 | www.redhat.com

When introducing a new tool, programming language, or dependency into your environment, what steps do you take to evaluate it? In this article, I will walk through a six-question framework I use to make these determinations.

What problem am I trying to solve?

We all get caught up in the minutiae of the immediate problem at hand. An honest, critical assessment helps divulge broader root causes and prevents micro-optimizations.

[ You might also like: Six deployment steps for Linux services and their related tools ]

Let's say you are experiencing issues with your configuration management system. Day-to-day operational tasks are taking longer than they should, and working with the language is difficult. A new configuration management system might alleviate these concerns, but make sure to take a broader look at this system's context. Maybe switching from virtual machines to immutable containers eases these issues and more across your environment while being an equivalent amount of work. At this point, you should explore the feasibility of more comprehensive solutions as well. You may decide that this is not a feasible project for the organization at this time due to a lack of organizational knowledge around containers, but conscientiously accepting this tradeoff allows you to put containers on a roadmap for the next quarter.

This intellectual exercise helps you drill down to the root causes and solve core issues, not the symptoms of larger problems. This is not always going to be possible, but be intentional about making this decision.

Does this tool solve that problem?

Now that we have identified the problem, it is time for critical evaluation of both ourselves and the selected tool.

A particular technology might seem appealing because it is new, because you read a cool blog post about it, or because you want to be the one giving a conference talk. Bells and whistles can be nice, but the tool must resolve the core issues you identified in the first question.

What am I giving up?

The tool will, in fact, solve the problem, and we know we're solving the right problem, but what are the tradeoffs?

These considerations can be purely technical. Will the lack of observability tooling prevent efficient debugging in production? Does the closed-source nature of this tool make it more difficult to track down subtle bugs? Is managing yet another dependency worth the operational benefits of using this tool?

Additionally, include the larger organizational, business, and legal contexts that you operate under.

Are you giving up control of a critical business workflow to a third-party vendor? If that vendor doubles their API cost, is that something that your organization can afford and is willing to accept? Are you comfortable with closed-source tooling handling a sensitive bit of proprietary information? Does the software licensing make this difficult to use commercially?

While not simple questions to answer, taking the time to evaluate this upfront will save you a lot of pain later on.

Is the project or vendor healthy?

This question comes with the addendum "for the balance of your requirements." If you only need a tool to get your team over a four to six-month hump until Project X is complete, this question becomes less important. If this is a multi-year commitment and the tool drives a critical business workflow, this is a concern.

When going through this step, make use of all available resources. If the solution is open source, look through the commit history, mailing lists, and forum discussions about that software. Does the community seem to communicate effectively and work well together, or are there obvious rifts between community members? If part of what you are purchasing is a support contract, use that support during the proof-of-concept phase. Does it live up to your expectations? Is the quality of support worth the cost?

Make sure you take a step beyond GitHub stars and forks when evaluating open source tools as well. Something might hit the front page of a news aggregator and receive attention for a few days, but a deeper look might reveal that only a couple of core developers are actually working on a project, and they've had difficulty finding outside contributions. Maybe a tool is open source, but a corporate-funded team drives core development, and support will likely cease if that organization abandons the project. Perhaps the API has changed every six months, causing a lot of pain for folks who have adopted earlier versions.

What are the risks?

As a technologist, you understand that nothing ever goes as planned. Networks go down, drives fail, servers reboot, rows in the data center lose power, entire AWS regions become inaccessible, or BGP hijacks re-route hundreds of terabytes of Internet traffic.

Ask yourself how this tooling could fail and what the impact would be. If you are adding a security vendor product to your CI/CD pipeline, what happens if the vendor goes down?


This brings up both technical and business considerations. Do the CI/CD pipelines simply time out because they can't reach the vendor, or do you have it "fail open" and allow the pipeline to complete with a warning? This is a technical problem but ultimately a business decision. Are you willing to go to production with a change that has bypassed the security scanning in this scenario?

Obviously, this task becomes more difficult as we increase the complexity of the system. Thankfully, sites like k8s.af consolidate example outage scenarios. These public postmortems are very helpful for understanding how a piece of software can fail and how to plan for that scenario.

What are the costs?

The primary considerations here are employee time and, if applicable, vendor cost. Is that SaaS app cheaper than more headcount? If you save each developer on the team two hours a day with that new CI/CD tool, does it pay for itself over the next fiscal year?

Granted, not everything has to be a cost-saving proposition. Maybe it won't be cost-neutral if you save the dev team a couple of hours a day, but you're removing a huge blocker in their daily workflow, and they would be much happier for it. That happiness is likely worth the financial cost. Onboarding new developers is costly, so don't underestimate the value of increased retention when making these calculations.


Wrap up

I hope you've found this framework insightful, and I encourage you to incorporate it into your own decision-making processes. There is no one-size-fits-all framework that works for every decision. Don't forget that, sometimes, you might need to go with your gut and make a judgment call. However, having a standardized process like this will help differentiate between those times when you can critically analyze a decision and when you need to make that leap.

[Jun 08, 2021] How to use TEE command in Linux

Apr 21, 2021 | linuxtechlab.com

3- Write output to multiple files

With the tee command, we also have the option to copy the output to multiple files, which can be done as follows:

# free -m | tee output1.txt output2.txt

... ... ...

5- Ignore any interrupts

There are instances where a running command might be interrupted (for example, by Ctrl+C), but we can have tee ignore such interrupt signals with the '-i' option:

# ping -c 3 localhost | tee -i output1.txt
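
Beyond these basics, tee's -a flag appends to a file instead of overwriting it, which is handy for accumulating a log across multiple runs. A minimal sketch (the file names here are arbitrary examples):

```shell
# Write the same output to two files at once, then append a second
# line to only the first file with -a.
printf 'first run\n'  | tee log1.txt log2.txt > /dev/null
printf 'second run\n' | tee -a log1.txt > /dev/null
cat log1.txt    # log1.txt now holds both lines; log2.txt only the first
```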

[Jun 08, 2021] Recovery LVM Data from RAID

May 24, 2021 | blog.dougco.com


We had a client that had an OLD fileserver box, a Thecus N4100PRO. It was completely dust-ridden and the power supply had burned out.

Since these drives were in a RAID configuration, you could not hook any one of them up to a Windows or Linux box to see the data. You have to hook them all up to a box and reassemble the RAID.

We took out the drives (3 of them) and then used an external SATA to USB box to connect them to a Linux server running CentOS. You can use parted to see what drives are now being seen by your linux system:

parted -l | grep 'raid\|sd'

Then using that output, we assembled the drives into a software array:

mdadm -A /dev/md0 /dev/sdb2 /dev/sdc2 /dev/sdd2

If we tried to only use two of those drives, it would give an error, since these were all in a linear RAID in the Thecus box.

If the last command went well, you can see the built array like so:

root% cat /proc/mdstat
Personalities : [linear]
md0 : active linear sdd2[0] sdb2[2] sdc2[1]
1459012480 blocks super 1.0 128k rounding

Note the personality shows the RAID type, in our case it was linear, which is probably the worst RAID since if any one drive fails, your data is lost. So good thing these drives outlasted the power supply! Now we find the physical volume:

pvdisplay /dev/md0

Gives us:

-- Physical volume --
PV Name /dev/md0
VG Name vg0
PV Size 1.36 TB / not usable 704.00 KB
Allocatable yes
PE Size (KByte) 2048
Total PE 712408
Free PE 236760
Allocated PE 475648
PV UUID iqwRGX-zJ23-LX7q-hIZR-hO2y-oyZE-tD38A3

Then we find the logical volume:

lvdisplay /dev/vg0

Gives us:

-- Logical volume --
LV Name /dev/vg0/syslv
VG Name vg0
LV UUID UtrwkM-z0lw-6fb3-TlW4-IpkT-YcdN-NY1orZ
LV Write Access read/write
LV Status NOT available
LV Size 1.00 GB
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors 16384

-- Logical volume --
LV Name /dev/vg0/lv0
VG Name vg0
LV UUID 0qsIdY-i2cA-SAHs-O1qt-FFSr-VuWO-xuh41q
LV Write Access read/write
LV Status NOT available
LV Size 928.00 GB
Current LE 475136
Segments 1
Allocation inherit
Read ahead sectors 16384

We want to focus on the lv0 volume. You cannot mount it yet; first check the state of the volumes with lvscan.

lvscan

This shows us the volumes are currently inactive:

inactive '/dev/vg0/syslv' [1.00 GB] inherit
inactive '/dev/vg0/lv0' [928.00 GB] inherit

So we set them active with:

vgchange -a y vg0

And doing lvscan again shows:

ACTIVE '/dev/vg0/syslv' [1.00 GB] inherit
ACTIVE '/dev/vg0/lv0' [928.00 GB] inherit

Now we can mount with:

mount /dev/vg0/lv0 /mnt

And voilà! We have our data up and accessible in /mnt to recover! Of course your setup is most likely going to look different from what I have shown you above, but hopefully this gives some helpful information for you to recover your own data.
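
For reference, the whole sequence above can be condensed into one short script. This is a sketch, not something to run blindly: the member partitions (/dev/sdb2, /dev/sdc2, /dev/sdd2), the volume group vg0, and the logical volume lv0 are the values from this particular box and will differ on yours.

```shell
#!/bin/bash
# Sketch of the recovery steps above; adjust device and VG/LV names first.
set -e

parted -l | grep 'raid\|sd'                       # identify the member partitions
mdadm -A /dev/md0 /dev/sdb2 /dev/sdc2 /dev/sdd2   # reassemble the software array
cat /proc/mdstat                                  # confirm the array came up

pvdisplay /dev/md0                                # physical volume -> VG name
lvdisplay /dev/vg0                                # logical volumes in that VG

vgchange -a y vg0                                 # activate the logical volumes
lvscan                                            # should now show them ACTIVE

mount /dev/vg0/lv0 /mnt                           # mount and recover the data
```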

[Jun 08, 2021] Too many systemd Created slice messages !

Aug 04, 2015 | blog.dougco.com

Installing a recent Linux version seems to come with a default setting that floods /var/log/messages with annoying, repetitive messages like:

systemd: Created slice user-0.slice.
systemd: Starting Session 1013 of user root.
systemd: Started Session 1013 of user root.
systemd: Created slice user-0.slice.
systemd: Starting Session 1014 of user root.
systemd: Started Session 1014 of user root.

Here is how I got rid of these:

vi /etc/systemd/system.conf

And then uncomment LogLevel and make it: LogLevel=notice

# This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See systemd-system.conf(5) for details.

[Manager]
LogLevel=notice
#LogTarget=journal-or-kmsg

Then:

systemctl restart rsyslog
systemd-analyze set-log-level notice

[Jun 08, 2021] Basic scripting on Unix and Linux by Sandra Henry-Stocker

Mar 10, 2021 | www.networkworld.com

... ... ...

Different ways to loop

There are a number of ways to loop within a script. Use for when you want to loop a preset number of times. For example:

#!/bin/bash

for day in Sun Mon Tue Wed Thu Fri Sat
do
    echo $day
done

or

#!/bin/bash

for letter in {a..z}
do
   echo $letter
done

Use while when you want to loop as long as some condition exists or doesn't exist.

#!/bin/bash

n=1

while [ $n -le 4 ]
do
    echo $n
    ((n++))
done
Using case statements

Case statements allow your scripts to react differently depending on what values are being examined. In the script below, we use different commands to extract the contents of the file provided as an argument by identifying the file type.

#!/bin/bash

if [ $# -eq 0 ]; then
    echo -n "filename> "
    read filename
else
    filename=$1
fi

if [ ! -f "$filename" ]; then
    echo "No such file: $filename"
    exit
fi

case $filename in
    *.tar)      tar xf "$filename";;
    *.tar.bz2)  tar xjf "$filename";;
    *.tbz)      tar xjf "$filename";;
    *.tbz2)     tar xjf "$filename";;
    *.tgz)      tar xzf "$filename";;
    *.tar.gz)   tar xzf "$filename";;
    *.gz)       gunzip "$filename";;
    *.bz2)      bunzip2 "$filename";;
    *.zip)      unzip "$filename";;
    *.Z)        uncompress "$filename";;
    *.rar)      rar x "$filename" ;;
    *)          echo "No extract option for $filename"
esac

Note that this script also prompts for a file name if none was provided and then checks to make sure that the file specified actually exists. Only after that does it bother with the extraction.

Reacting to errors

You can detect and react to errors within scripts and, in doing so, avoid other errors. The trick is to check the exit codes after commands are run. If an exit code has a value other than zero, an error occurred. In this script, we look to see if Apache is running, suppressing the check's output with grep's -q option. We then check whether the exit code is non-zero, which would indicate that grep found no matching process. If the exit code is not zero, the script informs the user that Apache isn't running. (The [a]pache2 pattern prevents grep from matching its own entry in the ps listing.)

#!/bin/bash

ps -ef | grep -q '[a]pache2'
if [ $? -ne 0 ]; then
    echo "Apache is not running"
    exit
fi

[Jun 08, 2021] Bang commands: two potentially useful shortcuts for command line -- !! and !$ by Nikolai Bezroukov

softpanorama.org

These shortcuts belong to the class of commands known as bang commands. An Internet search for this term provides a wealth of additional information (which you probably do not need ;-), so I will concentrate on just the bang commands that are most common and potentially useful in the current command-line environment. Of them, !$ is probably the most useful and is definitely the most widely used. For many sysadmins it is the only bang command that is regularly used.

  1. !! is the bang command that re-executes the last command. It is used mainly in the shortcut sudo !! -- re-running the previous command with elevated privileges after it failed under your user account. For example:

    fgrep 'kernel' /var/log/messages # this fails due to insufficient privileges, as /var/log/messages is not readable by an ordinary user
    sudo !! # now we re-execute the command with elevated privileges
    
  2. !$ inserts into the current command line the last argument of the previous command. For example:

    mkdir -p /tmp/Bezroun/Workdir
    cd !$
    
    In this example the last command is equivalent to the command cd /tmp/Bezroun/Workdir. Please try this example. It is a pretty neat trick.

NOTE: You can also work with individual arguments using numbers.

For example:
cp !:2 !:3 # picks up the second and the third argument from the previous command
For this and other bang-command capabilities, copying fragments of the previous command line using the mouse is often more convenient, and you do not need to remember extra stuff. After all, bang commands were created before the mouse was available, and most of them reflect the realities and needs of that bygone era. Still, I have met sysadmins who use this and some additional capabilities, like !!:s^old^new (which replaces the string 'old' with the string 'new' and re-executes the previous command), even now.

The same is true for !* -- all arguments of the last command. I do not use them, and I had trouble writing this part of the post, correcting it several times to get it right.

Nowadays Ctrl+R activates reverse history search, which provides an easier way to navigate through your history than the capabilities that bang commands provided in the past.
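
A related trick that does work inside scripts (where history expansion is off by default) is the special bash variable $_, which holds the last argument of the previous command -- the scriptable cousin of the interactive !$ shortcut:

```shell
#!/bin/bash

# $_ expands to the final argument of the previously executed command,
# so this cd enters the directory that mkdir just created.
mkdir -p /tmp/Bezroun/Workdir
cd "$_"
pwd    # /tmp/Bezroun/Workdir
```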

[Jun 07, 2021] Sidewalk Robots are Now Delivering Food in Miami

Notable quotes:
"... Florida Sun-Sentinel ..."
"... [A spokesperson says later in the article "there is always a remote and in-field team looking for the robot."] ..."
"... the Sun-Sentinel reports that "In about six months, at least 16 restaurants came on board making nearly 70,000 deliveries... ..."
Jun 07, 2021 | hardware.slashdot.org

18-inch tall robots on four wheels zipping across city sidewalks "stopped people in their tracks as they whipped out their camera phones," reports the Florida Sun-Sentinel .

"The bots' mission: To deliver restaurant meals cheaply and efficiently, another leap in the way food comes to our doors and our tables." The semiautonomous vehicles were engineered by Kiwibot, a company started in 2017 to game-change the food delivery landscape...

In May, Kiwibot sent a 10-robot fleet to Miami as part of a nationwide pilot program funded by the Knight Foundation. The program is driven to understand how residents and consumers will interact with this type of technology, especially as the trend of robot servers grows around the country.

And though Broward County is of interest to Kiwibot, Miami-Dade County officials jumped on board, agreeing to launch robots around neighborhoods such as Brickell, downtown Miami and several others, in the next couple of weeks...

"Our program is completely focused on the residents of Miami-Dade County and the way they interact with this new technology. Whether it's interacting directly or just sharing the space with the delivery bots,"

said Carlos Cruz-Casas, with the county's Department of Transportation...

Remote supervisors use real-time GPS tracking to monitor the robots. Four cameras are placed on the front, back and sides of the vehicle, which the supervisors can view on a computer screen. [A spokesperson says later in the article "there is always a remote and in-field team looking for the robot."] If crossing the street is necessary, the robot will need a person nearby to ensure there is no harm to cars or pedestrians. The plan is to allow deliveries up to a mile and a half away so robots can make it to their destinations in 30 minutes or less.

Earlier Kiwi tested its sidewalk-travelling robots around the University of California at Berkeley, where at least one of its robots burst into flames . But the Sun-Sentinel reports that "In about six months, at least 16 restaurants came on board making nearly 70,000 deliveries...

"Kiwibot now offers their robotic delivery services in other markets such as Los Angeles and Santa Monica by working with the Shopify app to connect businesses that want to employ their robots." But while delivery fees are normally $3, this new Knight Foundation grant "is making it possible for Miami-Dade County restaurants to sign on for free."

A video shows the reactions the sidewalk robots are getting from pedestrians on a sidewalk, a dog on a leash, and at least one potential restaurant customer looking forward to no longer having to tip human food-delivery workers.

... ... ...

[Jun 06, 2021] Boston Dynamics Debuts Robot Aimed at Rising Warehouse Automation

Jun 06, 2021 | www.wsj.com

Customers wouldn't have to train the algorithm on their own boxes because the robot was made to recognize boxes of different sizes, textures and colors. For example, it can recognize both shrink-wrapped cases and cardboard boxes.

... Stretch is part of a growing market of warehouse robots made by companies such as 6 River Systems Inc., owned by e-commerce technology company Shopify Inc., Locus Robotics Corp. and Fetch Robotics Inc. "We're anticipating exponential growth (in the market) over the next five years," said Dwight Klappich, a supply chain research vice president and fellow at tech research firm Gartner Inc.

[Jun 06, 2021] McDonald's Tests AI-Powered Automated Drive-Thrus At 10 Chicago Restaurants

Jun 06, 2021 | www.zerohedge.com

As fast-food restaurants and small businesses struggle to find low-skilled workers to staff their kitchens and cash registers, America's biggest fast-food franchise is seizing the opportunity to field test a concept it has been working toward for some time: 10 McDonald's restaurants in Chicago are testing automated drive-thru ordering using new artificial intelligence software that converts voice orders for the computer.

McDonald's CEO Chris Kempczinski said Wednesday during an appearance at Alliance Bernstein's Strategic Decisions conference that the new voice-order technology is about 85% accurate and can take 80% of drive-thru orders. The company obtained the technology during its 2019 acquisition of Apprente.

Over the last decade, restaurants have been leaning more into technology to improve the customer experience and help save on labor. In 2019, under former CEO Steve Easterbrook, McDonald's went on a spending spree, snapping up restaurant tech. Now, it's commonplace to see order kiosks in most McDonald's locations. The company has also embraced Uber Eats for delivery. Elsewhere, burger-flipping robots have been introduced that can be successfully operated for just $3/hour ( though "Flippy" had a minor setback after its first day in use ).


The concept of automation is currently being used, in some places, as a gimmick. And with the dangers that COVID-19 can pose to staff (who can then turn around and sue), we suspect more "fully automated" bars will pop up across the US.

One upscale bistro in Portland has even employed Robo-waiters to help with contactless ordering and food delivery.

The introduction of automation and artificial intelligence into the industry will eventually result in entire restaurants controlled without humans - that could happen as early as the end of this decade. As for McDonald's, Kempczinski said the technology will likely take more than one or two years to implement.

"Now there's a big leap from going to 10 restaurants in Chicago to 14,000 restaurants across the US, with an infinite number of promo permutations, menu permutations, dialect permutations, weather -- and on and on and on, " he said.


McDonald's has also been looking into automating more of the kitchen, such as its fryers and grills, Kempczinski said. He added, however, that that technology likely won't roll out within the next five years, even though it's possible now.

"The level of investment that would be required, the cost of investment, we're nowhere near to what the breakeven would need to be from the labor cost standpoint to make that a good business decision for franchisees to do," Kempczinski said.

And because restaurant technology is moving so fast, Kempczinski said, McDonald's won't always be able to drive innovation itself or even keep up. The company's current strategy is to wait until there are opportunities that specifically work for it.

"If we do acquisitions, it will be for a short period of time, bring it in house, jumpstart it, turbo it and then spin it back out and find a partner that will work and scale it for us," he said.

On Friday, Americans will receive their first broad-based update on non-farm employment in the US since last month's report, which missed expectations by a wide margin, sparking discussion about whether all these "enhanced" monetary benefits from federal stimulus programs have kept workers from returning to the labor market.

[Jun 06, 2021] What is the difference between DNF and YUM

Jun 04, 2021 | www.2daygeek.com

The Yum Package Manager has been replaced by the DNF Package Manager because many long-standing issues in Yum remained unresolved.

These problems include poor performance, excessive memory usage, and slow dependency resolution.

DNF uses "libsolv" for dependency resolution, developed and maintained by SUSE to improve performance.

Yum, by contrast, was written mostly in Python and has its own way of handling dependency resolution.

Its API is not fully documented, and its extension system only allows Python plugins.

Yum is a front-end tool for rpm that manages dependencies and repositories and then uses RPM to install, download and remove packages.

Both are used to manage packages on the rpm-based system (such as Red Hat, CentOS and Fedora), including installation, upgrade, search and remove.

Why would they want to build a new tool instead of fixing existing problems?

Ales Kozumplik explained that fixing the issues was not technically feasible and that the yum team was not ready to accept the changes immediately.

Also, the big challenge is that there are 56K lines for yum, but only 29K lines for DNF, so there is no way to fix it, except the fork.

However yum was working fine and it was a default package management tool until RHEL/CentOS 7.


S.No DNF (Dandified YUM) YUM (Yellowdog Updater, Modified)
1 DNF uses "libsolv" for dependency resolution, developed and maintained by SUSE. YUM uses the public API for dependency resolution
2 API is fully documented API is not fully documented
3 It is written in C, C++, Python It is written only in Python
4 DNF is currently used in Fedora, Red Hat Enterprise Linux 8 (RHEL), CentOS 8, OEL 8 and Mageia 6/7. YUM is currently used in Red Hat Enterprise Linux 6/7 (RHEL), CentOS 6/7, OEL 6/7.
5 DNF supports various extensions Yum supports only Python-based extensions
6 The API is well documented so it's easy to create new features It is very difficult to create new features because the API is not properly documented.
7 The DNF uses less memory when synchronizing the metadata of the repositories. The YUM uses excessive memory when synchronizing the metadata of the repositories.
8 DNF uses a satisfiability algorithm to solve dependency resolution (It's using a dictionary approach to store and retrieve package and dependency information). Yum dependency resolution gets sluggish due to public API.
9 All performance is good in terms of memory usage and dependency resolution of repository metadata. Overall performance is poor in terms of many factors.
10 DNF Update: If a package contains irrelevant dependencies during a DNF update process, the package will not be updated. YUM will update a package without verifying this.
11 If the enabled repository does not respond, dnf will skip it and continue the transaction with the available repositories. If a repository is not available, YUM will stop immediately.
12 dnf update and dnf upgrade are equivalent. They are different in yum.
13 Dependencies are not upgraded when a package is installed. Yum offered an option for this behavior.
14 Clean-up Package Removal: When removing a package, dnf automatically removes any dependency packages not explicitly installed by the user. Yum did not do this.
15 Repo Cache Update Schedule: By default, ten minutes after the system boots, updates to configured repositories are checked by dnf hourly. This action is controlled by the system timer unit named "/usr/lib/systemd/system/dnf-makecache.timer". Yum does this too.
16 Kernel packages are not protected by dnf. Unlike with yum, you can delete all kernel packages, including the one that is running. Yum will not allow you to remove the running kernel
17 libsolv: for solving packages and reading repositories.

hawkey: hawkey, library providing simplified C and Python API to libsolv.

librepo: library providing C and Python (libcURL like) API for downloading linux repository metadata and packages.

libcomps: Libcomps is an alternative to the yum.comps library. It is written in pure C as a library, and there are bindings for python2 and python3.

Yum does not use separate libraries to perform this function.
18 DNF contains 29k lines of code Yum contains 56k lines of code
19 DNF was developed by Ales Kozumplik YUM was developed by Zdenek Pavlas, Jan Silhan and team members
Closing Notes

In this guide, we have shown you several differences between DNF and YUM.

If you have any questions or feedback, feel free to comment below.

[Jun 06, 2021] Stack Overflow sold to tech investor Prosus for $1.8 billion - Ars Technica

Jun 05, 2021 | arstechnica.com

Stack Overflow co-founder Joel Spolsky blogged about the purchase, and Stack Overflow CEO Prasanth Chandrasekar wrote a more official announcement . Both blog posts characterize the acquisition as having little to no impact on the day-to-day operation of Stack Overflow.

"How you use our site and our products will not change in the coming weeks or months, just as our company's goals and strategic priorities remain the same," Chandrasekar said.

Spolsky went into more detail, saying that Stack Overflow will "continue to operate independently, with the exact same team in place that has been operating it, according to the exact same plan and the exact same business practices. Don't expect to see major changes or awkward 'synergies'... the entire company is staying in place: we just have different owners now."

DaveSimmons (2021-06-03), replying to Roland the Gunslinger, who wrote:

Lot of people here seem to know an awful lot about a company they only just learnt about from this article, funny that.

We don't know Prosus but we have the experience of dozens of other acquisitions made with the statement that "nothing will change" ... until it always does.

At least it wasn't acquired by Google so they could turn it into a chat program and then shut it down.

[Jun 02, 2021] Linux and the Unix Philosophy by Gancarz, Mike

Jun 02, 2021 | www.amazon.com


Yong Zhi

Everyone is on a learning curve

4.0 out of 5 stars. Reviewed in the United States on February 3, 2009. The author was a programmer before, so in writing this book he drew on both his personal experience and his observations to depict the software world.

I think this is more of a practice-and-opinion book than a "Philosophy" book; however, I have to agree with him in most cases.

For example, here is Mike Gancarz's line of thinking:

1. Hard to get the s/w design right at the first place, no matter who.
2. So it's better to write a short specs without considering all factors first.
3. Build a prototype to test the assumptions
4. Use an iterative test/rewrite process until you get it right
5. Conclusion: Unix evolved from a prototype.

In case you are curious, here are the 9 tenets of Unix/Linux:

1. Small is beautiful.
2. Make each program do one thing well.
3. Build a prototype as soon as possible.
4. Choose portability over efficiency.
5. Store data in flat text files.
6. Use software leverage to your advantage.
7. Use shell scripts to increase leverage and portability.
8. Avoid captive user interfaces.
9. Make every program a filter.

Mike Gancarz tells a story like this when arguing that "Good programmers write good code; great programmers borrow good code".

"I recall a less-than-top-notch software engineer who couldn't program his way out of a paper bag. He had a knack, however, for knitting lots of little modules together. He hardly ever wrote any of them himself, though. He would just fish around in the system's directories and source code repositories all day long, sniffing for routines he could string together to make a complete program. Heaven forbid that he should have to write any code. Oddly enough, it wasn't long before management recognized him as an outstanding software engineer, someone who could deliver projects on time and within budget. Most of his peers never realized that he had difficulty writing even a rudimentary sort routine. Nevertheless, he became enormously successful by simply using whatever resources were available to him."

If this is not clear enough, Mike also drew analogies between Mick Jagger and Keith Richards and Elvis. The book is full of inspiring stories to reveal software engineers' tendencies and to correct their mindsets.

[Jun 02, 2021] The Poetterisation of GNU-Linux

10, 2013 | www.slated.org

I've found a disturbing trend in GNU/Linux, where largely unaccountable cliques of developers unilaterally decide to make fundamental changes to the way it works, based on highly subjective and arrogant assumptions, then forge ahead with little regard to those who actually use the software, much less the well-established principles upon which that OS was originally built. The long litany of examples includes Ubuntu Unity , Gnome Shell , KDE 4 , the /usr partition , SELinux , PolicyKit , Systemd , udev and PulseAudio , to name a few.

I hereby dub this phenomenon the " Poetterisation of GNU/Linux ".

The broken features, creeping bloat, and in particular the unhealthy tendency toward more monolithic, less modular code in certain Free Software projects, are a very serious problem, and I have a very serious opposition to it. I abandoned Windows to get away from that sort of nonsense; I didn't expect to have to deal with it in GNU/Linux.

Clearly this situation is untenable.

The motivation for these arbitrary changes mostly seems to be rooted in the misguided concept of "popularity", which makes no sense at all for something that's purely academic and non-commercial in nature. More users does not equal more developers. Indeed more developers does not even necessarily equal more or faster progress. What's needed is more of the right sort of developers, or at least more of the existing developers to adopt the right methods.

This is the problem with distros like Ubuntu, as the most archetypal example. Shuttleworth pushed hard to attract more users, with heavy marketing and by making Ubuntu easy at all costs, but in so doing all he did was amass a huge burden, in the form of a large influx of users who were, by and large, purely consumers, not contributors.

As a result, many of those now using GNU/Linux are really just typical Microsoft or Apple consumers, with all the baggage that entails. They're certainly not assets of any kind. They have expectations forged in a world of proprietary licensing and commercially-motivated, consumer-oriented, Hollywood-style indoctrination, not academia. This is clearly evidenced by their belligerently hostile attitudes toward the GPL, FSF, GNU and Stallman himself, along with their utter contempt for security and other well-established UNIX paradigms, and their unhealthy predilection for proprietary software, meaningless aesthetics and hype.

Reading the Ubuntu forums is an exercise in courting abject despair, as one witnesses an ignorant horde demand GNU/Linux be mutated into the bastard son of Windows and Mac OS X. And Shuttleworth, it seems, is only too happy to oblige, eagerly assisted by his counterparts on other distros and upstream projects, such as Lennart Poettering and Richard Hughes, the former of whom has somehow convinced every distro to mutate the Linux startup process into a hideous monolithic blob, and the latter of whom successfully managed to undermine 40 years of UNIX security in a single stroke, by obliterating the principle that unprivileged users should not be allowed to install software system-wide.

GNU/Linux does not need such people, indeed it needs to get rid of them as a matter of extreme urgency. This is especially true when those people are former (or even current) Windows programmers, because they not only bring with them their indoctrinated expectations, misguided ideologies and flawed methods, but worse still they actually implement them , thus destroying GNU/Linux from within.

Perhaps the most startling example of this was the Mono and Moonlight projects, which not only burdened GNU/Linux with all sorts of "IP" baggage, but instigated a sort of invasion of Microsoft "evangelists" and programmers, like a Trojan horse, who subsequently set about stuffing GNU/Linux with as much bloated, patent encumbered garbage as they could muster.

I was part of a group who campaigned relentlessly for years to oust these vermin and undermine support for Mono and Moonlight, and we were largely successful. Some have even suggested that my diatribes , articles and debates (with Miguel de Icaza and others) were instrumental in securing this victory, so clearly my efforts were not in vain.

Amassing a large user-base is a highly misguided aspiration for a purely academic field like Free Software. It really only makes sense if you're a commercial enterprise trying to make as much money as possible. The concept of "market share" is meaningless for something that's free (in the commercial sense).

Of course Canonical is also a commercial enterprise, but it has yet to break even, and all its income is derived through support contracts and affiliate deals, none of which depends on having a large number of Ubuntu users (the Ubuntu One service is cross-platform, for example).

What GNU/Linux needs is a small number of competent developers producing software to a high technical standard, who respect the well-established UNIX principles of security , efficiency , code correctness , logical semantics , structured programming , modularity , flexibility and engineering simplicity (a.k.a. the KISS Principle ), just as any scientist or engineer in the field of computer science and software engineering should .

What it doesn't need is people who shrug their shoulders and bleat " disks are cheap ".

[Jun 02, 2021] The Basics of the Unix Philosophy - programming

Jun 02, 2021 | www.reddit.com

Gotebe 3 years ago

Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features.

By now, and to be frank for the last 30 years too, this is complete and utter bollocks. Feature creep is everywhere; typical shell tools are chock-full of spurious additions, from formatting to "side" features, all half-assed and barely, if at all, consistent.

Nothing can resist feature creep.

not_perfect_yet 3 years ago

It's still a good idea. It's become very rare though. Many problems we have today are a result of not following it.

name_censored_ 3 years ago
· edited 3 years ago

By now, and to be frank in the last 30 years too, this is complete and utter bollocks.

There is not one single other idea in computing that is as unbastardised as the unix philosophy - given that it's been around fifty years. Heck, Microsoft only just developed PowerShell - and if that's not Microsoft's take on the Unix philosophy, I don't know what is.

In that same time, we've vacillated between thick and thin computing (mainframes, thin clients, PCs, cloud). We've rebelled against at least four major schools of program design thought (structured, procedural, symbolic, dynamic). We've had three different database revolutions (RDBMS, NoSQL, NewSQL). We've gone from grassroots movements to corporate dominance on countless occasions (notably - the internet, IBM PCs/Wintel, Linux/FOSS, video gaming). In public perception, we've run the gamut from clerks ('60s-'70s) to boffins ('80s) to hackers ('90s) to professionals ('00s post-dotcom) to entrepreneurs/hipsters/bros ('10s "startup culture").

It's a small miracle that iproute2 only has formatting options and grep only has --color . If they feature-crept anywhere near the same pace as the rest of the computing world, they would probably be a RESTful SaaS microservice with ML-powered autosuggestions.

badsectoracula 3 years ago

This is because adding a new feature is actually easier than trying to figure out how to do it the Unix way - often you already have the data structures in memory and the functions to manipulate them at hand, so adding a --frob parameter that does something special with them feels trivial.

GNU and their stance of ignoring the Unix philosophy (AFAIK Stallman said at some point he didn't care about it), while becoming the most widely available set of tools for Unix systems, didn't help either.



ILikeBumblebees 3 years ago
· edited 3 years ago

Feature creep is everywhere

No, it certainly isn't. There are tons of well-designed, single-purpose tools available for all sorts of purposes. If you live in the world of heavy, bloated GUI apps, well, that's your prerogative, and I don't begrudge you it, but just because you're not aware of alternatives doesn't mean they don't exist.

typical shell tools are choke-full of spurious additions,

What does "feature creep" even mean with respect to shell tools? If they have lots of features, but each function is well-defined and invoked separately, and still conforms to conventional syntax, uses stdio in the expected way, etc., does that make it un-Unixy? Is BusyBox bloatware because it has lots of discrete shell tools bundled into a single binary?

nirreskeya 3 years ago

Zawinski's Law :)

waivek 3 years ago

The (anti) foreword by Dennis Ritchie -

I have succumbed to the temptation you offered in your preface: I do write you off as envious malcontents and romantic keepers of memories. The systems you remember so fondly (TOPS-20, ITS, Multics, Lisp Machine, Cedar/Mesa, the Dorado) are not just out to pasture, they are fertilizing it from below.

Your judgments are not keen, they are intoxicated by metaphor. In the Preface you suffer first from heat, lice, and malnourishment, then become prisoners in a Gulag. In Chapter 1 you are in turn infected by a virus, racked by drug addiction, and addled by puffiness of the genome.

Yet your prison without coherent design continues to imprison you. How can this be, if it has no strong places? The rational prisoner exploits the weak places, creates order from chaos: instead, collectives like the FSF vindicate their jailers by building cells almost compatible with the existing ones, albeit with more features. The journalist with three undergraduate degrees from MIT, the researcher at Microsoft, and the senior scientist at Apple might volunteer a few words about the regulations of the prisons to which they have been transferred.

Your sense of the possible is in no sense pure: sometimes you want the same thing you have, but wish you had done it yourselves; other times you want something different, but can't seem to get people to use it; sometimes one wonders why you just don't shut up and tell people to buy a PC with Windows or a Mac. No Gulag or lice, just a future whose intellectual tone and interaction style is set by Sonic the Hedgehog. You claim to seek progress, but you succeed mainly in whining.

Here is my metaphor: your book is a pudding stuffed with apposite observations, many well-conceived. Like excrement, it contains enough undigested nuggets of nutrition to sustain life for some. But it is not a tasty pie: it reeks too much of contempt and of envy.

Bon appetit!

[Jun 01, 2021] 'ls' Command by Last Modified Date and Time
  • 15 Interview Questions on Linux "ls" Command - Part 1
  • 10 Useful 'ls' Command Interview Questions - Part 2
  • 7 Quirky ls Command Tricks

    hardware.slashdot.org

    To list the contents of a directory with timestamps in a particular style, use either of the two methods below.

    # ls -l --time-style=[STYLE]               (Method A)
    

    Note: The --time-style switch must be used together with the -l switch, else it won't serve the purpose.

    # ls --full-time                           (Method B)
    

    Replace [STYLE] with any of the options below.

    full-iso
    long-iso
    iso
    locale
    +%H:%M:%S:%D
    

    Note: In the above line, H (hour), M (minute), S (second), and D (date) can be used in any order.


    Moreover, you can choose only the relevant fields rather than all of them. E.g., ls -l --time-style=+%H will show only the hour.

    ls -l --time-style=+%H:%M:%D will show the hour, minute and date.

    # ls -l --time-style=full-iso
    
    ls Command Full Time Style
    # ls -l --time-style=long-iso
    
    Long Time Style Listing
    # ls -l --time-style=iso
    
    Time Style Listing
    # ls -l --time-style=locale
    
    Locale Time Style Listing
    # ls -l --time-style=+%H:%M:%S:%D
    
    Date and Time Style Listing
    # ls --full-time
    
    Full Style Time Listing
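    As a minimal, self-contained sketch of the custom +FORMAT style (the scratch directory and file name here are invented for illustration; --time-style is a GNU coreutils extension):

```shell
# Create a scratch file and list it with a custom timestamp format.
# Assumes GNU ls; the +FORMAT string uses strftime-style fields.
dir=$(mktemp -d)
touch "$dir/demo.txt"

ls -l --time-style=+%H:%M "$dir"    # timestamps appear as HH:MM only
```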
    2. Output the contents of a directory in various formats such as separated by commas, horizontal, long, vertical, across, etc.

    The contents of a directory can be listed with the ls command in various formats, as suggested below.

    1. across
    2. comma
    3. horizontal
    4. long
    5. single-column
    6. verbose
    7. vertical
    # ls --format=across
    # ls --format=comma
    # ls --format=horizontal
    # ls --format=long
    # ls --format=single-column
    # ls --format=verbose
    # ls --format=vertical
    
    Listing Formats of ls Command
    3. Use the ls command to append indicators like (/=@|) to the names in the directory listing.

    The option -p with the ls command will serve the purpose. It will append one of the above indicators, based upon the type of file.

    # ls -p
    
    Append Indicators to Content
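    A tiny sketch of the -p behaviour, using a throwaway directory (names invented for the demo):

```shell
# -p appends a type indicator: '/' for directories, nothing for plain files.
dir=$(mktemp -d)
mkdir "$dir/subdir"
touch "$dir/file.txt"

ls -p "$dir"    # subdir is shown as "subdir/", file.txt stays as-is
```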
    4. Sort the contents of directory on the basis of extension, size, time and version.

    We can use --sort=extension to sort the output by extension, --sort=size to sort by size, --sort=time (or -t) to sort by time, and --sort=version (or -v) to sort by version.

    Also, we can use the option --sort=none , which will output the listing in directory order, without any actual sorting.

    # ls --sort=extension
    # ls --sort=size
    # ls --sort=time
    # ls --sort=version
    # ls --sort=none
    
    Sort Listing of Content by Options
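    To see --sort=size in action, here is a quick sketch with two throwaway files (names and sizes are arbitrary):

```shell
# --sort=size lists entries largest-first.
dir=$(mktemp -d)
printf 'x'          > "$dir/small"    # 1 byte
printf 'xxxxxxxxxx' > "$dir/big"      # 10 bytes

ls --sort=size "$dir"    # big is listed before small
```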
    5. Print numeric UID and GID for all the contents of a directory using the ls command.

    The above scenario can be achieved using the -n (numeric-uid-gid) flag along with the ls command.

    # ls -n
    
    Print Listing of Content by UID and GID
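    A quick way to convince yourself what -n prints (the scratch file is invented for the demo): the third and fourth columns of the long listing become the numeric UID and GID, matching id -u and id -g.

```shell
# -n is like -l, but shows numeric UID/GID instead of user/group names.
dir=$(mktemp -d)
touch "$dir/demo"

ls -n "$dir/demo"    # column 3 is the numeric UID of the file's owner
id -u                # prints your own UID for comparison
```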
    6. Print the contents of a directory on standard output in more columns than specified by default.

    Well, the ls command outputs the contents of a directory according to the width of the screen automatically.

    We can, however, manually assign a value for the screen width and control the number of columns appearing. It can be done using the switch " --width ".

    # ls --width 80
    # ls --width 100
    # ls --width 150
    
    List Content Based on Window Sizes

    Note : You can experiment with which value to pass to the --width flag.

    7. Set a manual tab size for the contents of the directory listed by the ls command, instead of the default 8.
    # ls --tabsize=[value]
    
    List Content by Table Size

    Note : Specify a numeric value for [value].


    [Jun 01, 2021] How To Waste Hundreds of Millions on Your IT Transformation

    May 30, 2021 | zwischenzugs.com

    Declare a major technology transformation!

    Why? Wall Street will love it. They love macho 'transformations'. By sheer executive fiat Things Will Change, for sure.

    Throw in 'technology' and it makes Wall Street puff up that little bit more.

    The fact that virtually no analyst or serious buyer of stocks has the first idea of what's involved in such a transformation is irrelevant. They will lap it up.

    This is how capitalism works, and it indisputably results in the most efficient allocation of resources possible.

    A Dash of Layoffs, a Sprinkling of Talent

    These analysts and buyers will assume there will be reductions to employee headcount sooner rather than later, which of course will make the transformation go faster and beat a quick path to profit.

    Hires of top 'industry experts' who know the magic needed to get all this done, and who will be able to pass on their wisdom without friction to the eager staff that remain, will make this a sure thing.

    In the end, of course, you don't want to come out of this looking too bad, do you?

    So how best to minimise any fallout from this endeavour?

    Leadership

    The first thing you should do is sort out the leadership of this transformation.

    Hire in a senior executive specifically for the purpose of making this transformation happen.

    Well, taking responsibility for it, at least. This will be useful later when you need a scapegoat for failure.

    Ideally it will be someone with a long resume of similar transformational senior roles at different global enterprises.

    Don't be concerned with whether those previous roles actually resulted in any lasting change or business success; that's not the point. The point is that they have a lot of experience with this kind of role, and will know how to be the patsy. Or you can get someone that has Dunning-Kruger syndrome so they can truly inhabit the role.

    The kind of leader you want.

    Make sure this executive is adept at managing his (also hired-in) subordinates in a divide-and-conquer way, so their aims are never aligned, or multiply-aligned in diverse directions in a 4-dimensional ball of wool.

    Incentivise senior leadership to grow their teams rather than fulfil the overall goal of the program (ideally, the overall goal will never be clearly stated by anyone; see Strategy, below).

    Change your CIO halfway through the transformation. The resulting confusion and political changes of direction will ensure millions are lost as both teams and leadership chop and change positions.

    With a bit of luck, there'll be so little direction that the core business can be unaffected.

    Strategy

    This second one is easy enough. Don't have a strategy. Then you can chop and change plans as you go without any kind of overall direction, ensuring (along with the leadership anarchy above) that nothing will ever get done.

    Unfortunately, the world is not sympathetic to this reality, so you will have to pretend to have a strategy, at the very least. Make the core PowerPoint really dense and opaque. Include as many buzzwords as possible; if enough are included people will assume you know what you are doing. It helps if the buzzwords directly contradict the content of the strategy documents.

    It's also essential that the strategy makes no mention of the 'customer', or whatever provides Vandelay's revenue, or why the changes proposed make any difference to the business at all. That will help nicely reduce any sense of urgency to the whole process.

    Try to make any stated strategy:

    Whatever strategy you pretend to pursue, be sure to make it 'Go big, go early', so you can waste as much money as fast as possible. Don't waste precious time learning about how change can get done in your context. Remember, this needs to fail once you're gone.

    Technology Architecture

    First, set up a completely greenfield "Transformation Team' separate from your existing staff. Then, task them with solving every possible problem in your business at once. Throw in some that don't exist yet too, if you like! Force them to coordinate tightly with every other team and fulfil all their wishes.

    Ensure your security and control functions are separated from (and, ideally, in some kind of war with) a Transformation Team that is siloed as far as possible from the mainstream of the business. This will create the perfect environment for expensive white elephants to be built that no-one will use.

    All this taken together will ensure that the Transformation Team's plans have as little chance of getting to production as possible. Don't give security and control functions any responsibility or reward for delivery, just reward them for blocking change.

    Ignore the 'decagon of despair'. These things are nothing to do with Transformation, they are just blockers people like to talk about. The official line is that hiring Talent (see below) will take care of those. It's easy to exploit an organisation's insecurity about its capabilities to downplay the importance of these.

    The decagon of despair.

    [May 30, 2021] Boston Dynamics Debuts Robot Aimed at Rising Warehouse Automation by Sara Castellanos

    May 30, 2021 | www.wsj.com

    Boston Dynamics, a robotics company known for its four-legged robot "dog," this week announced a new product, a computer-vision enabled mobile warehouse robot named "Stretch."

    Developed in response to growing demand for automation in warehouses, the robot can reach up to 10 feet inside of a truck to pick up and unload boxes up to 50 pounds each. The robot has a mobile base that can maneuver in any direction and navigate obstacles and ramps, as well as a robotic arm and a gripper. The company estimates that there are more than 500 billion boxes annually that get shipped around the world, and many of those are currently moved manually.

    "It's a pretty arduous job, so the idea with Stretch is that it does the manual labor part of that job," said Robert Playter, chief executive of the Waltham, Mass.-based company.

    The pandemic has accelerated [automation of] e-commerce and logistics operations even more over the past year, he said.

    ... ... ...

    ... the robot was made to recognize boxes of different sizes, textures and colors. For example, it can recognize both shrink-wrapped cases and cardboard boxes.

    Eventually, Stretch could move through an aisle of a warehouse, picking up different products and placing them on a pallet, Mr. Playter said.

    ... ... ...

    [May 28, 2021] Linux lsof Command Tutorial for Beginners (15 Examples) by Himanshu Arora

    Images removed. See the original for the full text
    May 23, 2021 | www.howtoforge.com

    1. How to list all open files

    To list all open files, run the lsof command without any arguments:

    lsof
    

    For example, here is a screengrab of part of the output the above command produced on my system:

    The first column represents the process while the last column contains the file name. For details on all the columns, head to the command's man page .

    2. How to list files opened by processes belonging to a specific user

    The tool also allows you to list files opened by processes belonging to a specific user. This feature can be accessed by using the -u command-line option.

    lsof -u [user-name]

    For example:

    lsof -u administrator
    3. How to list files based on their Internet address

    The tool lets you list files based on their Internet address. This can be done using the -i command-line option. For example, if you want, you can have IPv4 and IPv6 files displayed separately. For IPv4, run the following command:

    lsof -i 4

    ...

    4. How to list all files by application name

    The -c command-line option allows you to get all files opened by program name.

    $ lsof -c apache

    You do not have to use the full program name as all programs that start with the word 'apache' are shown. So in our case, it will list all processes of the 'apache2' application.

    The -c option is basically just a shortcut for piping lsof through grep:

    $ lsof | grep apache
    
    5. How to list files specific to a process

    The tool also lets you display opened files based on process identification (PID) numbers. This can be done by using the -p command-line option.

    lsof -p [PID]
    

    For example:

    lsof -p 856

    Moving on, you can also exclude specific PIDs in the output by adding the ^ symbol before them. To exclude a specific PID, you can run the following command:

    lsof -p [^PID]

    For example:

    lsof -p ^1

    As you can see in the above screenshot, the process with id 1 is excluded from the list.

    6. How to list IDs of processes that have opened a particular file

    The tool allows you to list IDs of processes that have opened a particular file. This can be done by using the -t command line option.

    $ lsof -t [file-name]

    For example:

    $ lsof -t /usr/lib/x86_64-linux-gnu/libpcre2-8.so.0.9.0
    7. How to list all open files in a directory

    If you want, you can also make lsof search for all open instances of a directory (including all the files and directories it contains). This feature can be accessed using the +D command-line option.

    $ lsof +D [directory-path]

    For example:

    $ lsof +D /usr/lib/locale
    8. How to list all Internet and x.25 (HP-UX) network files

    This is possible using the -i command-line option we described earlier; you just have to use it without any arguments.

    $ lsof -i
    9. Find out which program is using a port

    The -i switch of the command allows you to find a process or application which listens to a specific port number. In the example below, I checked which program is using port 80.

    $ lsof -i :80
    

    Instead of the port number, you can use the service name as listed in the /etc/services file. Example to check which app listens on the HTTPS (443) port:

    $ lsof -i :https
    

    ... ... ...

    The above examples will check both TCP and UDP. If you like to check for TCP or UDP only, prepend the word 'tcp' or 'udp'. For example, which application is using port 25 TCP:

    $ lsof -i tcp:25

    or which app uses UDP port 53:

    $ lsof -i udp:53
    10. How to list open files based on port range

    The utility also allows you to list open files based on a specific port or port range. For example, to display open files for port 1-1024, use the following command:

    $ lsof -i :1-1024
    11. How to list open files based on the type of connection (TCP or UDP)

    The tool allows you to list files based on the type of connection. For example, for UDP specific files, use the following command:

    $ lsof -i udp

    Similarly, you can make lsof display TCP-specific files.

    12. How to make lsof list Parent PID of processes

    There's also an option that forces lsof to list the Parent Process IDentification (PPID) number in the output. The option in question is -R .

    $ lsof -R
    

    To get PPID info for a specific PID, you can run the following command:

    $ lsof -p [PID] -R
    

    For example:

    $ lsof -p 3 -R
    13. How to find network activity by user

    By using a combination of the -i and -u command-line options, we can search for all network connections of a Linux user. This can be helpful if you inspect a system that might have been hacked. In this example, we check all network activity of the user www-data:

    $ lsof -a -i -u www-data
    14. List all memory-mapped files

    This command lists all memory-mapped files on Linux.

    $ lsof -d mem
    15. List all NFS files

    The -N option shows you a list of all NFS (Network File System) files.

    $ lsof -N
    
    Conclusion

    Although lsof offers a plethora of options, the ones we've discussed here should be enough to get you started. Once you're done practicing with these, head to the tool's man page to learn more about it. Oh, and in case you have any doubts and queries, drop in a comment below.

    Himanshu Arora has been working on Linux since 2007. He carries professional experience in system level programming, networking protocols, and command line. In addition to HowtoForge, Himanshu's work has also been featured in some of world's other leading publications including Computerworld, IBM DeveloperWorks, and Linux Journal.

    By: ShabbyCat at: 2020-05-31 23:47:44 Reply

    Great article! Another useful one is "lsof -i tcp:PORT_NUMBER" to list processes happening on a specific port, useful for node.js when you need to kill a process.

    Ex: lsof -i tcp:3000

    then say you want to kill the process 5393 (PID) running on port 3000, you would run "kill -9 5393"

    [May 28, 2021] Top Hex Editors for Linux

    Images removed. See the original for the full text
    May 23, 2021 | www.tecmint.com

    Xxd Hex Editor

    Most (if not every) Linux distributions come with an editor that allows you to perform hexadecimal and binary manipulation. One of those tools is the command-line tool xxd , which is most commonly used to make a hex dump of a given file or standard input. It can also convert a hex dump back to its original binary form.

    Hexedit Hex Editor

    Hexedit is another hexadecimal command-line editor that might already be preinstalled on your OS.

    Hexedit shows both the hexadecimal and ASCII view of the file at the same time.

    [May 28, 2021] 10 Amazing and Mysterious Uses of (!) Symbol or Operator in Linux Commands

    Images removed. See the original for the full text.
    Notable quotes:
    "... You might also mention !? It finds the last command with its' string argument. For example, if" ..."
    "... I didn't see a mention of historical context in the article, so I'll give some here in the comments. This form of history command substitution originated with the C Shell (csh), created by Bill Joy for the BSD flavor of UNIX back in the late 70's. It was later carried into tcsh, and bash (Bourne-Again SHell). ..."
    linuxiac.com
    The '!' symbol or operator in Linux can be used as the logical negation operator, as well as to fetch commands from history (with tweaks) or to run a previously run command with modifications. All the commands below have been checked explicitly in the bash shell. Though I have not checked, the majority of these will not run in other shells. Here we go into the amazing and mysterious uses of the '!' symbol or operator in Linux commands.

    4. How to handle two or more arguments using (!)

    Let's say I created a text file 1.txt on the Desktop.

    $ touch /home/avi/Desktop/1.txt
    

    and then copy it to " /home/avi/Downloads " using complete path on either side with cp command.

    $ cp /home/avi/Desktop/1.txt /home/avi/downloads
    

    Now we have passed two arguments to the cp command. The first is " /home/avi/Desktop/1.txt " and the second is " /home/avi/Downloads "; let's handle them separately, just execute echo [arguments] to print each argument on its own.

    $ echo "1st Argument is : !^"
    $ echo "2nd Argument is : !cp:2"
    

    Note: the 1st argument can be printed as "!^" , and the rest of the arguments can be printed by executing "![Name_of_Command]:[Number_of_argument]" .

    In the above example, the first command was " cp " and the 2nd argument needed to be printed, hence "!cp:2" . If some command, say xyz, is run with 5 arguments and you need the 4th argument, you may use "!xyz:4" , and use it as you like. All the arguments can be accessed with "!*" .

    5. Execute last command on the basis of keywords

    We can execute the last executed command on the basis of keywords. We can understand it as follows:

    $ ls /home > /dev/null						[Command 1]
    $ ls -l /home/avi/Desktop > /dev/null		                [Command 2]	
    $ ls -la /home/avi/Downloads > /dev/null	                [Command 3]
    $ ls -lA /usr/bin > /dev/null				        [Command 4]
    

    Here we have used the same command (ls) but with different switches and for different folders. Moreover, we have sent the output of each command to " /dev/null " as we are not going to deal with the output; this also keeps the console clean.

    Now Execute last run command on the basis of keywords.

    $ ! ls					[Command 1]
    $ ! ls -l				[Command 2]	
    $ ! ls -la				[Command 3]
    $ ! ls -lA				[Command 4]
    

    Check the output and you will be astonished that you are re-running already executed commands just by their keywords.

    Run Commands Based on Keywords

    6. The power of !! Operator

    You can run/alter your last run command using (!!) . It will call the last run command with alter/tweak in the current command. Lets show you the scenario

    Last day I run a one-liner script to get my private IP so I run,

    $ ip addr show | grep inet | grep -v 'inet6'| grep -v '127.0.0.1' | awk '{print $2}' | cut -f1 -d/

    Then suddenly I figured out that I need to redirect the output of the above script to a file ip.txt , so what should I do? Should I retype the whole command again and redirect the output to a file? Well an easy solution is to use UP navigation key and add '> ip.txt' to redirect the output to a file as.

    $ ip addr show | grep inet | grep -v 'inet6'| grep -v '127.0.0.1' | awk '{print $2}' | cut -f1 -d/ > ip.txt

    Thanks to the lifesaver UP navigation key here. Now consider the below condition, the next time I run the below one-liner script.

    $ ifconfig | grep "inet addr:" | awk '{print $2}' | grep -v '127.0.0.1' | cut -f2 -d:

    As soon as I ran the script, the bash prompt returned an error with the message "bash: ifconfig: command not found" . It was not difficult for me to guess that I had run this command as a normal user when it should be run as root.

    So what's the solution? It is difficult to log in as root and then type the whole command again! Also, the UP navigation key from the last example doesn't come to the rescue here. So? We need to call "!!" (without quotes), which will call the last command for that user.

    $ su -c "!!" root
    

    Here su switches to the user root, -c runs the specific command as that user, and the most important part, !! , will be replaced by the last run command, which is substituted here. Yes, you need to provide the root password.

    I make use of !! mostly in following scenarios,

    1. When I run the apt-get command as a normal user, I usually get an error saying I don't have permission to execute it.

    $ apt-get upgrade && apt-get dist-upgrade
    

    Oops, an error. Don't worry, execute the below command to get it to succeed.

    $ su -c !!
    

    Same way I do for,

    $ service apache2 start
    or
    $ /etc/init.d/apache2 start
    or
    $ systemctl start apache2
    

    Oops, the user is not authorized to carry out such a task, so I run:

    $ su -c 'service apache2 start'
    or
    $ su -c '/etc/init.d/apache2 start'
    or
    $ su -c 'systemctl start apache2'
    
    7. Run a command that affects all the file except ![FILE_NAME]

    The ! (logical NOT) can be used to run a command on all files or extensions except the one that follows '!'.

    A. Remove all files from a directory except the one named 2.txt.

    $ rm !(2.txt)
    

    B. Remove all files from the folder except those with the "pdf" extension.

    $ rm !(*.pdf)
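    Note that !( ) is bash's extended globbing, which only works when the extglob shell option is enabled. A quick sketch in a scratch directory (the file names are made up):

```shell
shopt -s extglob            # !(pattern) requires the extglob option
mkdir -p demo && cd demo
touch a.txt b.pdf c.log
rm !(*.pdf)                 # removes a.txt and c.log, keeps b.pdf
ls                          # b.pdf
```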
    

    ... ... ...

    [May 28, 2021] How to synchronize time with NTP using systemd-timesyncd daemon

    May 16, 2021 | linuxiac.com

    The majority of Linux distributions have adopted systemd, and with it comes the systemd-timesyncd daemon. That means you have an NTP client already preinstalled, and there is no need to run the full-fledged ntpd daemon anymore. The built-in systemd-timesyncd can do the basic time synchronization job just fine.

    To check the current time status and configuration via timedatectl and timesyncd, run the following command.

    timedatectl status
                   Local time: Thu 2021-05-13 15:44:11 UTC
               Universal time: Thu 2021-05-13 15:44:11 UTC
                     RTC time: Thu 2021-05-13 15:44:10
                    Time zone: Etc/UTC (UTC, +0000)
    System clock synchronized: yes
                  NTP service: active
              RTC in local TZ: no
    

    If you see NTP service: active in the output, then your computer clock is automatically periodically adjusted through NTP.

    If you see NTP service: inactive , run the following command to enable NTP time synchronization.

    timedatectl set-ntp true
    

    That's all you have to do. Once that's done, everything should be in place and time should be kept correctly.

    In addition, timesyncd itself is a normal systemd service, so you can also check its status in more detail:

    systemctl status systemd-timesyncd
    
    systemd-timesyncd.service - Network Time Synchronization
          Loaded: loaded (/usr/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled) 
          Active: active (running) since Thu 2021-05-13 18:55:18 EEST; 3min 23s ago
          ...
    

    If it is disabled, you can start the systemd-timesyncd service and enable it at boot like this:

    systemctl start systemd-timesyncd
    systemctl enable systemd-timesyncd
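    By default, systemd-timesyncd uses NTP servers compiled in by your distribution. If you want to use specific servers, they can be set in the [Time] section of /etc/systemd/timesyncd.conf (the pool servers below are only examples), followed by a restart of the service to apply the change:

```ini
# /etc/systemd/timesyncd.conf -- server names here are examples
[Time]
NTP=0.pool.ntp.org 1.pool.ntp.org
FallbackNTP=2.pool.ntp.org
```

    Then run systemctl restart systemd-timesyncd to apply the new configuration.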
    
    How to change timezone

    Before changing your time zone, start by using timedatectl to find out the currently set time zone.

    timedatectl
    
                   Local time: Thu 2021-05-13 16:59:32 UTC
               Universal time: Thu 2021-05-13 16:59:32 UTC
                     RTC time: Thu 2021-05-13 16:59:31
                    Time zone: Etc/UTC (UTC, +0000)
    System clock synchronized: yes
                  NTP service: inactive
              RTC in local TZ: no

    Now let's list all the available time zones, so you know the exact name of the time zone you'll use on your system.

    timedatectl list-timezones
    

    The list of time zones is quite large. You do need to know the official time-zone name for your location. Say you want to change the time zone to New York.

    timedatectl set-timezone America/New_York
    

    This command creates a symbolic link for the time zone you choose from /usr/share/zoneinfo/ to /etc/localtime .

    Alternatively, you can skip the command shown above and create this symbolic link manually to achieve the same result.

    ln -sf /usr/share/zoneinfo/America/New_York /etc/localtime   # -f replaces the existing file
    

    [May 28, 2021] LFCA- Learn Cloud Costs and Budgeting: Part 16

    May 11, 2021 | www.tecmint.com

    Cloud pricing can be quite obscure, especially for users who have not spent significant time understanding the costs that each cloud service incurs.

    Pricing models from major cloud providers such as AWS and Microsoft Azure are not as straightforward as on-premise costs. You simply won't get a clear mapping of exactly what you will pay for the infrastructure.

    [May 28, 2021] Microsoft Launches personal version of Teams with free all-day video calling

    Highly recommended!
    May 16, 2021 | slashdot.org
    (theverge.com) 59

    Posted by msmash on Monday May 17, 2021 @12:02PM from the how-about-that dept. Microsoft is launching the personal version of Microsoft Teams today. After previewing the service nearly a year ago, Microsoft Teams is now available for free personal use amongst friends and families . From a report:

    The service itself is almost identical to the Microsoft Teams that businesses use, and it will allow people to chat, video call, and share calendars, locations, and files easily. Microsoft is also continuing to offer everyone free 24-hour video calls that it introduced in the preview version in November.

    You'll be able to meet up with up to 300 people in video calls that can last for 24 hours. Microsoft will eventually enforce limits of 60 minutes for group calls of up to 100 people after the pandemic, but keep 24 hours for 1:1 calls.

    While the preview initially launched on iOS and Android, Microsoft Teams for personal use now works across the web, mobile, and desktop apps. Microsoft is also allowing Teams personal users to enable its Together mode -- a feature that uses AI to segment your face and shoulders and place you together with other people in a virtual space. Skype got this same feature back in December.

    [May 28, 2021] How to Remove Lines from a File Using Sed Command

    Images removed. See the original for the full text
    May 25, 2021 | www.linuxshelltips.com

    If you have to delete the fourth line from the file then you have to substitute N=4 .

    $ sed '4d' testfile.txt
    
    Delete Line from File
    How to Delete First and Last Line from a File

    You can delete the first line from a file using the same syntax as described in the previous example. You have to put N=1 which will remove the first line.

    $ sed '1d' testfile.txt
    

    To delete the last line from a file, use the below command with the ($) sign, which denotes the last line of a file.

    $ sed '$d' testfile.txt
    
    Delete First and Last Lines from File
    How to Delete Range of Lines from a File

    You can delete a range of lines from a file. Let's say you want to delete lines from 3 to 5, you can use the below syntax.

    $ sed 'M,Nd' testfile.txt
    

    To actually delete, use the following command to do it.

    $ sed '3,5d' testfile.txt
    
    Delete Range of Lines from-File

    You can use ! symbol to negate the delete operation. This will delete all lines except the given range(3-5).

    $ sed '3,5!d' testfile.txt
    
    Negate Operation
    How to Delete Blank Lines from a File

    To delete all blank lines from a file, run the following command. An important point to note: with this command, lines that contain only spaces will not be deleted. I have added empty lines and lines with only spaces to my test file.

    $ cat testfile.txt
    
    First line
    
    second line
    Third line
    
    Fourth line
    Fifth line
    
    Sixth line
    SIXTH LINE
    
    $ sed '/^$/d' testfile.txt
    
    Lines with Spaces Not Removed

    From the above image, you can see that empty lines are deleted but lines containing spaces are not. To delete those as well, run the following command.

    $ sed '/^[[:space:]]*$/d' testfile.txt
    
    Lines with Spaces Removed
    How to Delete Lines Starting with Words in a File

    To delete lines that start with a certain word, run the following command, where the ^ symbol represents the start of the line, followed by the actual word.

    $ sed '/^First/d' testfile.txt
    

    To delete lines that end with a certain word, run the following command; the word followed by the $ symbol (which denotes the end of the line) selects the lines to delete.

    $ sed '/LINE$/d' testfile.txt
    
    Delete Line Start with Words in File
    How to Make Changes Directly into a File

    To make changes directly in the file using sed, pass the -i flag, which edits the file in place.

    $ sed -i '/^[[:space:]]*$/d' testfile.txt
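    In-place editing overwrites the original file, so it is often safer to keep a backup. With GNU sed, giving -i a suffix saves the original first (the file contents below are made up for the demonstration):

```shell
# create a small demo file
printf 'First line\n\n   \nSecond line\n' > testfile.txt

# delete blank and whitespace-only lines in place,
# keeping the original as testfile.txt.bak (GNU sed syntax)
sed -i.bak '/^[[:space:]]*$/d' testfile.txt
cat testfile.txt
```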
    

    We have come to the end of the article. The sed command plays a major part when you are manipulating files. When combined with other Linux utilities like awk and grep, you can do even more with sed.

    [May 28, 2021] Cryptocurrency Miners Are Now Abusing the Free Tiers of Cloud Platforms

    May 26, 2021 | news.slashdot.org

    (therecord.media) 73

    Posted by EditorDavid on Saturday May 22, 2021 @10:34AM from the cloud-kingdoms dept. An anonymous reader shares a report:

    Over the course of the last few months, some crypto-mining gangs have switched their modus operandi from attacking and hijacking unpatched servers to abusing the free tiers of cloud computing platforms .

    Gangs have been operating by registering accounts on selected platforms, signing up for a free tier, and running a cryptocurrency mining app on the provider's free tier infrastructure.

    After trial periods or free credits reach their limits, the groups register a new account and start from the first step, keeping the provider's servers at their upper usage limit and slowing down their normal operations...

    The list of services that have been abused this way includes the likes of GitHub, GitLab, Microsoft Azure, TravisCI, LayerCI, CircleCI, Render, CloudBees CodeShip, Sourcehut, and Okteto.

    GitLab and Sourcehut have published blog posts detailing their efforts to curtail the problem, with Sourcehut complaining cryptocurrency miners are "deliberately circumventing our abuse detection," which "exhausts our resources and leads to long build queues for normal users."

    In the article an engineer at CodeShip acknowledges "Our team has been swamped with dealing with this kind of stuff."

    [May 28, 2021] Bash scripting- Moving from backtick operator to $ parentheses

    May 20, 2021 | www.redhat.com

    You can achieve the same result by replacing the backticks with the $ parens, like in the example below:

    $ echo "There are $(ls | wc -l) files in this directory"
    There are 3 files in this directory
    

    Here's another example, still very simple but a little more realistic. I need to troubleshoot something in my network connections, so I decide to show my total and waiting connections minute by minute.

    $ cat netinfo.sh
    #!/bin/bash
    while true
    do
      ss -an > netinfo.txt
      connections_total=$(wc -l < netinfo.txt)
      connections_waiting=$(grep -c WAIT netinfo.txt)
      printf "$(date +%R) - Total=%6d Waiting=%6d\n" $connections_total $connections_waiting
      sleep 60
    done
    
    $ ./netinfo.sh
    22:59 - Total=  2930 Waiting=   977
    23:00 - Total=  2923 Waiting=   963
    23:01 - Total=  2346 Waiting=   397
    23:02 - Total=  2497 Waiting=   541
    

    It doesn't seem like a huge difference, right? I just had to adjust the syntax. Well, there are some implications involving the two approaches. If you are like me, who automatically uses the backticks without even blinking, keep reading.

    Deprecation and recommendations

    Deprecation sounds like a bad word, and in many cases, it might really be bad.

    When I was researching the explanations for the backtick operator, I found some discussions about "are the backtick operators deprecated?"

    The short answer is: Not in the sense of "on the verge of becoming unsupported and stop working." However, backticks should be avoided and replaced by the $ parens syntax.

    The main reasons for that are (in no particular order):

    1. Backtick operators can become messy if the internal commands also use backticks.

    2. The $ parens operator is safer and more predictable.
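    On point 1, nested command substitution shows the difference quickly: with backticks the inner pair must be escaped, while the $ parens form nests naturally (date is just an example command):

```shell
# backticks: the inner pair must be escaped with backslashes
echo "year: `echo \`date +%Y\``"
# $ parens: nesting needs no escaping
echo "year: $(echo $(date +%Y))"
```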

    Here are some examples of the behavioral differences between backticks and $ parens:

    $ echo '\$x'
    \$x
    
    $ echo `echo '\$x'`
    $x
    
    $ echo $(echo '\$x')
    \$x
    

    You can find additional examples of the differences between backticks and $ parens behavior here .

    [ Free cheat sheet: Get a list of Linux utilities and commands for managing servers and networks . ]

    Wrapping up

    If you compare the two approaches, it seems logical to think that you should always/only use the $ parens approach. And you might think that the backtick operators are only used by sysadmins from an older era .

    Well, that might be true, as sometimes I use things that I learned long ago, and in simple situations, my "muscle memory" just codes it for me. For those ad-hoc commands that you know that do not contain any nasty characters, you might be OK using backticks. But for anything that is more perennial or more complex/sophisticated, please go with the $ parens approach.

    [May 23, 2021] 14 Useful Examples of Linux 'sort' Command - Part 1

    Images removed. See the original for the full text
    May 23, 2021 | www.tecmint.com

    7. Sort the contents of file ' lsl.txt ' on the basis of 2nd column (which represents number of symbolic links).

    $ sort -nk2 lsl.txt
    

    Note: The '-n' option in the above example sorts the contents numerically. The '-n' option must be used when we want to sort a file on the basis of a column containing numerical values.

    8. Sort the contents of file ' lsl.txt ' on the basis of 9th column (which is the name of the files and folders and is non-numeric).

    $ sort -k9 lsl.txt
    

    9. It is not always essential to run the sort command on a file. We can pipe the output of another command into it directly on the terminal.

    $ ls -l /home/$USER | sort -nk5
    

    10. Sort and remove duplicates from the text file tecmint.txt . Check if the duplicate has been removed or not.

    $ cat tecmint.txt
    $ sort -u tecmint.txt
    

    Rules so far (what we have observed):

    1. Lines starting with numbers are preferred in the list and lie at the top until otherwise specified ( -r ).
    2. Lines starting with lowercase letters are preferred in the list and lie at the top until otherwise specified ( -r ).
    3. Contents are listed on the basis of the occurrence of letters in the dictionary until otherwise specified ( -r ).
    4. By default, the sort command treats each line as a string and sorts it according to the dictionary order of the letters (numbers preferred; see rule 1) until otherwise specified.

    11. Create a third file ' lsla.txt ' at the current location and populate it with the output of ' ls -lA ' command.

    $ ls -lA /home/$USER > /home/$USER/Desktop/tecmint/lsla.txt
    $ cat lsla.txt
    

    Those familiar with the 'ls' command know that 'ls -lA' = 'ls -l' + hidden files, so most of the contents of these two files will be the same.

    12. Sort the contents of two files on standard output in one go.

    $ sort lsl.txt lsla.txt
    

    Notice the repetition of files and folders.

    13. Now we can see how to sort, merge and remove duplicates from these two files.

    $ sort -u lsl.txt lsla.txt
    

    Notice that duplicates have been omitted from the output. Also, you can write the output to a new file by redirecting it there.

    14. We may also sort the contents of a file or some output based upon more than one column. Sort the output of the 'ls -l' command on the basis of fields 2 and 5 (numeric) and 9 (non-numeric).

    $ ls -l /home/$USER | sort -k2,2n -k5,5n -k9
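    The -k2,2n form restricts the sort key to field 2 alone; a bare -k2 would make the key run from field 2 to the end of the line. A small sketch on made-up data (the file name and contents are hypothetical):

```shell
# made-up data: name and link count
printf 'docs 12\nbin 3\nsrc 7\n' > demo.txt
sort -k2,2n demo.txt
# bin 3
# src 7
# docs 12
```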
    

    That's all for now. In the next article we will cover a few more examples of ' sort ' command in detail for you. Till then stay tuned and connected to Tecmint. Keep sharing. Keep commenting. Like and share us and help us get spread.

    [May 23, 2021] Adding arguments and options to your Bash scripts

    May 23, 2021 | www.redhat.com

    Handling options

    The ability for a Bash script to handle command line options such as -h to display help gives you some powerful capabilities to direct the program and modify what it does. In the case of your -h option, you want the program to print the help text to the terminal session and then quit without running the rest of the program. The ability to process options entered at the command line can be added to the Bash script using the while command in conjunction with the getopts and case commands.

    The getopts command reads any and all options specified at the command line and creates a list of those options. The while command loops through the list of options by setting the variable $option for each in the code below. The case statement is used to evaluate each option in turn and execute the statements in the corresponding stanza. The while statement will continue to assess the list of options until they have all been processed or an exit statement is encountered, which terminates the program.

    Be sure to delete the help function call just before the echo "Hello world!" statement so that the main body of the program now looks like this.

    ############################################################
    ############################################################
    # Main program                                             #
    ############################################################
    ############################################################
    ############################################################
    # Process the input options. Add options as needed.        #
    ############################################################
    # Get the options
    while getopts ":h" option; do
       case $option in
          h) # display Help
             Help
             exit;;
       esac
    done
    
    echo "Hello world!"
    

    Notice the double semicolon at the end of the exit statement in the case stanza for -h. It is required at the end of each stanza to delineate where that option's statements end.

    Testing is now a little more complex. You need to test your program with several different options -- and no options -- to see how it responds. First, check that with no options it prints "Hello world!" as it should.

    [student@testvm1 ~]$ hello.sh
    Hello world!
    

    That works, so now test the logic that displays the help text.

    [student@testvm1 ~]$ hello.sh -h
    Add a description of the script functions here.
    
    Syntax: scriptTemplate [-g|h|t|v|V]
    options:
    g     Print the GPL license notification.
    h     Print this Help.
    v     Verbose mode.
    V     Print software version and exit.
    

    That works as expected, so now try some testing to see what happens when you enter some unexpected options.

    [student@testvm1 ~]$ hello.sh -x
    Hello world!
    
    [student@testvm1 ~]$ hello.sh -q
    Hello world!
    
    [student@testvm1 ~]$ hello.sh -lkjsahdf
    Add a description of the script functions here.
    
    Syntax: scriptTemplate [-g|h|t|v|V]
    options:
    g     Print the GPL license notification.
    h     Print this Help.
    v     Verbose mode.
    V     Print software version and exit.
    
    [student@testvm1 ~]$
    
    Handling invalid options

    The program just ignores the options for which you haven't created specific responses, without generating any errors. Although in the last entry with the -lkjsahdf options, because there is an "h" in the list, the program did recognize it and printed the help text. Testing has shown that one thing that is missing is the ability to handle incorrect input and terminate the program if any is detected.

    You can add another case stanza to the case statement that will match any option for which there is no explicit match. This general case will match anything you haven't provided a specific match for. The case statement now looks like this.

    while getopts ":h" option; do
       case $option in
          h) # display Help
             Help
             exit;;
         \?) # Invalid option
             echo "Error: Invalid option"
             exit;;
       esac
    done
    

    This bit of code deserves an explanation about how it works. It seems complex but is fairly easy to understand. The while – done structure defines a loop that executes once for each option in the getopts – option structure. The ":h" string -- which requires the quotes -- lists the possible input options that will be evaluated by the case – esac structure. Each option listed must have a corresponding stanza in the case statement. In this case, there are two. One is the h) stanza which calls the Help procedure. After the Help procedure completes, execution returns to the next program statement, exit;; which exits from the program without executing any more code even if some exists. The option processing loop is also terminated, so no additional options would be checked.

    Notice the catch-all match of \? as the last stanza in the case statement. If any options are entered that are not recognized, this stanza prints a short error message and exits from the program.

    Any additional specific cases must precede the final catch-all. I like to place the case stanzas in alphabetical order, but there will be circumstances where you want to ensure that a particular case is processed before certain other ones. The case statement is sequence sensitive, so be aware of that when you construct yours.

    The last statement of each stanza in the case construct must end with the double semicolon ( ;; ), which is used to mark the end of each stanza explicitly. This allows those programmers who like to use explicit semicolons for the end of each statement instead of implicit ones to continue to do so for each statement within each case stanza.

    Test the program again using the same options as before and see how this works now.

    The Bash script now looks like this.

    #!/bin/bash
    ############################################################
    # Help                                                     #
    ############################################################
    Help()
    {
       # Display Help
       echo "Add description of the script functions here."
       echo
       echo "Syntax: scriptTemplate [-g|h|v|V]"
       echo "options:"
       echo "g     Print the GPL license notification."
       echo "h     Print this Help."
       echo "v     Verbose mode."
       echo "V     Print software version and exit."
       echo
    }
    
    ############################################################
    ############################################################
    # Main program                                             #
    ############################################################
    ############################################################
    ############################################################
    # Process the input options. Add options as needed.        #
    ############################################################
    # Get the options
    while getopts ":h" option; do
       case $option in
          h) # display Help
             Help
             exit;;
         \?) # Invalid option
             echo "Error: Invalid option"
             exit;;
       esac
    done
    
    
    echo "hello world!"
    

    Be sure to test this version of your program very thoroughly. Use random input and see what happens. You should also try testing valid and invalid options without using the dash ( - ) in front.

    Using options to enter data

    First, add a variable and initialize it. Add the two lines shown in bold in the segment of the program shown below. This initializes the $Name variable to "world" as the default.

    <snip>
    ############################################################
    ############################################################
    # Main program                                             #
    ############################################################
    ############################################################
    
    # Set variables
    Name="world"
    
    ############################################################
    # Process the input options. Add options as needed.        #
    <snip>
    

    Change the last line of the program, the echo command, to this.

    echo "hello $Name!"
    

    You will add the logic to input a name in a moment, but first test the program again. The result should be exactly the same as before.

    [dboth@david ~]$ hello.sh
    hello world!
    [dboth@david ~]$
    
    Now add an -n option, which takes a name as its required argument, to the getopts statement. The trailing colon after the n in ":hn:" tells getopts that -n requires an argument.
    
    # Get the options
    while getopts ":hn:" option; do
       case $option in
          h) # display Help
             Help
             exit;;
          n) # Enter a name
             Name=$OPTARG;;
         \?) # Invalid option
             echo "Error: Invalid option"
             exit;;
       esac
    done
    

    $OPTARG is always the variable name used for each new option argument, no matter how many there are. You must assign the value in $OPTARG to a variable name that will be used in the rest of the program. This new stanza does not have an exit statement. This changes the program flow so that after processing all valid options in the case statement, execution moves on to the next statement after the case construct.
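    Because the optstring begins with a colon, getopts also reports a missing option argument silently: it sets the option variable to ':' and puts the option letter in $OPTARG, which lets you handle that case separately from an invalid option. A minimal sketch, using a hypothetical parse_args function:

```shell
# leading ':' in the optstring enables silent error reporting,
# so missing arguments (:) and invalid options (\?) get their own stanzas
parse_args() {
   local OPTIND option Name="world"
   while getopts ":n:" option; do
      case $option in
         n) Name=$OPTARG;;
         :) echo "Error: -$OPTARG requires an argument"; return 1;;
         \?) echo "Error: Invalid option"; return 1;;
      esac
   done
   echo "hello $Name!"
}

parse_args -n Alice     # hello Alice!
parse_args -n           # Error: -n requires an argument
parse_args -z           # Error: Invalid option
```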

    Test the revised program.

    [dboth@david ~]$ hello.sh
    hello world!
    
    [dboth@david ~]$ hello.sh -n LinuxGeek46
    hello LinuxGeek46!
    
    [dboth@david ~]$ hello.sh -n "David Both"
    hello David Both!
    [dboth@david ~]$
    

    The completed program looks like this.

    #!/bin/bash
    ############################################################
    # Help                                                     #
    ############################################################
    Help()
    {
       # Display Help
       echo "Add description of the script functions here."
       echo
       echo "Syntax: scriptTemplate [-g|h|v|V]"
       echo "options:"
       echo "g     Print the GPL license notification."
       echo "h     Print this Help."
       echo "v     Verbose mode."
       echo "V     Print software version and exit."
       echo
    }
    
    ############################################################
    ############################################################
    # Main program                                             #
    ############################################################
    ############################################################
    
    # Set variables
    Name="world"
    
    ############################################################
    # Process the input options. Add options as needed.        #
    ############################################################
    # Get the options
    while getopts ":hn:" option; do
       case $option in
          h) # display Help
             Help
             exit;;
          n) # Enter a name
             Name=$OPTARG;;
         \?) # Invalid option
             echo "Error: Invalid option"
             exit;;
       esac
    done
    
    
    echo "hello $Name!"

    Be sure to test the help facility and how the program reacts to invalid input to verify that its ability to process those has not been compromised. If that all works as it should, then you have successfully learned how to use options and option arguments.

    [May 23, 2021] The Bash String Operators, by Kevin Sookocheff

    May 23, 2021 | sookocheff.com

    Posted on December 11, 2014 | 3 minutes | Kevin Sookocheff

    A common task in bash programming is to manipulate portions of a string and return the result. bash provides rich support for these manipulations via string operators. The syntax is not always intuitive so I wanted to use this blog post to serve as a permanent reminder of the operators.

    The string operators are signified with the ${} notation. The operations can be grouped into a few classes. Each heading in this article describes a class of operation.

    Substring Extraction
    Extract from a position
    
    ${string:position}
    

    Extraction returns a substring of string starting at position and ending at the end of string . string is treated as an array of characters starting at 0.

    
    > string="hello world"
    > echo ${string:1}
    ello world
    > echo ${string:6}
    world
    

    Extract from a position with a length
    ${string:position:length}
    

    Adding a length returns a substring only as long as the length parameter.

    > string="hello world"
    > echo ${string:1:2}
    el
    > echo ${string:6:3}
    wor
    
    Substring Removal
    Remove shortest starting match
    ${variable#pattern}
    

    If variable starts with pattern , delete the shortest part that matches the pattern.

    > string="hello world, hello jim"
    > echo ${string#*hello}
    world, hello jim
    

    Remove longest starting match
    ${variable##pattern}

    If variable starts with pattern , delete the longest match from variable and return the rest.

    > string="hello world, hello jim"
    > echo ${string##*hello}
    jim
    

    Remove shortest ending match
    ${variable%pattern}
    

    If variable ends with pattern , delete the shortest match from the end of variable and return the rest.

    > string="hello world, hello jim"
    > echo ${string%hello*}
    hello world,
    

    Remove longest ending match
    ${variable%%pattern}
    

    If variable ends with pattern , delete the longest match from the end of variable and return the rest.

    > string="hello world, hello jim"
    > echo ${string%%hello*}
    
    
    Substring Replacement
    Replace first occurrence of word
    ${variable/pattern/string}
    

    Find the first occurrence of pattern in variable and replace it with string . If string is null, pattern is deleted from variable . If pattern starts with # , the match must occur at the beginning of variable . If pattern starts with % , the match must occur at the end of the variable .

    > string="hello world, hello jim"
    > echo ${string/hello/goodbye}
    goodbye world, hello jim
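
    The # and % anchors mentioned above behave like this (continuing with the same example string):

```shell
string="hello world, hello jim"
echo "${string/#hello/goodbye}"   # pattern must match at the start
echo "${string/%jim/bob}"         # pattern must match at the end
```

    This prints "goodbye world, hello jim" followed by "hello world, hello bob".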
    

    Replace all occurrences of word
    ${variable//pattern/string}

    Same as above, but finds all occurrences of pattern in variable and replaces them with string . If string is null, pattern is deleted from variable .

    > string="hello world, hello jim"
    > echo ${string//hello/goodbye}
    goodbye world, goodbye jim
    

    See also bash

    [May 10, 2021] The Tilde Text Editor

    Highly recommended!
    This is an editor similar to FDE and can be used as external editor for MC
    May 10, 2021 | os.ghalkes.nl

    Tilde is a text editor for the console/terminal, which provides an intuitive interface for people accustomed to GUI environments such as Gnome, KDE and Windows. For example, the short-cut to copy the current selection is Control-C, and to paste the previously copied text the short-cut Control-V can be used. As another example, the File menu can be accessed by pressing Meta-F.

    However, being a terminal-based program there are limitations. Not all terminals provide sufficient information to the client programs to make Tilde behave in the most intuitive way. When this is the case, Tilde provides work-arounds which should be easy to work with.

    The main audience for Tilde is users who normally work in GUI environments, but sometimes require an editor for a console/terminal environment. This may be because the computer in question is a server which does not provide a GUI, or is accessed remotely over SSH. Tilde allows these users to edit files without having to learn a completely new interface, as vi or Emacs require. A result of this choice is that Tilde will not provide all the fancy features that Vim or Emacs provide, but only the most used features.

    News Tilde version 1.1.2 released

    This release fixes a bug where Tilde would discard lines read before an invalid character when requested to continue reading.

    23-May-2020

    Tilde version 1.1.1 released

    This release fixes a build failure with C++14 and later compilers.

    12-Dec-2019

    [May 10, 2021] Split a String in Bash

    May 10, 2021 | www.xmodulo.com

    When you need to split a string in bash, you can use bash's built-in read command. This command reads a single line of string from stdin, and splits the string on a delimiter. The split elements are then stored in either an array or separate variables supplied with the read command. The default delimiter is whitespace characters (' ', '\t', '\r', '\n'). If you want to split a string on a custom delimiter, you can specify the delimiter in the IFS variable before calling read.

    # strings to split
    var1="Harry Samantha Bart   Amy"
    var2="green:orange:black:purple"
    
    # split a string by one or more whitespaces, and store the result in an array
    read -a my_array <<< $var1
    
    # iterate the array to access individual split words
    for elem in "${my_array[@]}"; do
        echo $elem
    done
    
    echo "----------"
    # split a string by a custom delimiter
    IFS=':' read -a my_array2 <<< $var2
    for elem in "${my_array2[@]}"; do
        echo $elem
    done
    
    Harry
    Samantha
    Bart
    Amy
    ----------
    green
    orange
    black
    purple
    

    [May 10, 2021] How to manipulate strings in bash

    May 10, 2021 | www.xmodulo.com

    Remove a Trailing Newline Character from a String in Bash

    If you want to remove a trailing newline or carriage return character from a string, you can use the bash's parameter expansion in the following form.

    ${string%$var}
    

    This expression implies that if the "string" contains a trailing character stored in "var", the result of the expression will become the "string" without the character. For example:

    # input string with a trailing newline character
    input_line=$'This is my example line\n'
    # define a trailing character.  For carriage return, replace it with $'\r' 
    character=$'\n'
    
    echo -e "($input_line)"
    # remove a trailing newline character
    input_line=${input_line%$character}
    echo -e "($input_line)"
    
    (This is my example line
    )
    (This is my example line)
    
    Trim Leading/Trailing Whitespaces from a String in Bash

    If you want to remove whitespace at the beginning or at the end of a string (known as leading/trailing whitespace), you can use the sed command.

    my_str="   This is my example string    "
    
    # original string with leading/trailing whitespaces
    echo -e "($my_str)"
    
    # trim leading whitespaces in a string
    my_str=$(echo "$my_str" | sed -e "s/^[[:space:]]*//")
    echo -e "($my_str)"
    
    # trim trailing whitespaces in a string
    my_str=$(echo "$my_str" | sed -e "s/[[:space:]]*$//")
    echo -e "($my_str)"
    
    (   This is my example string    )
    (This is my example string    )      ← leading whitespaces removed
    (This is my example string)          ← trailing whitespaces removed
    

    If you want to stick with bash's built-in mechanisms, the following bash function can get the job done.

    trim() {
        local var="$*"
        # remove leading whitespace characters
        var="${var#"${var%%[![:space:]]*}"}"
        # remove trailing whitespace characters
        var="${var%"${var##*[![:space:]]}"}"   
        echo "$var"
    }
    
    my_str="   This is my example string    "
    echo "($my_str)"
    
    my_str=$(trim "$my_str")   # quote the argument to preserve internal spacing
    echo "($my_str)"
    

    [May 10, 2021] String Operators - Learning the bash Shell, Second Edition

    May 10, 2021 | www.oreilly.com

    Table 4-1. Substitution Operators

    Operator Substitution
    ${varname:-word}

    If varname exists and isn't null, return its value; otherwise return word.

    Purpose :

    Returning a default value if the variable is undefined.

    Example :

    ${count:-0} evaluates to 0 if count is undefined.

    ${varname:=word}

    If varname exists and isn't null, return its value; otherwise set it to word and then return its value. Positional and special parameters cannot be assigned this way.

    Purpose :

    Setting a variable to a default value if it is undefined.

    Example :

    ${count:=0} sets count to 0 if it is undefined.

    ${varname:?message}

    If varname exists and isn't null, return its value; otherwise print varname: followed by message, and abort the current command or script (non-interactive shells only). Omitting message produces the default message parameter null or not set.

    Purpose :

    Catching errors that result from variables being undefined.

    Example :

    ${count:?"undefined!"} prints "count: undefined!" and exits if count is undefined.

    ${varname:+word}

    If varname exists and isn't null, return word ; otherwise return null.

    Purpose :

    Testing for the existence of a variable.

    Example :

    ${count:+1} returns 1 (which could mean "true") if count is defined.

    ${varname:offset}
    ${varname:offset:length}

    Performs substring expansion. It returns the substring of $varname starting at offset and up to length characters. The first character in $varname is position 0. If length is omitted, the substring starts at offset and continues to the end of $varname. If offset is less than 0 then the position is taken from the end of $varname. If varname is @, the length is the number of positional parameters starting at parameter offset.

    Purpose :

    Returning parts of a string (substrings or slices).

    Example :

    If count is set to frogfootman, ${count:4} returns footman. ${count:4:4} returns foot.
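
    The description above also allows negative offsets; in bash a space or parentheses is needed before the minus sign so the expansion is not mistaken for the :- operator. A small sketch:

```shell
# Substring expansion with negative offsets (bash syntax; note the
# space or the parentheses before the minus sign)
count=frogfootman
echo "${count:4}"        # footman
echo "${count:4:4}"      # foot
echo "${count: -3}"      # man  (last three characters)
echo "${count:(-7):4}"   # foot (offset counted from the end)
```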


    Table 4-2. Pattern-Matching Operators

    Operator Meaning
    ${variable#pattern}

    If the pattern matches the beginning of the variable's value, delete the shortest part that matches and return the rest.

    ${variable##pattern}

    If the pattern matches the beginning of the variable's value, delete the longest part that matches and return the rest.

    ${variable%pattern}

    If the pattern matches the end of the variable's value, delete the shortest part that matches and return the rest.

    ${variable%%pattern}

    If the pattern matches the end of the variable's value, delete the longest part that matches and return the rest.

    ${variable/pattern/string}
    ${variable//pattern/string}

    The longest match to pattern in variable is replaced by string. In the first form, only the first match is replaced. In the second form, all matches are replaced. If the pattern begins with a #, it must match at the start of the variable. If it begins with a %, it must match at the end of the variable. If string is null, the matches are deleted. If variable is @ or *, the operation is applied to each positional parameter in turn and the expansion is the resultant list.
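
    The anchored forms can be sketched as follows (a minimal bash example; the variable and values are made up for illustration):

```shell
# A leading # anchors the pattern to the start of the value,
# a leading % anchors it to the end
path="foo.bar.foo"
echo "${path/#foo/BEGIN}"   # BEGIN.bar.foo (only the leading foo)
echo "${path/%foo/END}"     # foo.bar.END   (only the trailing foo)
```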

    [May 10, 2021] Concatenating Strings with the += Operator

    May 10, 2021 | linuxize.com

    Another way of concatenating strings in bash is by appending variables or literal strings to a variable using the += operator:

    VAR1="Hello,"
    VAR1+=" World"
    echo "$VAR1"
    
    Hello, World
    

    The following example is using the += operator to concatenate strings in bash for loop :

    languages.sh
    VAR=""
    for ELEMENT in 'Hydrogen' 'Helium' 'Lithium' 'Beryllium'; do
      VAR+="${ELEMENT} "
    done
    
    echo "$VAR"
    
    Hydrogen Helium Lithium Beryllium
    

    [May 10, 2021] String Operators (Korn Shell) - Daniel Han's Technical Notes

    May 10, 2021 | sites.google.com

    4.3 String Operators

    The curly-bracket syntax allows for the shell's string operators . String operators allow you to manipulate values of variables in various useful ways without having to write full-blown programs or resort to external UNIX utilities. You can do a lot with string-handling operators even if you haven't yet mastered the programming features we'll see in later chapters.

    In particular, string operators let you do the following:

    4.3.1 Syntax of String Operators

    The basic idea behind the syntax of string operators is that special characters that denote operations are inserted between the variable's name and the right curly bracket. Any argument that the operator may need is inserted to the operator's right.

    The first group of string-handling operators tests for the existence of variables and allows substitutions of default values under certain conditions. These are listed in Table 4.1 . [6]

    [6] The colon ( : ) in each of these operators is actually optional. If the colon is omitted, then change "exists and isn't null" to "exists" in each definition, i.e., the operator tests for existence only.
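
    The set-but-null versus unset distinction the footnote describes can be seen directly (a small bash/ksh sketch; variable names are made up):

```shell
# Without the colon, only existence is tested; with the colon,
# a null value also triggers the default
empty=""        # set, but null
unset notset    # not set at all
echo "(${empty-default})"     # ()        -- empty exists, null value kept
echo "(${empty:-default})"    # (default) -- null triggers the default
echo "(${notset-default})"    # (default) -- unset triggers it either way
```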

    Table 4.1: Substitution Operators
    Operator Substitution
    ${varname:-word} If varname exists and isn't null, return its value; otherwise return word.
    Purpose: Returning a default value if the variable is undefined.
    Example: ${count:-0} evaluates to 0 if count is undefined.
    ${varname:=word} If varname exists and isn't null, return its value; otherwise set it to word and then return its value.[7]
    Purpose: Setting a variable to a default value if it is undefined.
    Example: ${count:=0} sets count to 0 if it is undefined.
    ${varname:?message} If varname exists and isn't null, return its value; otherwise print varname: followed by message, and abort the current command or script. Omitting message produces the default message parameter null or not set.
    Purpose: Catching errors that result from variables being undefined.
    Example: ${count:?"undefined!"} prints "count: undefined!" and exits if count is undefined.
    ${varname:+word} If varname exists and isn't null, return word; otherwise return null.
    Purpose: Testing for the existence of a variable.
    Example: ${count:+1} returns 1 (which could mean "true") if count is defined.

    [7] Pascal, Modula, and Ada programmers may find it helpful to recognize the similarity of this to the assignment operators in those languages.

    The first two of these operators are ideal for setting defaults for command-line arguments in case the user omits them. We'll use the first one in our first programming task.

    Task 4.1

    You have a large album collection, and you want to write some software to keep track of it. Assume that you have a file of data on how many albums you have by each artist. Lines in the file look like this:

    14 Bach, J.S.
    1       Balachander, S.
    21      Beatles
    6       Blakey, Art
    

    Write a program that prints the N highest lines, i.e., the N artists by whom you have the most albums. The default for N should be 10. The program should take one argument for the name of the input file and an optional second argument for how many lines to print.

    By far the best approach to this type of script is to use built-in UNIX utilities, combining them with I/O redirectors and pipes. This is the classic "building-block" philosophy of UNIX that is another reason for its great popularity with programmers. The building-block technique lets us write a first version of the script that is only one line long:

    sort -nr $1 | head -${2:-10}
    

    Here is how this works: the sort (1) program sorts the data in the file whose name is given as the first argument ( $1 ). The -n option tells sort to interpret the first word on each line as a number (instead of as a character string); the -r tells it to reverse the comparisons, so as to sort in descending order.

    The output of sort is piped into the head (1) utility, which, when given the argument -N, prints the first N lines of its input on the standard output. The expression -${2:-10} evaluates to a dash (-) followed by the second argument if it is given, or to -10 if it's not; notice that the variable in this expression is 2, which is the second positional parameter.
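
    As a side note, the head -N option form is obsolescent; POSIX spells it head -n N. A self-contained sketch of the same pipeline, with the album data inlined via printf instead of being read from $1:

```shell
# Same pipeline as in the script, but with inlined data and the
# POSIX-preferred head -n form
printf '%s\n' '14 Bach, J.S.' '1 Balachander, S.' '21 Beatles' '6 Blakey, Art' |
    sort -nr | head -n "${2:-10}"
```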

    Assume the script we want to write is called highest . Then if the user types highest myfile , the line that actually runs is:

    sort -nr myfile | head -10
    

    Or if the user types highest myfile 22 , the line that runs is:

    sort -nr myfile | head -22
    

    Make sure you understand how the :- string operator provides a default value.

    This is a perfectly good, runnable script-but it has a few problems. First, its one line is a bit cryptic. While this isn't much of a problem for such a tiny script, it's not wise to write long, elaborate scripts in this manner. A few minor changes will make the code more readable.

    First, we can add comments to the code; anything between # and the end of a line is a comment. At a minimum, the script should start with a few comment lines that indicate what the script does and what arguments it accepts. Second, we can improve the variable names by assigning the values of the positional parameters to regular variables with mnemonic names. Finally, we can add blank lines to space things out; blank lines, like comments, are ignored. Here is a more readable version:

    #
    #       highest filename [howmany]
    #
    #       Print howmany highest-numbered lines in file filename.
    #       The input file is assumed to have lines that start with
    #       numbers.  Default for howmany is 10.
    #
    
    filename=$1
    
    howmany=${2:-10}
    sort -nr $filename | head -$howmany
    

    The square brackets around howmany in the comments adhere to the convention in UNIX documentation that square brackets denote optional arguments.

    The changes we just made improve the code's readability but not how it runs. What if the user were to invoke the script without any arguments? Remember that positional parameters default to null if they aren't defined. If there are no arguments, then $1 and $2 are both null. The variable howmany ( $2 ) is set up to default to 10, but there is no default for filename ( $1 ). The result would be that this command runs:

    sort -nr | head -10
    

    As it happens, if sort is called without a filename argument, it expects input to come from standard input, e.g., a pipe (|) or a user's terminal. Since it doesn't have the pipe, it will expect the terminal. This means that the script will appear to hang! Although you could always type [CTRL-D] or [CTRL-C] to get out of the script, a naive user might not know this.

    Therefore we need to make sure that the user supplies at least one argument. There are a few ways of doing this; one of them involves another string operator. We'll replace the line:

    filename=$1
    

    with:

    filename=${1:?"filename missing."}
    

    This will cause two things to happen if a user invokes the script without any arguments: first the shell will print the somewhat unfortunate message:

    highest: 1: filename missing.
    

    to the standard error output. Second, the script will exit without running the remaining code.

    With a somewhat "kludgy" modification, we can get a slightly better error message. Consider this code:

    filename=$1
    filename=${filename:?"missing."}
    

    This results in the message:

    highest: filename: missing.
    

    (Make sure you understand why.) Of course, there are ways of printing whatever message is desired; we'll find out how in Chapter 5 .

    Before we move on, we'll look more closely at the two remaining operators in Table 4.1 and see how we can incorporate them into our task solution. The := operator does roughly the same thing as :- , except that it has the "side effect" of setting the value of the variable to the given word if the variable doesn't exist.

    Therefore we would like to use := in our script in place of :- , but we can't; we'd be trying to set the value of a positional parameter, which is not allowed. But if we replaced:

    howmany=${2:-10}
    

    with just:

    howmany=$2
    

    and moved the substitution down to the actual command line (as we did at the start), then we could use the := operator:

    sort -nr $filename | head -${howmany:=10}
    

    Using := has the added benefit of setting the value of howmany to 10 in case we need it afterwards in later versions of the script.

    The final substitution operator is :+ . Here is how we can use it in our example: Let's say we want to give the user the option of adding a header line to the script's output. If he or she types the option -h , then the output will be preceded by the line:

    ALBUMS  ARTIST
    

    Assume further that this option ends up in the variable header , i.e., $header is -h if the option is set or null if not. (Later we will see how to do this without disturbing the other positional parameters.)

    The expression:

    ${header:+"ALBUMS  ARTIST\n"}
    

    yields null if the variable header is null, or ALBUMS  ARTIST\n if it is non-null. This means that we can put the line:

    print -n ${header:+"ALBUMS  ARTIST\n"}
    

    right before the command line that does the actual work. The -n option to print causes it not to print a LINEFEED after printing its arguments. Therefore this print statement will print nothing-not even a blank line-if header is null; otherwise it will print the header line and a LINEFEED (\n).
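
    print is a ksh built-in; in bash the same effect can be approximated with printf, which likewise emits nothing when the expansion is null (header is the hypothetical option variable from the text):

```shell
# printf '%b' interprets the \n escape, and prints nothing at all
# when the :+ expansion yields the null string
header="-h"
printf '%b' "${header:+ALBUMS  ARTIST\n}"   # prints the header line
header=""
printf '%b' "${header:+ALBUMS  ARTIST\n}"   # prints nothing
```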

    4.3.2 Patterns and Regular Expressions

    We'll continue refining our solution to Task 4-1 later in this chapter. The next type of string operator is used to match portions of a variable's string value against patterns . Patterns, as we saw in Chapter 1 are strings that can contain wildcard characters ( * , ? , and [] for character sets and ranges).

    Wildcards have been standard features of all UNIX shells going back (at least) to the Version 6 Bourne shell. But the Korn shell is the first shell to add to their capabilities. It adds a set of operators, called regular expression (or regexp for short) operators, that give it much of the string-matching power of advanced UNIX utilities like awk (1), egrep (1) (extended grep (1)) and the emacs editor, albeit with a different syntax. These capabilities go beyond those that you may be used to in other UNIX utilities like grep , sed (1) and vi (1).

    Advanced UNIX users will find the Korn shell's regular expression capabilities occasionally useful for script writing, although they border on overkill. (Part of the problem is the inevitable syntactic clash with the shell's myriad other special characters.) Therefore we won't go into great detail about regular expressions here. For more comprehensive information, the "last word" on practical regular expressions in UNIX is sed & awk , an O'Reilly Nutshell Handbook by Dale Dougherty. If you are already comfortable with awk or egrep , you may want to skip the following introductory section and go to "Korn Shell Versus awk/egrep Regular Expressions" below, where we explain the shell's regular expression mechanism by comparing it with the syntax used in those two utilities. Otherwise, read on.

    4.3.2.1 Regular expression basics

    Think of regular expressions as strings that match patterns more powerfully than the standard shell wildcard schema. Regular expressions began as an idea in theoretical computer science, but they have found their way into many nooks and crannies of everyday, practical computing. The syntax used to represent them may vary, but the concepts are very much the same.

    A shell regular expression can contain regular characters, standard wildcard characters, and additional operators that are more powerful than wildcards. Each such operator has the form x ( exp ) , where x is the particular operator and exp is any regular expression (often simply a regular string). The operator determines how many occurrences of exp a string that matches the pattern can contain. See Table 4.2 and Table 4.3 .

    Table 4.2: Regular Expression Operators
    Operator Meaning
    *(exp) 0 or more occurrences of exp
    +(exp) 1 or more occurrences of exp
    ?(exp) 0 or 1 occurrences of exp
    @(exp1|exp2|...) exp1 or exp2 or...
    !(exp) Anything that doesn't match exp [8]

    [8] Actually, !(exp) is not a regular expression operator by the standard technical definition, though it is a handy extension.

    Table 4.3: Regular Expression Operator Examples
    Expression Matches
    x x
    *(x) Null string, x, xx, xxx, ...
    +(x) x, xx, xxx, ...
    ?(x) Null string, x
    !(x) Any string except x
    @(x) x (see below)

    Regular expressions are extremely useful when dealing with arbitrary text, as you already know if you have used grep or the regular-expression capabilities of any UNIX editor. They aren't nearly as useful for matching filenames and other simple types of information with which shell users typically work. Furthermore, most things you can do with the shell's regular expression operators can also be done (though possibly with more keystrokes and less efficiency) by piping the output of a shell command through grep or egrep .

    Nevertheless, here are a few examples of how shell regular expressions can solve filename-listing problems. Some of these will come in handy in later chapters as pieces of solutions to larger tasks.

    1. The emacs editor supports customization files whose names end in .el (for Emacs LISP) or .elc (for Emacs LISP Compiled). List all emacs customization files in the current directory.
    2. In a directory of C source code, list all files that are not necessary. Assume that "necessary" files end in .c or .h , or are named Makefile or README .
    3. Filenames in the VAX/VMS operating system end in a semicolon followed by a version number, e.g., fred.bob;23 . List all VAX/VMS-style filenames in the current directory.

    Here are the solutions:

    1. In the first of these, we are looking for files that end in .el with an optional c. The expression that matches this is *.el?(c).
    2. The second example depends on the four standard subexpressions *.c, *.h, Makefile, and README. The entire expression is !(*.c|*.h|Makefile|README), which matches anything that does not match any of the four possibilities.
    3. The solution to the third example starts with *\;: the shell wildcard * followed by a backslash-escaped semicolon. Then, we could use the regular expression +([0-9]), which matches one or more characters in the range [0-9], i.e., one or more digits. This is almost correct (and probably close enough), but it doesn't take into account that the first digit cannot be 0. Therefore the correct expression is *\;[1-9]*([0-9]), which matches anything that ends with a semicolon, a digit from 1 to 9, and zero or more digits from 0 to 9.
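
    In bash the same operators are available as "extended globs" once shopt -s extglob is enabled. A sketch that exercises the first two solutions against strings in a case statement (the filenames are made up):

```shell
# bash needs shopt -s extglob before these patterns are parsed
shopt -s extglob
for f in main.c util.h Makefile README notes.txt site.el site.elc; do
    case $f in
        *.el?(c))                   echo "$f: emacs customization file" ;;
        !(*.c|*.h|Makefile|README)) echo "$f: not necessary" ;;
        *)                          echo "$f: necessary" ;;
    esac
done
```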

    Regular expression operators are an interesting addition to the Korn shell's features, but you can get along well without them-even if you intend to do a substantial amount of shell programming.

    In our opinion, the shell's authors missed an opportunity to build into the wildcard mechanism the ability to match files by type (regular, directory, executable, etc., as in some of the conditional tests we will see in Chapter 5 ) as well as by name component. We feel that shell programmers would have found this more useful than arcane regular expression operators.

    The following section compares Korn shell regular expressions to analogous features in awk and egrep . If you aren't familiar with these, skip to the section entitled "Pattern-matching Operators."

    4.3.2.2 Korn shell versus awk/egrep regular expressions

    Table 4.4 is an expansion of Table 4.2 : the middle column shows the equivalents in awk / egrep of the shell's regular expression operators.

    Table 4.4: Shell Versus egrep/awk Regular Expression Operators
    Korn Shell egrep/awk Meaning
    *(exp) exp* 0 or more occurrences of exp
    +(exp) exp+ 1 or more occurrences of exp
    ?(exp) exp? 0 or 1 occurrences of exp
    @(exp1|exp2|...) exp1|exp2|... exp1 or exp2 or...
    !(exp) (none) Anything that doesn't match exp

    These equivalents are close but not quite exact. Actually, an exp within any of the Korn shell operators can be a series of exp1 | exp2 |... alternates. But because the shell would interpret an expression like dave|fred|bob as a pipeline of commands, you must use @(dave|fred|bob) for alternates by themselves.

    For example:

    It is worth re-emphasizing that shell regular expressions can still contain standard shell wildcards. Thus, the shell wildcard ? (match any single character) is the equivalent to . in egrep or awk, and the shell's character set operator [...] is the same as in those utilities. [9] For example, the expression +([0-9]) matches a number, i.e., one or more digits. The shell wildcard character * is equivalent to the shell regular expression *(?).

    [9] And, for that matter, the same as in grep , sed , ed , vi , etc.

    A few egrep and awk regexp operators do not have equivalents in the Korn shell. These include:

    The first two pairs are hardly necessary, since the Korn shell doesn't normally operate on text files and does parse strings into words itself.

    4.3.3 Pattern-matching Operators

    Table 4.5 lists the Korn shell's pattern-matching operators.

    Table 4.5: Pattern-matching Operators
    Operator Meaning
    ${variable#pattern} If the pattern matches the beginning of the variable's value, delete the shortest part that matches and return the rest.
    ${variable##pattern} If the pattern matches the beginning of the variable's value, delete the longest part that matches and return the rest.
    ${variable%pattern} If the pattern matches the end of the variable's value, delete the shortest part that matches and return the rest.
    ${variable%%pattern} If the pattern matches the end of the variable's value, delete the longest part that matches and return the rest.

    These can be hard to remember, so here's a handy mnemonic device: # matches the front because number signs precede numbers; % matches the rear because percent signs follow numbers.

    The classic use for pattern-matching operators is in stripping off components of pathnames, such as directory prefixes and filename suffixes. With that in mind, here is an example that shows how all of the operators work. Assume that the variable path has the value /home/billr/mem/long.file.name; then:

    Expression                   Result
    ${path##/*/}                       long.file.name
    ${path#/*/}              billr/mem/long.file.name
    $path              /home/billr/mem/long.file.name
    ${path%.*}         /home/billr/mem/long.file
    ${path%%.*}        /home/billr/mem/long
    

    The two patterns used here are /*/, which matches anything between two slashes, and .*, which matches a dot followed by anything.

    We will incorporate one of these operators into our next programming task.

    Task 4.2

    You are writing a C compiler, and you want to use the Korn shell for your front-end.[10]

    [10] Don't laugh-many UNIX compilers have shell scripts as front-ends.

    Think of a C compiler as a pipeline of data processing components. C source code is input to the beginning of the pipeline, and object code comes out of the end; there are several steps in between. The shell script's task, among many other things, is to control the flow of data through the components and to designate output files.

    You need to write the part of the script that takes the name of the input C source file and creates from it the name of the output object code file. That is, you must take a filename ending in .c and create a filename that is similar except that it ends in .o .

    The task at hand is to strip the .c off the filename and append .o . A single shell statement will do it:

    objname=${filename%.c}.o
    

    This tells the shell to look at the end of filename for .c . If there is a match, return $filename with the match deleted. So if filename had the value fred.c , the expression ${filename%.c} would return fred . The .o is appended to make the desired fred.o , which is stored in the variable objname .

    If filename had an inappropriate value (without .c ) such as fred.a , the above expression would evaluate to fred.a.o : since there was no match, nothing is deleted from the value of filename , and .o is appended anyway. And, if filename contained more than one dot-e.g., if it were the y.tab.c that is so infamous among compiler writers-the expression would still produce the desired y.tab.o . Notice that this would not be true if we used %% in the expression instead of % . The former operator uses the longest match instead of the shortest, so it would match .tab.o and evaluate to y.o rather than y.tab.o . So the single % is correct in this case.

    A longest-match deletion would be preferable, however, in the following task.

    Task 4.3

    You are implementing a filter that prepares a text file for printer output. You want to put the file's name-without any directory prefix-on the "banner" page. Assume that, in your script, you have the pathname of the file to be printed stored in the variable pathname .

    Clearly the objective is to remove the directory prefix from the pathname. The following line will do it:

    bannername=${pathname##*/}
    

    This solution is similar to the first line in the examples shown before. If pathname were just a filename, the pattern * / (anything followed by a slash) would not match and the value of the expression would be pathname untouched. If pathname were something like fred/bob , the prefix fred/ would match the pattern and be deleted, leaving just bob as the expression's value. The same thing would happen if pathname were something like /dave/pete/fred/bob : since the ## deletes the longest match, it deletes the entire /dave/pete/fred/ .

    If we used #*/ instead of ##*/, the expression would have the incorrect value dave/pete/fred/bob, because the shortest instance of "anything followed by a slash" at the beginning of the string is just a slash (/).

    The construct ${variable##*/} is actually equivalent to the UNIX utility basename (1). basename takes a pathname as argument and returns the filename only; it is meant to be used with the shell's command substitution mechanism (see below). basename is less efficient than ${variable##*/} because it runs in its own separate process rather than within the shell. Another utility, dirname (1), does essentially the opposite of basename: it returns the directory prefix only. It is equivalent to the Korn shell expression ${variable%/*} and is less efficient for the same reason.
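
    The equivalence is easy to check in either shell (bash shown here); the expansions fork no extra process, while basename and dirname each do:

```shell
# Pure-shell equivalents of basename(1) and dirname(1)
path=/home/billr/mem/long.file.name
echo "${path##*/}"    # long.file.name  (like basename "$path")
echo "${path%/*}"     # /home/billr/mem (like dirname "$path")
```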

    4.3.4 Length Operator

    There are two remaining operators on variables. One is ${#varname}, which returns the length of the value of the variable as a character string. (In Chapter 6 we will see how to treat this and similar values as actual numbers so they can be used in arithmetic expressions.) For example, if filename has the value fred.c, then ${#filename} would have the value 6. The other operator (${#array[*]}) has to do with array variables, which are also discussed in Chapter 6.
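
    Both length operators can be sketched in a few lines (bash syntax; the array contents are made up for illustration):

```shell
# ${#varname} counts characters; ${#array[*]} counts elements
filename=fred.c
echo "${#filename}"   # 6
arr=(alpha beta gamma)
echo "${#arr[*]}"     # 3 (number of elements)
echo "${#arr[0]}"     # 5 (length of the first element, "alpha")
```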

    http://docstore.mik.ua/orelly/unix2.1/ksh/ch04_03.htm

    [May 10, 2021] Lazy Linux: 10 essential tricks for admins by Vallard Benincosa

    IBM is notorious for destroying useful information. This article is no longer available from IBM.
    Jul 20, 2008

    Originally from: IBM DeveloperWorks

    How to be a more productive Linux systems administrator

    Learn these 10 tricks and you'll be the most powerful Linux systems administrator in the universe...well, maybe not the universe, but you will need these tips to play in the big leagues. Learn about SSH tunnels, VNC, password recovery, console spying, and more. Examples accompany each trick, so you can duplicate them on your own systems.

    The best systems administrators are set apart by their efficiency. And if an efficient systems administrator can do a task in 10 minutes that would take another mortal two hours to complete, then the efficient systems administrator should be rewarded (paid more) because the company is saving time, and time is money, right?

    The trick is to prove your efficiency to management. While I won't attempt to cover that trick in this article, I will give you 10 essential gems from the lazy admin's bag of tricks. These tips will save you time, and even if you don't get paid more money to be more efficient, you'll at least have more time to play Halo.

    Trick 1: Unmounting the unresponsive DVD drive

    The newbie states that when he pushes the Eject button on the DVD drive of a server running a certain Redmond-based operating system, it will eject immediately. He then complains that, in most enterprise Linux servers, if a process is running in that directory, then the ejection won't happen. For too long as a Linux administrator, I would reboot the machine and get my disk on the bounce if I couldn't figure out what was running and why it wouldn't release the DVD drive. But this is ineffective.

    Here's how you find the process that holds your DVD drive and eject it to your heart's content: First, simulate it. Stick a disk in your DVD drive, open up a terminal, and mount the DVD drive:

    # mount /media/cdrom
    # cd /media/cdrom
    # while [ 1 ]; do echo "All your drives are belong to us!"; sleep 30; done

    Now open up a second terminal and try to eject the DVD drive:

    # eject

    You'll get a message like:

    umount: /media/cdrom: device is busy

    Before you free it, let's find out who is using it.

    # fuser /media/cdrom

    You see the process was running and, indeed, it is our fault we cannot eject the disk.

    Now, if you are root, you can exercise your godlike powers and kill processes:

    # fuser -k /media/cdrom

    Boom! Just like that, freedom. Now solemnly unmount the drive:

    # eject

    fuser is good.

    Trick 2: Getting your screen back when it's hosed

    Try this:

    # cat /bin/cat

    Behold! Your terminal looks like garbage. Everything you type looks like you're looking into the Matrix. What do you do?

    You type reset. But wait, you say, typing reset is too close to typing reboot or shutdown. Your palms start to sweat, especially if you are doing this on a production machine.

    Rest assured: You can do it with the confidence that no machine will be rebooted. Go ahead, do it:

    # reset

    Now your screen is back to normal. This is much better than closing the window and then logging in again, especially if you just went through five machines to SSH to this machine.

    Trick 3: Collaboration with screen

    David, the high-maintenance user from product engineering, calls: "I need you to help me understand why I can't compile supercode.c on these new machines you deployed."

    "Fine," you say. "What machine are you on?"

    David responds: "Posh." (Yes, this fictional company has named its five production servers in honor of the Spice Girls.) OK, you say. You exercise your godlike root powers and on another machine become David:

    # su - david

    Then you go over to posh:

    # ssh posh

    Once you are there, you run:

    # screen -S foo

    Then you holler at David:

    "Hey David, run the following command on your terminal: # screen -x foo."

    This will cause your and David's sessions to be joined together in the holy Linux shell. You can type or he can type, but you'll both see what the other is doing. This saves you from walking to the other floor and lets you both have equal control. The benefit is that David can watch your troubleshooting skills and see exactly how you solve problems.

    At last you both see what the problem is: David's compile script hard-coded an old directory that does not exist on this new server. You mount it, recompile, solve the problem, and David goes back to work. You then go back to whatever lazy activity you were doing before.

    The one caveat to this trick is that you both need to be logged in as the same user. Other cool things you can do with the screen command include having multiple windows and split screens. Read the man pages for more on that.

    But I'll give you one last tip while you're in your screen session. To detach from it and leave it open, type: Ctrl-A D . (I mean, hold down the Ctrl key and strike the A key. Then push the D key.)

    You can then reattach by running the screen -x foo command again.

    Trick 4: Getting back the root password

    You forgot your root password. Nice work. Now you'll just have to reinstall the entire machine. Sadly enough, I've seen more than a few people do this. But it's surprisingly easy to get on the machine and change the password. This doesn't work in all cases (like if you made a GRUB password and forgot that too), but here's how you do it in a normal case with a CentOS Linux example.

    First reboot the system. When it reboots you'll come to the GRUB screen as shown in Figure 1. Move the arrow key so that you stay on this screen instead of proceeding all the way to a normal boot.


    Figure 1. GRUB screen after reboot

    Next, select the kernel that will boot with the arrow keys, and type E to edit the kernel line. You'll then see something like Figure 2:


    Figure 2. Ready to edit the kernel line

    Use the arrow key again to highlight the line that begins with kernel, and press E to edit the kernel parameters. When you get to the screen shown in Figure 3, simply append the number 1 to the arguments:


    Figure 3. Append the argument with the number 1

    Then press Enter, then B, and the kernel will boot up to single-user mode. Once here you can run the passwd command to change the password for user root:

    sh-3.00# passwd
    New UNIX password:
    Retype new UNIX password:
    passwd: all authentication tokens updated successfully

    Now you can reboot, and the machine will boot up with your new password.

    Trick 5: SSH back door

    Many times I'll be at a site where I need remote support from someone who is blocked on the outside by a company firewall. Few people realize that if you can get out to the world through a firewall, then it is relatively easy to open a hole so that the world can come into you.

    In its crudest form, this is called "poking a hole in the firewall." I'll call it an SSH back door. To use it, you'll need a machine on the Internet that you can use as an intermediary.

    In our example, we'll call our machine blackbox.example.com. The machine behind the company firewall is called ginger. Finally, the machine that technical support is on will be called tech. Figure 4 explains how this is set up.

    Figure 4. Poking a hole in the firewall

    Here's how to proceed:

    1. Check that what you're doing is allowed, but make sure you ask the right people. Most people will cringe that you're opening the firewall, but what they don't understand is that it is completely encrypted. Furthermore, someone would need to hack your outside machine before getting into your company. Instead, you may belong to the school of "ask-for-forgiveness-instead-of-permission." Either way, use your judgment and don't blame me if this doesn't go your way.
    2. SSH from ginger to blackbox.example.com with the -R flag. I'll assume that you're the root user on ginger and that tech will need the root user ID to help you with the system. With the -R flag, connections to port 2222 on blackbox will be forwarded to port 22 on ginger. This is how you set up an SSH tunnel. Note that only SSH traffic can come into ginger: You're not putting ginger out on the Internet naked.

      You can do this with the following syntax:

      ~# ssh -R 2222:localhost:22 thedude@blackbox.example.com

      Once you are into blackbox, you just need to stay logged in. I usually enter a command like:

      thedude@blackbox:~$ while [ 1 ]; do date; sleep 300; done

      to keep the machine busy. And minimize the window.

    3. Now instruct your friends at tech to SSH as thedude into blackbox without using any special SSH flags. You'll have to give them your password:

      root@tech:~# ssh thedude@blackbox.example.com

    4. Once tech is on the blackbox, they can SSH to ginger using the following command:

      thedude@blackbox:~$ ssh -p 2222 root@localhost

    5. Tech will then be prompted for a password. They should enter the root password of ginger.

    6. Now you and support from tech can work together and solve the problem. You may even want to use screen together! (See Trick 3.)

    Trick 6: Remote VNC session through an SSH tunnel

    VNC or virtual network computing has been around a long time. I typically find myself needing to use it when the remote server has some type of graphical program that is only available on that server.

    For example, suppose in Trick 5, ginger is a storage server. Many storage devices come with a GUI program to manage the storage controllers. Often these GUI management tools need a direct connection to the storage through a network that is at times kept in a private subnet. Therefore, the only way to access this GUI is to do it from ginger.

    You can try SSH'ing to ginger with the -X option and launch it that way, but many times the bandwidth required is too much and you'll get frustrated waiting. VNC is a much more network-friendly tool and is readily available for nearly all operating systems.

    Let's assume that the setup is the same as in Trick 5, but you want tech to be able to get VNC access instead of SSH. In this case, you'll do something similar but forward VNC ports instead. Here's what you do:

    1. Start a VNC server session on ginger. This is done by running something like:

      root@ginger:~# vncserver -geometry 1024x768 -depth 24 :99

      The options tell the VNC server to start up with a resolution of 1024x768 and a pixel depth of 24 bits per pixel. If you are using a really slow connection setting, 8 may be a better option. Using :99 specifies the port the VNC server will be accessible from. The VNC protocol starts at 5900 so specifying :99 means the server is accessible from port 5999.

      When you start the session, you'll be asked to specify a password. The user ID will be the same user that you launched the VNC server from. (In our case, this is root.)

    2. SSH from ginger to blackbox.example.com forwarding the port 5999 on blackbox to ginger. This is done from ginger by running the command:

      root@ginger:~# ssh -R 5999:localhost:5999 thedude@blackbox.example.com

      Once you run this command, you'll need to keep this SSH session open in order to keep the port forwarded to ginger. At this point if you were on blackbox, you could now access the VNC session on ginger by just running:

      thedude@blackbox:~$ vncviewer localhost:99

      That would forward the port through SSH to ginger. But we're interested in letting tech get VNC access to ginger. To accomplish this, you'll need another tunnel.

    3. From tech, you open a tunnel via SSH to forward your port 5999 to port 5999 on blackbox. This would be done by running:

      root@tech:~# ssh -L 5999:localhost:5999 thedude@blackbox.example.com

      This time the SSH flag we used was -L, which instead of pushing 5999 to blackbox, pulled from it. Once you are in on blackbox, you'll need to leave this session open. Now you're ready to VNC from tech!

    4. From tech, VNC to ginger by running the command:

      root@tech:~# vncviewer localhost:99

      Tech will now have a VNC session directly to ginger.

    While the effort might seem like a bit much to set up, it beats flying across the country to fix the storage arrays. Also, if you practice this a few times, it becomes quite easy.

    Let me add a trick to this trick: If tech was running the Windows operating system and didn't have a command-line SSH client, then tech can run PuTTY. PuTTY can be set to forward SSH ports via the options in the sidebar. If the port were 5902 instead of our example of 5999, then you would enter something like what is shown in Figure 5.


    Figure 5. Putty can forward SSH ports for tunneling

    If this were set up, then tech could VNC to localhost:2 just as if tech were running the Linux operating system.

    Trick 7: Checking your bandwidth

    Imagine this: Company A has a storage server named ginger and it is being NFS-mounted by a client node named beckham. Company A has decided they really want to get more bandwidth out of ginger because they have lots of nodes they want to have NFS mount ginger's shared filesystem.

    The most common and cheapest way to do this is to bond two Gigabit ethernet NICs together. This is cheapest because usually you have an extra on-board NIC and an extra port on your switch somewhere.

    So they do this. But now the question is: How much bandwidth do they really have?

    Gigabit Ethernet has a theoretical limit of 128MBps. Where does that number come from? Well,

    1Gb = 1024Mb; 1024Mb/8 = 128MB; "b" = "bits," "B" = "bytes"
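    The same arithmetic, done with shell arithmetic expansion. Note that the article uses the binary convention (1 Gb = 1024 Mb); strictly speaking, the line rate of Gigabit Ethernet is 1000 Mb/s, which works out to 125 MBps:

```shell
# binary convention used above: 1024 Mb / 8 bits-per-byte = 128 MB
echo $(( 1024 / 8 ))   # 128
# decimal line rate: 1000 Mb / 8 = 125 MB
echo $(( 1000 / 8 ))   # 125
```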

    But what is it that we actually see, and what is a good way to measure it? One tool I suggest is iperf. You can grab iperf like this:

    # wget http://dast.nlanr.net/Projects/Iperf2.0/iperf-2.0.2.tar.gz

    You'll need to install it on a shared filesystem that both ginger and beckham can see, or compile and install it on both nodes. I'll compile it in the home directory of the bob user, which is visible on both nodes:

    tar zxvf iperf*gz
    cd iperf-2.0.2
    ./configure --prefix=/home/bob/perf
    make
    make install

    On ginger, run:

    # /home/bob/perf/bin/iperf -s -f M

    This machine will act as the server and print out performance speeds in MBps.

    On the beckham node, run:

    # /home/bob/perf/bin/iperf -c ginger -P 4 -f M -w 256k -t 60

    You'll see output in both screens telling you what the speed is. On a normal server with a Gigabit Ethernet adapter, you will probably see about 112MBps. This is normal as bandwidth is lost in the TCP stack and physical cables. By connecting two servers back-to-back, each with two bonded Ethernet cards, I got about 220MBps.

    In reality, what you see with NFS on bonded networks is around 150-160MBps. Still, this gives you a good indication that your bandwidth is going to be about what you'd expect. If you see something much less, then you should check for a problem.

    I recently ran into a case in which the bonding driver was used to bond two NICs that used different drivers. The performance was extremely poor, leading to about 20MBps in bandwidth, less than they would have gotten had they not bonded the Ethernet cards together!

    Trick 8: Command-line scripting and utilities

    A Linux systems administrator becomes more efficient by using command-line scripting with authority. This includes crafting loops and knowing how to parse data using utilities like awk, grep, and sed. There are many cases where doing so takes fewer keystrokes and lessens the likelihood of user errors.

    For example, suppose you need to generate a new /etc/hosts file for a Linux cluster that you are about to install. The long way would be to add IP addresses in vi or your favorite text editor. However, it can be done by taking the already existing /etc/hosts file and appending the following to it by running this on the command line:

    # P=1; for i in $(seq -w 200); do echo "192.168.99.$P n$i"; P=$(expr $P + 1);
    done >>/etc/hosts

    Two hundred host names, n001 through n200, will then be created with IP addresses 192.168.99.1 through 192.168.99.200. Populating a file like this by hand runs the risk of inadvertently creating duplicate IP addresses or host names, so this is a good example of using the built-in command line to eliminate user errors. Please note that this is done in the bash shell, the default in most Linux distributions.
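    Since the whole point is avoiding duplicates, it is worth sanity-checking the generated lines before appending them to /etc/hosts. A sketch that builds the same 200 lines into a variable first:

```shell
# build the host lines in memory instead of appending straight to /etc/hosts
hosts=$(P=1; for i in $(seq -w 200); do echo "192.168.99.$P n$i"; P=$((P + 1)); done)

echo "$hosts" | wc -l                                # expect 200 lines
echo "$hosts" | awk '{print $1}' | sort | uniq -d    # empty: no duplicate IPs
echo "$hosts" | awk '{print $2}' | sort | uniq -d    # empty: no duplicate names
```

    Once the checks pass, a final echo "$hosts" >> /etc/hosts does the append.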

    As another example, let's suppose you want to check that the memory size is the same in each of the compute nodes in the Linux cluster. In most cases of this sort, having a distributed or parallel shell would be the best practice, but for the sake of illustration, here's a way to do this using SSH.

    Assume the SSH is set up to authenticate without a password. Then run:

    # for num in $(seq -w 200); do ssh n$num free -tm | grep Mem | awk '{print $2}';
    done | sort | uniq

    A command line like this looks pretty terse. (It can be worse if you put regular expressions in it.) Let's pick it apart and uncover the mystery.

    First you're doing a loop through 001-200. This padding with 0s in the front is done with the -w option to the seq command. Then you substitute the num variable to create the host you're going to SSH to. Once you have the target host, give the command to it. In this case, it's:

    free -tm | grep Mem | awk '{print $2}'

    That command says to: display the memory in megabytes (free -tm), pick out the line that starts with Mem (grep Mem), and print the second field of that line, the total memory (awk '{print $2}').

    This operation is performed on every node.

    Once you have performed the command on every node, the entire output of all 200 nodes is piped (|d) to the sort command so that all the memory values are sorted.

    Finally, you eliminate duplicates with the uniq command. This command will result in one of the following cases: a single value, meaning all 200 nodes report the same memory size, or more than one value, meaning at least one node's memory differs from the rest.

    This command isn't perfect. If you find that a value of memory is different than what you expect, you won't know on which node it was or how many nodes there were. Another command may need to be issued for that.
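    One cheap improvement: replacing uniq with uniq -c prefixes each distinct memory size with the number of nodes that reported it, so a lone odd value stands out immediately. A sketch with made-up values standing in for the loop's output:

```shell
# simulated output of the ssh loop: one memory total (in MB) per node
printf '3878\n3878\n1939\n' | sort | uniq -c
# uniq -c prints each distinct value with its count, e.g. "2 3878" and "1 1939"
```

    To learn which node is the odd one out, have the loop echo n$num in front of each value and drop the uniq.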

    What this trick does give you, though, is a fast way to check for something and quickly learn if something is wrong. This is its real value: speed to do a quick-and-dirty check.

    Trick 9: Spying on the console

    Some software prints error messages to the console that may not necessarily show up in your SSH session. Using the vcs devices lets you examine these. From within an SSH session, run the following command on a remote server: # cat /dev/vcs1. This will show you what is on the first console. You can also look at the other virtual terminals using 2, 3, etc. If a user is typing on the remote system, you'll be able to see what he typed.

    In most data farms, using a remote terminal server, KVM, or even Serial Over LAN is the best way to view this information; it also provides the additional benefit of out-of-band viewing capabilities. Using the vcs device provides a fast in-band method that may be able to save you some time from going to the machine room and looking at the console.

    Trick 10: Random system information collection

    In Trick 8, you saw an example of using the command line to get information about the total memory in the system. In this trick, I'll offer up a few other methods to collect important information from the system you may need to verify, troubleshoot, or give to remote support.

    First, let's gather information about the processor. This is easily done as follows:

    # cat /proc/cpuinfo

    This command gives you information on the processor speed, quantity, and model. Using grep in many cases can give you the desired value.

    A check that I do quite often is to ascertain the quantity of processors on the system. So, if I have purchased a dual processor quad-core server, I can run:

    # cat /proc/cpuinfo | grep processor | wc -l

    I would then expect to see 8 as the value. If I don't, I call up the vendor and tell them to send me another processor.
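    grep can also do the counting itself with -c, saving the extra wc process. A sketch against a sample file that imitates the /proc/cpuinfo layout (the real file varies per machine):

```shell
# imitate two "processor" stanzas from /proc/cpuinfo
printf 'processor\t: 0\nmodel name\t: X\nprocessor\t: 1\n' > /tmp/cpuinfo.sample
# -c counts matching lines; ^ anchors the match to the start of the line
grep -c '^processor' /tmp/cpuinfo.sample   # 2
```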

    Another piece of information I may require is disk information. This can be gotten with the df command. I usually add the -h flag so that I can see the output in gigabytes or megabytes. # df -h also shows how the disk was partitioned.

    And to end the list, here's a way to look at the firmware of your system-a method to get the BIOS level and the firmware on the NIC.

    To check the BIOS version, you can run the dmidecode command. Unfortunately, you can't easily grep for just the version, so piping the output to less is an efficient way to browse it. On my Lenovo T61 laptop, the output looks like this:

    # dmidecode | less
    ...
    BIOS Information
    Vendor: LENOVO
    Version: 7LET52WW (1.22 )
    Release Date: 08/27/2007
    ...

    This is much more efficient than rebooting your machine and looking at the POST output.

    To examine the driver and firmware versions of your Ethernet adapter, run ethtool:

    # ethtool -i eth0
    driver: e1000
    version: 7.3.20-k2-NAPI
    firmware-version: 0.3-0

    Conclusion

    There are thousands of tricks you can learn from someone who's an expert at the command line.

    I hope at least one of these tricks helped you learn something you didn't know. Essential tricks like these make you more efficient and add to your experience, but most importantly, tricks give you more free time to do more interesting things, like playing video games. And the best administrators are lazy because they don't like to work. They find the fastest way to do a task and finish it quickly so they can continue in their lazy pursuits.

    About the author

    Vallard Benincosa is a lazy Linux Certified IT professional working for the IBM Linux Clusters team. He lives in Portland, OR, with his wife and two kids.

    [May 09, 2021] Good Alternatives To Man Pages Every Linux User Needs To Know by Sk

    Images removed. See the original for full text.
    Notable quotes:
    "... you need Ruby 1.8.7+ installed on your machine for this to work. ..."
    | ostechnix.com

    1. Bropages

    The slogan of the Bropages utility is "just get to the point". It is true! The bropages are just like man pages, but they display examples only. As the slogan says, it skips all the text and gives you concise examples for command-line programs. Bropages can be easily installed using gem, so you need Ruby 1.8.7+ installed on your machine for this to work. To install Ruby in CentOS and Ubuntu, refer to the following guide. After installing gem, all you have to do to install bro pages is:

    $ gem install bropages
    ... The usage is incredibly easy! ...just type:
    $ bro find
    ... The good thing is you can upvote or downvote the examples.

    As you see in the above screenshot, we can upvote the first command by entering the following command:

    $ bro thanks
    You will be asked to enter your Email ID. Enter a valid Email to receive the verification code. And, copy/paste the verification code in the prompt and hit ENTER to submit your upvote. The highest upvoted examples will be shown at the top.
    Bropages.org requires an email address verification to do this
    What's your email address?
    [email protected]
    Great! We're sending an email to [email protected]
    Please enter the verification code: apHelH13ocC7OxTyB7Mo9p
    Great! You're verified! FYI, your email and code are stored locally in ~/.bro
    You just gave thanks to an entry for find!
    You rock!
    To upvote the second command, type:
    $ bro thanks 2
    Similarly, to downvote the first command, run:
    $ bro ...no

    ... ... ...

    2. Cheat

    Cheat is another useful alternative to man pages to learn Unix commands. It allows you to create and view interactive Linux/Unix command cheatsheets on the command-line. The recommended way to install Cheat is using the Pip package manager...

    ... ... ...

    Cheat usage is trivial.

    $ cheat find
    You will be presented with the list of available examples of find command: ... ... ...

    To view the help section, run:

    $ cheat -h
    
    For more details, see the project's GitHub repository.

    3. TLDR Pages

    TLDR is a collection of simplified and community-driven man pages. Unlike man pages, TLDR pages focuses only on practical examples. TLDR can be installed using npm . So, you need NodeJS installed on your machine for this to work.

    To install NodeJS in Linux, refer to the following guide.

    After installing npm, run the following command to install tldr:
    $ npm install -g tldr
    
    TLDR clients are also available for Android. Install any one of the apps below from Google Play Store and access the TLDR pages from your Android devices. There are many TLDR clients available. You can view them all here.

    3.1. Usage

    To display the documentation of any command, for example find, run:

    $ tldr find
    You will see the list of available examples of the find command. ...To view the list of all commands in the cache, run:
    $ tldr --list-all
    
    ...To update the local cache, run:
    $ tldr -u
    
    Or,
    $ tldr --update
    
    To display the help section, run:
    $ tldr -h
    
    For more details, refer to the TLDR GitHub page.

    4. TLDR++

    Tldr++ is yet another client to access the TLDR pages. Unlike the other Tldr clients, it is fully interactive .

    5. Tealdeer

    Tealdeer is a fast, un-official tldr client that allows you to access and display Linux commands cheatsheets in your Terminal. The developer of Tealdeer claims it is very fast compared to the official tldr client and other community-supported tldr clients.

    6. tldr.jsx web client

    The tldr.jsx is a reactive web client for tldr-pages. If you don't want to install anything on your system, you can try this client online from any Internet-enabled device like a desktop, laptop, tablet, or smart phone. All you need is a web browser. Open a web browser and navigate to the https://tldr.ostera.io/ page.

    7. Navi interactive commandline cheatsheet tool

    Navi is an interactive commandline cheatsheet tool written in Rust. Just like Bro pages, Cheat, and the Tldr tools, Navi provides a list of examples for a given command, skipping all other comprehensive text parts. For more details, check the following link.

    8. Manly

    I came across this utility recently and I thought that it would be a worthy addition to this list. Say hello to Manly, a complement to man pages. Manly is written in Python, so you can install it using the Pip package manager.

    Manly is slightly different from the utilities above. It will not display any examples, and you also need to mention the flags or options along with the commands. Say, for example, the following won't work:

    $ manly dpkg
    But, if you mention any flag/option of a command, you will get a small description of the given command and its options.
    $ manly dpkg -i -R
    
    To view the help section, run:
    $ manly --help
    And also take a look at the project's GitHub page.

    [May 08, 2021] How To Clone Your Linux Install With Clonezilla

    Notable quotes:
    "... Note: Clonezilla ISO is under 300 MiB in size. As a result, any flash drive with at least 512 MiB of space will work. ..."
    May 08, 2021 | www.addictivetips.com

    ... one of the most popular (and most reliable) ways to back up your data is with Clonezilla. This tool lets you clone your Linux install. With it, you can load a live USB and easily "clone" hard drives, operating systems, and more.

    Downloading Clonezilla

    Clonezilla is available only as a live operating system. There are multiple versions of the live disk. That being said, we recommend just downloading the ISO file. The stable version of the software is available at Clonezilla.org. On the download page, select your CPU architecture from the dropdown menu (32 bit or 64 bit).

    Then, click "filetype" and click ISO. After all of that, click the download button.

    Making The Live Disk

    Regardless of the operating system, the fastest and easiest way to make a Linux live-disk is with the Etcher USB imaging tool. Head over to this page to download it. Follow the instructions on the page, as it will explain the three-step process it takes to make a live disk.

    Note: Clonezilla ISO is under 300 MiB in size. As a result, any flash drive with at least 512 MiB of space will work.

    Device To Image Cloning

    Backing up a Linux installation directly to an image file with Clonezilla is a simple process. To start off, select the "device-image" option in the Clonezilla menu. On the next page, the software gives a whole lot of different ways to create the backup.

    The hard drive image can be saved to a Samba server, an SSH server, NFS, etc. If you're savvy with any of these, select it. If you're a beginner, connect a USB hard drive (or mount a second hard drive connected to the PC) and select the "local_dev" option.

    Selecting "local_dev" prompts Clonezilla to ask the user to select a hard drive as the destination from the hard drive menu. Look through the listing and select the hard drive you'd like to use. Additionally, use the menu selector to choose what directory on the drive the hard drive image will save to.

    With the storage location set up, the process can begin. Clonezilla asks to run the backup wizard. There are two options: "Beginner" and "Expert". Select "Beginner" to start the process.

    On the next page, tell Clonezilla how to save the hard drive. Select "savedisk" to copy the entire hard drive to one file. Select "saveparts" to backup the drive into separate partition images.

    Restoring Backup Images

    To restore an image, load Clonezilla and select the "device-image" option. Next, select "local_dev". Use the menu to select the hard drive previously used to save the hard drive image. In the directory browser, select the same options you used to create the image.

    Clonezilla - Downloads

    [May 08, 2021] LFCA- Learn User Account Management Part 5

    May 08, 2021 | www.tecmint.com

    The /etc/gshadow File

    This file contains encrypted, or 'shadowed', passwords for group accounts and, for security reasons, cannot be accessed by regular users. It's only readable by the root user and users with sudo privileges.

    $ sudo cat /etc/gshadow
    
    tecmint:!::
    

    From the far left, the file contains the following fields: the group name, the encrypted group password (an ! or * means no password is set), a comma-separated list of group administrators, and a comma-separated list of group members.
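    The fields of an entry can be pulled apart with awk, using the tecmint line shown above (a sketch):

```shell
entry='tecmint:!::'
# -F: splits on colons: $1 name, $2 password, $3 administrators, $4 members
echo "$entry" | awk -F: '{print "group=" $1 ", passwd=" $2}'   # group=tecmint, passwd=!
```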

    [May 05, 2021] Machines are expensive

    May 05, 2021 | www.unz.com

    Mancubus , says: May 5, 2021 at 12:54 pm GMT • 5.6 hours ago

    I keep happening on these mentions of manufacturing jobs succumbing to automation, and I can't think of where these people are getting their information.

    I work in manufacturing. Production manufacturing, in fact, involving hundreds, thousands, tens of thousands of parts produced per week. Automation has come a long way, but it also hasn't. A layman might marvel at the technologies while taking a tour of the factory, but upon closer inspection, the returns have greatly diminished over the last two decades. Advances have afforded greater precision and cheaper technologies, but the only reason China is a giant of manufacturing is that labor is cheap. They automate less than Western factories, not more, because humans cost next to nothing, but machines are expensive.

    [May 03, 2021] Do You Replace Your Server Or Go To The Cloud- The Answer May Surprise You

    May 03, 2021 | www.forbes.com

    Is your server or servers getting old? Have you pushed it to the end of its lifespan? Have you reached that stage where it's time to do something about it? Join the crowd. You're now at the decision point that so many other business people are finding themselves at this year. And the decision is this: do you replace that old server with a new server, or do you go to the cloud?

    Everyone's talking about the cloud nowadays so you've got to consider it, right? This could be a great new thing for your company! You've been told that the cloud enables companies like yours to be more flexible and save on their IT costs. It allows free and easy access to data for employees from wherever they are, using whatever devices they want to use. Maybe you've seen the recent survey by accounting software maker MYOB that found that small businesses that adopt cloud technologies enjoy higher revenues. Or perhaps you've stumbled on this analysis that said that small businesses are losing money as a result of ineffective IT management that could be much improved by the use of cloud based services. Or the poll of more than 1,200 small businesses by technology reseller CDW which discovered that "cloud users cite cost savings, increased efficiency and greater innovation as key benefits" and that "across all industries, storage and conferencing and collaboration are the top cloud services and applications."

    So it's time to chuck that old piece of junk and take your company to the cloud, right? Well just hold on.

    There's no question that if you're a startup or a very small company or a company that is virtual or whose employees are distributed around the world, a cloud based environment is the way to go. Or maybe you've got high internal IT costs or require more computing power. But maybe that's not you. Maybe your company sells pharmaceutical supplies, provides landscaping services, fixes roofs, ships industrial cleaning agents, manufactures packaging materials or distributes gaskets. You are not featured in Fast Company and you have not been invited to present at the next Disrupt conference. But you know you represent the very core of small business in America. I know this too. You are just like one of my company's 600 clients. And what are these companies doing this year when it comes time to replace their servers?

    These very smart owners and managers of small and medium sized businesses who have existing applications running on old servers are not going to the cloud. Instead, they've been buying new servers.

    Wait, buying new servers? What about the cloud?

    At no less than six of my clients in the past 90 days it was time to replace servers. They had all waited as long as possible, conserving cash in a slow economy, hoping to get the most out of their existing machines. Sound familiar? But the servers were showing their age, applications were running slower, and as the companies grew, their old machines were reaching their limits. Things were getting to a breaking point, and all six of my clients decided it was time for a change. So they all moved to the cloud, right?

    Nope. None of them did. None of them chose the cloud. Why? Because all six of these small business owners and managers came to the same conclusion: it was just too expensive. Sorry media. Sorry tech world. But this is the truth. This is what's happening in the world of established companies.

    Consider the options. All of my clients evaluated cloud-based hosting services from Amazon, Microsoft and Rackspace. They also interviewed a handful of cloud-based IT management firms who promised to move their existing applications (Office, accounting, CRM, databases) to their servers and manage them offsite. All of these popular options are viable and make sense, as evidenced by their growth in recent years. But when all the smoke cleared, all of these services came in at about the same price: approximately $100 per month per user. This is what it costs for an existing company to move their existing infrastructure to a cloud based infrastructure in 2013. We've got the proposals and we've done the analysis.

    You're going through the same thought process, so now put yourself in their shoes. Suppose you have maybe 20 people in your company who need computer access. Suppose you are satisfied with your existing applications and don't want to go through the agony and enormous expense of migrating to a new cloud based application. Suppose you don't employ a full time IT guy, but have a service contract with a reliable local IT firm.

    Now do the numbers: $100 per month x 20 users is $2,000 per month, or $24,000 PER YEAR, for a cloud based service. How many servers can you buy for that amount? Imagine putting that proposal out to an experienced, battle-hardened, profit-generating small business owner who, like all the smart business owners I know, looks hard at the return on investment before parting with cash.
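    The back-of-the-envelope comparison works out as follows (the rates are the ones quoted in the article; "half a day a month" of IT support is assumed to mean roughly 4 hours):

```python
# Annual cost of the cloud option at the article's quoted rate.
users = 20
cloud_per_user_per_month = 100          # USD, per the proposals described above
cloud_annual = users * cloud_per_user_per_month * 12
print(cloud_annual)                     # 24000

# Annual cost of on-premises IT support: a contractor at $150/hour
# for roughly half a day (assumed: 4 hours) per month.
it_hours_per_month = 4
it_hourly_rate = 150
onprem_support_annual = it_hours_per_month * it_hourly_rate * 12
print(onprem_support_annual)            # 7200
```

    Even before adding the one-time cost of new server hardware, the on-premises support bill comes in far below the recurring cloud subscription, which is the gap driving the decision described below.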

    For all six of these clients the decision was a no-brainer: they all bought new servers and had their IT guy install them. But can't the cloud bring down their IT costs? All six of these guys use their IT guy for maybe half a day a month to support their servers (sure he could be doing more, but small business owners always try to get away with the minimum). His rate is $150 per hour. That's still way below using a cloud service.

    No one could make the numbers work. No one could justify the return on investment. The cloud, at least for established businesses who don't want to change their existing applications, is still just too expensive.

    Please know that these companies are, in fact, using some cloud-based applications. They all have virtual private networks set up, and their people access their systems over the cloud using remote desktop technologies. Like the respondents in the above surveys, they subscribe to online backup services, share files on Dropbox and Microsoft's file storage, make their calls over Skype, take advantage of Gmail and use collaboration tools like Google Docs or Box. Many of their employees have iPhones and Droids and like to use mobile apps which rely on cloud data to make them more productive. These applications didn't exist a few years ago, and their growth and benefits cannot be denied.

    Paul-Henri Ferrand, President of Dell North America, doesn't see this trend continuing. "Many smaller but growing businesses are looking and/or moving to the cloud," he told me. "There will be some (small businesses) that will continue to buy hardware but I see the trend is clearly toward the cloud. As more business applications become more available for the cloud, the more likely the trend will continue."

    He's right. Over the next few years the costs will come down. Your beloved internal application will become out of date and your only option will be to migrate to a cloud based application (hopefully provided by the same vendor to ease the transition). Your technology partners will help you and the process will be easier, and less expensive than today. But for now, you may find it makes more sense to just buy a new server. It's OK. You're not alone.

    Besides Forbes, Gene Marks writes weekly for The New York Times and Inc.com .

    Related on Forbes:

    [Apr 29, 2021] Linux tips for using GNU Screen - Opensource.com

    Apr 29, 2021 | opensource.com

    Using GNU Screen

    GNU Screen's basic usage is simple. Launch it with the screen command, and you're placed into the zeroth window in a Screen session. You may hardly notice anything's changed until you decide you need a new prompt.

    When one terminal window is occupied with an activity (for instance, you've launched a text editor like Vim or Jove, or you're processing video or audio, or running a batch job), you can just open a new one. To open a new window, press Ctrl+A, release, and then press c. This creates a new window on top of your existing window.

    You'll know you're in a new window because your terminal appears to be clear of anything aside from its default prompt. Your other terminal still exists, of course; it's just hiding behind the new one. To traverse through your open windows, press Ctrl+A, release, and then n for next or p for previous. With just two windows open, n and p functionally do the same thing, but you can always open more windows (Ctrl+A then c) and walk through them.

    Split screen

    GNU Screen's default behavior is more like a mobile device screen than a desktop: you can only see one window at a time. If you're using GNU Screen because you love to multitask, being able to focus on only one window may seem like a step backward. Luckily, GNU Screen lets you split your terminal into windows within windows.

    To create a horizontal split, press Ctrl+A and then S (a capital S). This places one window above another, just like window panes. The split space is, however, left empty until you tell it what to display. So after creating a split, you can move into the split pane with Ctrl+A and then Tab. Once there, use Ctrl+A then n to navigate through all your available windows until the content you want displayed is in the split pane.

    You can also create vertical splits with Ctrl+A then | (that's a pipe character, or the Shift option of the \ key on most keyboards).
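    Preferences like these can be made permanent in Screen's configuration file. A minimal ~/.screenrc sketch follows; the directives are standard GNU Screen options, but check screen(1) for your version, and note the status-line format string is only illustrative:

```
# ~/.screenrc -- a minimal configuration sketch
startup_message off                       # skip the copyright splash on launch
defscrollback 10000                       # keep more scrollback per window
hardstatus alwayslastline "%-w%n %t%+w"   # show the window list on the bottom line
```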

    [Apr 22, 2021] TLDR pages- Simplified Alternative To Linux Man Pages That You'll Love

    Images removed. See the original for full text.
    Apr 22, 2021 | fossbytes.com

    The GitHub page of TLDR pages for Linux/Unix describes it as a collection of simplified and community-driven man pages. It's an effort to make the experience of using man pages simpler with the help of practical examples. For those who don't know, TLDR is taken from common internet slang Too Long Didn't Read .

    In case you wish to compare, let's take the example of the tar command. The usual man page extends over 1,000 lines. tar is an archiving utility that's often combined with a compression method like bzip2 or gzip. Take a look at its man page:

    On the other hand, TLDR pages lets you simply take a glance at the command and see how it works. Tar's TLDR page simply looks like this and comes with some handy examples of the most common tasks you can complete with this utility:

    Let's take another example and show you what TLDR pages has to offer when it comes to apt:

    Having shown you how TLDR works and makes your life easier, let's tell you how to install it on your Linux-based operating system.

    How to install and use TLDR pages on Linux?

    The most mature TLDR client is based on Node.js, and you can install it easily using the NPM package manager. In case Node and NPM are not available on your system, run the following commands:

    sudo apt-get install nodejs
    
    sudo apt-get install npm
    

    In case you're using an OS other than Debian, Ubuntu, or Ubuntu's derivatives, you can use yum, dnf, or pacman package manager as per your convenience.

    [Apr 22, 2021] Alternatives of man in Linux command line

    Images removed. See the original for full text.
    Jan 01, 2020 | www.chuanjin.me

    When we need help on the Linux command line, man is usually the first friend we check for more information. But it became my second line of support after I met other alternatives, e.g. tldr , cheat and eg .

    tldr

    tldr stands for too long; didn't read . It is a simplified, community-driven collection of man pages. Maybe we forget the arguments to a command, or are just not patient enough to read the long man document; here tldr comes in, providing concise information with examples. I even contributed a couple of lines of code myself to help a little bit with the project on GitHub. It is very easy to install: npm install -g tldr , and there are many clients available to choose from for accessing the tldr pages, e.g. install the Python client with pip install tldr .

    To display help information, run tldr -h or tldr tldr .

    Take curl as an example

    tldr++

    tldr++ is an interactive tldr client written in Go; I just stole the gif from its official site.

    cheat

    Similarly, cheat allows you to create and view interactive cheatsheets on the command line. It was designed to help remind *nix system administrators of options for commands that they use frequently, but not frequently enough to remember. It is written in Go, so just download the binary and add it to your PATH.

    eg

    eg provides useful examples with explanations on the command line.

    So I consult tldr , cheat or eg before I ask man and Google.

    [Apr 22, 2021] 5 modern alternatives to essential Linux command-line tools by Ricardo Gerardi

    While some of these tools do provide additional functionality, sticking to the classic tools often makes more sense. So user beware.
    Jun 25, 2020 | opensource.com

    In our daily use of Linux/Unix systems, we use many command-line tools to complete our work and to understand and manage our systems -- tools like du to monitor disk utilization and top to show system resources. Some of these tools have existed for a long time. For example, top was first released in 1984, while du 's first release dates to 1971.

    Over the years, these tools have been modernized and ported to different systems, but, in general, they still follow their original idea, look, and feel.

    These are great tools and essential to many system administrators' workflows. However, in recent years, the open source community has developed alternative tools that offer additional benefits. Some are just eye candy, but others greatly improve usability, making them a great choice to use on modern systems. These include the following five alternatives to the standard Linux command-line tools.

    1. ncdu as a replacement for du

    The NCurses Disk Usage ( ncdu ) tool provides similar results to du but in a curses-based, interactive interface that focuses on the directories that consume most of your disk space. ncdu spends some time analyzing the disk, then displays the results sorted by your most used directories or files, like this:

    ncdu 1.14.2 ~ Use the arrow keys to navigate, press ? for help
    --- /home/rgerardi ------------------------------------------------------------
    96.7 GiB [##########] /libvirt
    33.9 GiB [### ] /.crc
    ...
    Total disk usage: 159.4 GiB Apparent size: 280.8 GiB Items: 561540

    Navigate to each entry by using the arrow keys. If you press Enter on a directory entry, ncdu displays the contents of that directory:

    --- /home/rgerardi/libvirt ----------------------------------------------------
    /..
    91.3 GiB [##########] /images
    5.3 GiB [ ] /media

    You can use that to drill down into the directories and find which files are consuming the most disk space. Return to the previous directory by using the Left arrow key. By default, you can delete files with ncdu by pressing the d key, and it asks for confirmation before deleting a file. If you want to disable this behavior to prevent accidents, use the -r option for read-only access: ncdu -r .

    ncdu is available for many platforms and Linux distributions. For example, you can use dnf to install it on Fedora directly from the official repositories:

    $ sudo dnf install ncdu

    You can find more information about this tool on the ncdu web page .

    2. htop as a replacement for top

    htop is an interactive process viewer similar to top but that provides a nicer user experience out of the box. By default, htop displays the same metrics as top in a pleasant and colorful display.

    By default, htop looks like this:

    htop_small.png

    (Ricardo Gerardi, CC BY-SA 4.0 )

    In contrast to default top :

    top_small.png

    (Ricardo Gerardi, CC BY-SA 4.0 )

    In addition, htop provides system overview information at the top and a command bar at the bottom to trigger commands using the function keys, and you can customize it by pressing F2 to enter the setup screen. In setup, you can change its colors, add or remove metrics, or change display options for the overview bar.

    While you can configure recent versions of top to achieve similar results, htop provides saner default configurations, which makes it a nice and easy-to-use process viewer.

    To learn more about this project, check the htop home page .

    3. tldr as a replacement for man

    The tldr command-line tool displays simplified command utilization information, mostly including examples. It works as a client for the community tldr pages project .

    This tool is not a replacement for man . The man pages are still the canonical and complete source of information for many tools. However, in some cases, man is too much. Sometimes you don't need all that information about a command; you're just trying to remember the basic options. For example, the man page for the curl command has almost 3,000 lines. In contrast, the tldr for curl is 40 lines long and looks like this:

    $ tldr curl

    # curl
    Transfers data from or to a server.
    Supports most protocols, including HTTP, FTP, and POP3.
    More information: <https://curl.haxx.se>.

    - Download the contents of an URL to a file:

    curl http://example.com -o filename

    - Download a file, saving the output under the filename indicated by the URL:

    curl -O http://example.com/filename

    - Download a file, following [L]ocation redirects, and automatically [C]ontinuing (resuming) a previous file transfer:

    curl -O -L -C - http://example.com/filename

    - Send form-encoded data (POST request of type `application/x-www-form-urlencoded`):

    curl -d 'name=bob' http://example.com/form

    - Send a request with an extra header, using a custom HTTP method:

    curl -H 'X-My-Header: 123' -X PUT http://example.com

    - Send data in JSON format, specifying the appropriate content-type header:

    curl -d '{"name":"bob"}' -H 'Content-Type: application/json' http://example.com/users/1234

    ... TRUNCATED OUTPUT

    TLDR stands for "too long; didn't read," which is internet slang for a summary of long text. The name is appropriate for this tool because man pages, while useful, are sometimes just too long.

    In Fedora, the tldr client was written in Python. You can install it using dnf . For other client options, consult the tldr pages project .

    In general, the tldr tool requires access to the internet to consult the tldr pages. The Python client in Fedora allows you to download and cache these pages for offline access.

    For more information on tldr , you can use tldr tldr .

    4. jq as a replacement for sed/grep for JSON

    jq is a command-line JSON processor. It's like sed or grep but specifically designed to deal with JSON data. If you're a developer or system administrator who uses JSON in your daily tasks, this is an essential tool in your toolbox.

    The main benefit of jq over generic text-processing tools like grep and sed is that it understands the JSON data structure, allowing you to create complex queries with a single expression.

    To illustrate, imagine you're trying to find the name of the containers in this JSON file:

    {
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": {
        "labels": {
          "app": "myapp"
        },
        "name": "myapp",
        "namespace": "project1"
      },
      "spec": {
        "containers": [
          {
            "command": [
              "sleep",
              "3000"
            ],
            "image": "busybox",
            "imagePullPolicy": "IfNotPresent",
            "name": "busybox"
          },
          {
            "name": "nginx",
            "image": "nginx",
            "resources": {},
            "imagePullPolicy": "IfNotPresent"
          }
        ],
        "restartPolicy": "Never"
      }
    }

    If you try to grep directly for name , this is the result:

    $ grep name k8s-pod.json
    "name": "myapp",
    "namespace": "project1"
    "name": "busybox"
    "name": "nginx",

    grep returned all lines that contain the word name . You can add a few more options to grep to restrict it and, with some regular-expression manipulation, you can find the names of the containers. To obtain the result you want with jq , use an expression that simulates navigating down the data structure, like this:

    $ jq '.spec.containers[].name' k8s-pod.json
    "busybox"
    "nginx"

    This command gives you the name of both containers. If you're looking for only the name of the second container, add the array element index to the expression:

    $ jq '.spec.containers[1].name' k8s-pod.json
    "nginx"

    Because jq is aware of the data structure, it provides the same results even if the file format changes slightly. grep and sed may provide different results with small changes to the format.

    jq has many features, and covering them all would require another article. For more information, consult the jq project page , the man pages, or tldr jq .
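    For comparison, the same structure-aware queries can be expressed with Python's standard json module. This is only a sketch of why knowing the structure beats pattern matching, not a jq substitute; the Pod manifest from above is inlined and abbreviated to the fields the query touches:

```python
import json

# Abbreviated version of the k8s-pod.json manifest shown above.
doc = json.loads("""
{
  "spec": {
    "containers": [
      {"name": "busybox", "image": "busybox"},
      {"name": "nginx", "image": "nginx"}
    ]
  }
}
""")

# Equivalent of: jq '.spec.containers[].name'
names = [c["name"] for c in doc["spec"]["containers"]]
print(names)     # ['busybox', 'nginx']

# Equivalent of: jq '.spec.containers[1].name'
print(names[1])  # nginx
```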

    5. fd as a replacement for find

    fd is a simple and fast alternative to the find command. It does not aim to replace the complete functionality find provides; instead, it provides some sane defaults that help a lot in certain scenarios.

    For example, when searching for source-code files in a directory that contains a Git repository, fd automatically excludes hidden files and directories, including the .git directory, as well as ignoring patterns from the .gitignore file. In general, it provides faster searches with more relevant results on the first try.

    By default, fd runs a case-insensitive pattern search in the current directory with colored output. The same search using find requires you to provide additional command-line parameters. For example, to search all markdown files ( .md or .MD ) in the current directory, the find command is this:

    $ find . -iname "*.md"

    Here is the same search with fd :

    $ fd .md

    In some cases, fd requires additional options; for example, if you want to include hidden files and directories, you must use the option -H , while this is not required in find .

    fd is available for many Linux distributions. Install it in Fedora using the standard repositories:

    $ sudo dnf install fd-find

    For more information, consult the fd GitHub repository .

    ... ... ...

    S Arun-Kumar on 25 Jun 2020

    I use "meld" in place of "diff"

    Ricardo Gerardi on 25 Jun 2020

    Thanks! I never used "meld". I'll give it a try.

    Keith Peters on 25 Jun 2020

    exa for ls

    Ricardo Gerardi on 25 Jun 2020

    Thanks. I'll give it a try.

    brick on 27 Jun 2020

    Another (fancy looking) alternative for ls is lsd.

    Miguel Perez on 25 Jun 2020

    Bat instead of cat, ripgrep instead of grep, httpie instead of curl, bashtop instead of htop, autojump instead of cd...

    Drto on 25 Jun 2020

    ack instead of grep for files. Million times faster.

    Gordon Harris on 25 Jun 2020

    The yq command line utility is useful too. It's just like jq, except for yaml files and has the ability to convert yaml into json.

    Matt howard on 26 Jun 2020

    Glances is a great top replacement too

    Paul M on 26 Jun 2020

    Try "mtr" instead of traceroute
    Try "hping2" instead of ping
    Try "pigz" instead of gzip

    jmtd on 28 Jun 2020

    I've never used ncdu, but I recommend "duc" as a du replacement https://github.com/zevv/duc/

    You run a separate "duc index" command to capture disk space usage in a database file and then can explore the data very quickly with "duc ui" ncurses ui. There's also GUI and web front-ends that give you a nice graphical pie chart interface.

    In my experience the index stage is faster than plain du. You can choose to re-index only certain folders if you want to update some data quickly without rescanning everything.

    wurn on 29 Jun 2020

    Imho, jq uses a syntax that's ok for simple queries but quickly becomes horrible when you need more complex queries. Pjy is a sensible replacement for jq, having an (improved) python syntax which is familiar to many people and much more readable: https://github.com/hydrargyrum/pjy
    Jack Orenstein on 29 Jun 2020

    Also along the lines of command-line alternatives, take a look at marcel, which is a modern shell: https://marceltheshell.org . The basic idea is to pipe Python values instead of strings, between commands. It integrates smoothly with host commands (and, presumably, the alternatives discussed here), and also integrates remote access and database access.

    Ricardo Fraile on 05 Jul 2020

    "tuptime" instead of "uptime".
    It tracks the history of the system, not only the current one.

    The Cube on 07 Jul 2020

    One downside of all of this is that there are even more things to remember. I learned find, diff, cat, vi (and ed), grep and a few others starting in 1976 on 6th edition. They have been enhanced some, over the years (for which I use man when I need to remember), and learned top and other things as I needed them, but things I did back then still work great now. KISS is still a "thing". Especially in scripts one is going to use on a wide variety of distributions or for a long time. These kind of tweaks are fun and all, but add complexity and reduce one's inter-system mobility. (And don't get me started on systemd 8P).

    [Apr 22, 2021] replace(1) - Linux manual page

    Apr 22, 2021 | www.man7.org
    REPLACE(1)               MariaDB Database System              REPLACE(1)
    
    NAME top
           replace - a string-replacement utility
    
    SYNOPSIS top
           replace arguments
    
    DESCRIPTION top
           The replace utility program changes strings in place in files or
           on the standard input.
    
           Invoke replace in one of the following ways:
    
               shell> replace from to [from to] ... -- file_name [file_name] ...
               shell> replace from to [from to] ... < file_name
    
           from represents a string to look for and to represents its
           replacement. There can be one or more pairs of strings.
    
           Use the -- option to indicate where the string-replacement list
           ends and the file names begin. In this case, any file named on
           the command line is modified in place, so you may want to make a
           copy of the original before converting it.  replace prints a
           message indicating which of the input files it actually modifies.
    
           If the -- option is not given, replace reads the standard input
           and writes to the standard output.
    
           replace uses a finite state machine to match longer strings
           first. It can be used to swap strings. For example, the following
           command swaps a and b in the given files, file1 and file2:
    
               shell> replace a b b a -- file1 file2 ...
    
           The replace program is used by msql2mysql. See msql2mysql(1).
    
           replace supports the following options.
    
           •   -?, -I
    
               Display a help message and exit.
    
           •   -#debug_options
    
               Enable debugging.
    
           •   -s
    
                Silent mode. Print less information about what the program does.
    
           •   -v
    
               Verbose mode. Print more information about what the program
               does.
    
           •   -V
    
               Display version information and exit.
    
    COPYRIGHT top
           Copyright 2007-2008 MySQL AB, 2008-2010 Sun Microsystems, Inc.,
           2010-2015 MariaDB Foundation
    
           This documentation is free software; you can redistribute it
           and/or modify it only under the terms of the GNU General Public
           License as published by the Free Software Foundation; version 2
           of the License.
    
           This documentation is distributed in the hope that it will be
           useful, but WITHOUT ANY WARRANTY; without even the implied
           warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
           See the GNU General Public License for more details.
    
           You should have received a copy of the GNU General Public License
           along with the program; if not, write to the Free Software
           Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
           02110-1335 USA or see http://www.gnu.org/licenses/.
    
    SEE ALSO top
           For more information, please refer to the MariaDB Knowledge Base,
           available online at https://mariadb.com/kb/
    
    AUTHOR top
           MariaDB Foundation (http://www.mariadb.org/).
    
    COLOPHON top
           This page is part of the MariaDB (MariaDB database server)
           project.  Information about the project can be found at 
           ⟨http://mariadb.org/⟩.  If you have a bug report for this manual
           page, see ⟨https://mariadb.com/kb/en/mariadb/reporting-bugs/⟩.
           This page was obtained from the project's upstream Git repository
           ⟨https://github.com/MariaDB/server⟩ on 2021-04-01.  (At that
           time, the date of the most recent commit that was found in the
           repository was 2020-11-03.)  If you discover any rendering
           problems in this HTML version of the page, or you believe there
           is a better or more up-to-date source for the page, or you have
           corrections or improvements to the information in this COLOPHON
            (which is not part of the original manual page), send a mail
            to man-pages@man7.org
    

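    The swap example above works because replace applies all from/to pairs in a single pass, matching longer strings first, so an already-substituted b is never rewritten back to a. That behavior can be sketched in Python; this illustrates the semantics only, not replace's actual finite-state machine:

```python
import re

def replace_pairs(text, pairs):
    """Apply several from->to replacements in one simultaneous pass,
    trying longer 'from' strings first, so replacements never cascade."""
    # Sort patterns longest-first so e.g. 'abc' wins over 'ab'.
    froms = sorted(pairs, key=len, reverse=True)
    pattern = re.compile("|".join(re.escape(f) for f in froms))
    return pattern.sub(lambda m: pairs[m.group(0)], text)

# Equivalent in spirit to: shell> replace a b b a -- file1
print(replace_pairs("abba", {"a": "b", "b": "a"}))  # baab
```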
    [Apr 19, 2021] How To Display Linux Commands Cheatsheets Using Eg

    Apr 19, 2021 | ostechnix.com

    Eg is a free, open source program written in the Python language, and the code is freely available on GitHub. For those wondering, eg comes from the Latin phrase "exempli gratia" that literally means "for the sake of example" in English. Exempli gratia is known by its abbreviation, e.g., in English-speaking countries.

    Install Eg in Linux

    Eg can be installed using Pip package manager. If Pip is not available in your system, install it as described in the below link.

    After installing Pip, run the following command to install eg on your Linux system:

    $ pip install eg
    
    Display Linux commands cheatsheets using Eg

    Let us start by displaying the help section of eg program. To do so, run eg without any options:

    $ eg
    

    Sample output:

    usage: eg [-h] [-v] [-f CONFIG_FILE] [-e] [--examples-dir EXAMPLES_DIR]
              [-c CUSTOM_DIR] [-p PAGER_CMD] [-l] [--color] [-s] [--no-color]
              [program]
    
    eg provides examples of common command usage.
    
    positional arguments:
      program               The program for which to display examples.
    
    optional arguments:
      -h, --help            show this help message and exit
      -v, --version         Display version information about eg
      -f CONFIG_FILE, --config-file CONFIG_FILE
                            Path to the .egrc file, if it is not in the default
                            location.
      -e, --edit            Edit the custom examples for the given command. If
                            editor-cmd is not set in your .egrc and $VISUAL and
                            $EDITOR are not set, prints a message and does
                            nothing.
      --examples-dir EXAMPLES_DIR
                            The location to the examples/ dir that ships with eg
      -c CUSTOM_DIR, --custom-dir CUSTOM_DIR
                            Path to a directory containing user-defined examples.
      -p PAGER_CMD, --pager-cmd PAGER_CMD
                            String literal that will be invoked to page output.
      -l, --list            Show all the programs with eg entries.
      --color               Colorize output.
      -s, --squeeze         Show fewer blank lines in output.
      --no-color            Do not colorize output.
    

    You can also bring up the help section with this command:

    $ eg --help
    

    Now let us see how to view example commands usage.

    To display cheatsheet of a Linux command, for example grep , run:

    $ eg grep
    

    Sample output:

    grep
     print all lines containing foo in input.txt
     grep "foo" input.txt
     print all lines matching the regex "^start" in input.txt
     grep -e "^start" input.txt
     print all lines containing bar by recursively searching a directory
     grep -r "bar" directory
     print all lines containing bar ignoring case
     grep -i "bAr" input.txt
     print 3 lines of context before and after each line matching "foo"
     grep -C 3 "foo" input.txt
     Basic Usage
     Search each line in input_file for a match against pattern and print
     matching lines:
     grep "<pattern>" <input_file>
    [...]
    
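    As a quick sketch (assuming eg was installed with pip as shown above), you can use the -l flag from the help text to browse which programs have cheatsheets:

```shell
# List the programs that have eg cheatsheet entries (first 10 shown).
# Guarded so the snippet degrades gracefully if eg is not on the PATH.
if command -v eg >/dev/null 2>&1; then
    eg -l | head -n 10
else
    echo "eg is not installed; install it with: pip install eg"
fi
```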

    [Apr 19, 2021] IBM returns to sales growth after a year of declines on cloud strength

    They are probably mistaken about the one-trillion-dollar market opportunity.
    Apr 19, 2021 | finance.yahoo.com

    The 109-year-old firm is preparing to split itself into two public companies, with the namesake firm narrowing its focus on the so-called hybrid cloud, where it sees a $1 trillion market opportunity.

    [Apr 19, 2021] How to Install and Use locate Command in Linux

    Apr 19, 2021 | www.linuxshelltips.com

    Before using the locate command, you should check whether it is installed on your machine. The locate command comes with the GNU findutils or GNU mlocate packages. You can simply run the following command to check if locate is installed.

    $ which locate
    
    Check locate Command

    If locate is not installed by default, you can run one of the following commands to install it.

    $ sudo yum install mlocate     [On CentOS/RHEL/Fedora]
    $ sudo apt install mlocate     [On Debian/Ubuntu/Mint]
    

    Once the installation is complete, run the following command to build the locate database. This pre-built index is why the locate command finds files so much faster than scanning the filesystem.

    $ sudo updatedb
    

    The mlocate db file is located at /var/lib/mlocate/mlocate.db .

    $ ls -l /var/lib/mlocate/mlocate.db
    
    mlocate database

    A good place to start getting to know the locate command is its man page.

    $ man locate
    
    locate command manpage
    How to Use locate Command to Find Files Faster in Linux

    To search for a file, simply pass its name as an argument to the locate command.

    $ locate .bashrc
    
    Locate Files in Linux

    If you wish to see how many items matched, instead of printing their locations, pass the -c flag.

    $ sudo locate -c .bashrc
    
    Find File Count Occurrence

    By default, the locate command is case sensitive. You can make the search case insensitive by using the -i flag.

    $ sudo locate -i file1.sh
    
    Find Files Case Sensitive in Linux

    You can limit the number of search results by using the -n flag.

    $ sudo locate -n 3 .bashrc
    
    Limit Search Results

    When you delete a file and the mlocate database has not been updated since, locate will still print the deleted file in its output. You have two options: either update the mlocate database periodically, or use the -e flag, which skips files that no longer exist.

    $ locate -i -e file1.sh
    
    Skip Deleted Files

    You can check the statistics of the mlocate database by running the following command.

    $ locate -S
    
    mlocate database stats

    If your database file is in a different location, use the -d flag followed by the database path and the filename to search for.

    $ locate -d [ DB PATH ] [ FILENAME ]
    

    Sometimes you may encounter errors; you can suppress the error messages by running the command with the -q flag.

    $ locate -q [ FILENAME ]
    
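    The flags above can also be combined. A minimal sketch (assuming the mlocate database has already been built with sudo updatedb) that counts case-insensitive matches while skipping deleted files:

```shell
# Count .bashrc matches case-insensitively (-i), ignoring entries whose
# files no longer exist (-e). Guarded for systems without locate or a DB.
if command -v locate >/dev/null 2>&1; then
    locate -i -e -c '.bashrc' || echo "database not built yet; run: sudo updatedb"
else
    echo "locate is not installed"
fi
```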

    That's it for this article. We have shown you all the basic operations you can perform with the locate command. It will be a handy tool when working on the command line.

    [Apr 13, 2021] West Virginia will now give you $12,000 to move to its state and work remotely

    Apr 13, 2021 | finance.yahoo.com


    Brian Sozzi, Editor-at-Large, Mon, April 12, 2021, 12:54 PM

    West Virginia is opening up its arms -- and importantly its wallet -- to lure in those likely to be working from home for some time after the COVID-19 pandemic.

    The state announced on Monday it would give people $12,000 cash with no strings attached to move to its confines. Also included is one year of free recreation at the state's various public lands, which it values at $2,500. Once all the particulars of the plan are added up, West Virginia says the total value to a person is $20,000.

    The initiative is being made possible after a $25 million donation from Intuit's executive chairman (and former long-time CEO) Brad D. Smith and his wife Alys.

    "I have the opportunity to spend a lot of time speaking with my peers in the industry in Silicon Valley as well as across the world. Most are looking at a hybrid model, but many of them -- if not all of them -- have expanded the percentage of their workforce that can work full-time remotely," Smith told Yahoo Finance Live about the plan.

    Smith earned his bachelor's degree in business administration from Marshall University in West Virginia.

    3D rendering of the flag of West Virginia on satin texture. Credit: Getty

    Added Smith, "I think we have seen the pendulum swing all the way to the right when everyone had to come to the office and then all the way to left when everyone was forced to shelter in place. And somewhere in the middle, we'll all be experimenting in the next year or so to see where is that sweet-spot. But I do know employees now have gotten a taste for what it's like to be able to live in a new area with less commute time, less access to outdoor amenities like West Virginia has to offer. I think that's absolutely going to become part of the consideration set in this war for talent."

    That war for talent post-pandemic could be about to heat up within corporate America, and perhaps spur states to follow West Virginia's lead.

    The likes of Facebook, Twitter and Apple are among those big companies poised to have hybrid workforces for years after the pandemic. That has some employees considering moves to lower cost states and those that offer better overall qualities of life.

    A recent study out of Gartner found that 82% of respondents intend to permit remote working some of the time as employees return to the workplace. Meanwhile, 47% plan to let employees work remotely permanently.

    Brian Sozzi is an editor-at-large and anchor at Yahoo Finance . Follow Sozzi on Twitter @BrianSozzi and on LinkedIn .

    [Apr 10, 2021] How to Use the xargs Command in Linux

    Apr 10, 2021 | www.maketecheasier.com

    ... ... ...

    Cut/Copy Operations

    Xargs, along with the find command, can also be used to copy or move a set of files from one directory to another. For example, to move all the text files that are more than 10 minutes old from the current directory to the parent directory, use the following command:

    find . -name "*.txt" -mmin +10 | xargs -n1 -I '{}' mv '{}' ../
    

    The -I command line option is used by the xargs command to define a replace-string which gets replaced with names read from the output of the find command. Here the replace-string is {} , but it could be anything. For example, you can use "file" as a replace-string.

    find . -name "*.txt" -mmin 10 | xargs -n1 -I 'file' mv 'file' ./practice
    
    How to Tell xargs When to Quit

    Suppose you want to list the details of all the .txt files present in the current directory. As already explained, it can be easily done using the following command:

    find . -name "*.txt" | xargs ls -l
    

    But there is one problem: the xargs command will execute the ls command even if the find command fails to find any .txt file. The following is an example.

    So you can see that there are no .txt files in the directory, but that didn't stop xargs from executing the ls command. To change this behavior, use the -r command line option:

    find . -name "*.txt" | xargs -r ls -l
    
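    A minimal sketch you can run in an empty scratch directory to confirm the -r behavior (the directory comes from mktemp and is purely illustrative):

```shell
# In an empty temp directory, find matches nothing; with -r, xargs never
# invokes ls, and the pipeline still exits successfully.
cd "$(mktemp -d)"
find . -name "*.txt" | xargs -r ls -l    # prints nothing
echo "exit status: $?"                   # prints: exit status: 0
```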

    [Apr 01, 2021] How to use range and sequence expression in bash by Dan Nanni

    Mar 29, 2021 | www.xmodulo.com

    When you are writing a bash script, there are situations where you need to generate a sequence of numbers or strings. One common use of such sequence data is loop iteration. When you iterate over a range of numbers, the range may be defined in many different ways (e.g., [0, 1, 2, ..., 99, 100], [50, 55, 60, ..., 75, 80], [10, 9, 8, ..., 1, 0], etc.). Loop iteration is not limited to ranges of numbers; you may need to iterate over a sequence of strings with particular patterns (e.g., incrementing filenames: img001.jpg, img002.jpg, img003.jpg). For this type of loop control, you need to be able to generate a sequence of numbers and/or strings flexibly.

    While you can use a dedicated tool like seq to generate a range of numbers, it is really not necessary to add such an external dependency to your bash script when bash itself provides a powerful built-in range feature called brace expansion. In this tutorial, let's find out how to generate a sequence of data in bash using brace expansion, and look at some useful brace expansion examples.

    Brace Expansion

    Bash's built-in range function is realized by so-called brace expansion . In a nutshell, brace expansion allows you to generate a sequence of strings based on supplied string and numeric input data. The syntax of brace expansion is the following.

    {<string1>,<string2>,...,<stringN>}
    {<start-number>..<end-number>}
    {<start-number>..<end-number>..<increment>}
    <prefix-string>{......}
    {......}<suffix-string>
    <prefix-string>{......}<suffix-string>
    

    All these sequence expressions are iterable, meaning you can use them for while/for loops . In the rest of the tutorial, let's go over each of these expressions to clarify their use cases.

    Use Case #1: List a Sequence of Strings

    The first use case of brace expansion is a simple string list, which is a comma-separated list of string literals within braces. Here we are not generating a sequence of data, but simply listing a pre-defined sequence of strings.

    {<string1>,<string2>,...,<stringN>}
    

    You can use this brace expansion to iterate over the string list as follows.

    for fruit in {apple,orange,lemon}; do
        echo $fruit
    done
    
    apple
    orange
    lemon
    

    This expression is also useful to invoke a particular command multiple times with different parameters.

    For example, you can create multiple subdirectories in one shot with:

    $ mkdir -p /home/xmodulo/users/{dan,john,alex,michael,emma}
    

    To create multiple empty files:

    $ touch /tmp/{1,2,3,4}.log
    
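    A related idiom worth knowing: because the shell expands the braces before running the command, an empty first element lets you back up a file in one short command. A small runnable sketch (the file name is just an example):

```shell
# cp config.txt{,.bak} expands to: cp config.txt config.txt.bak
cd "$(mktemp -d)"
echo "hello" > config.txt
cp config.txt{,.bak}
cat config.txt.bak    # prints: hello
```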
    Use Case #2: Define a Range of Numbers

    The most common use case of brace expansion is to define a range of numbers for loop iteration. For that, you can use the following expressions, where you specify the start/end of the range, as well as an optional increment value.

    {<start-number>..<end-number>}
    {<start-number>..<end-number>..<increment>}
    

    To define a sequence of integers between 10 and 20:

    echo {10..20}
    10 11 12 13 14 15 16 17 18 19 20
    

    You can easily integrate this brace expansion in a loop:

    for num in {10..20}; do
        echo $num
    done
    

    To generate a sequence of numbers with an increment of 2 between 0 and 20:

    echo {0..20..2}
    0 2 4 6 8 10 12 14 16 18 20
    

    You can generate a sequence of decrementing numbers as well:

    echo {20..10}
    20 19 18 17 16 15 14 13 12 11 10
    
    echo {20..10..-2}
    20 18 16 14 12 10
    

    You can also pad the numbers with leading zeros, in case you need all numbers in the sequence to use the same number of digits. For example:

    echo {00..20..2}
    00 02 04 06 08 10 12 14 16 18 20
    
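    One caveat worth noting: brace expansion happens before variable expansion, so the range bounds must be literal. A sketch showing the failure mode, with a C-style for loop as the usual workaround:

```shell
n=5
echo {1..$n}     # brace expansion does NOT see $n; prints the literal {1..5}

# For variable bounds, use a C-style for loop (or the seq command) instead.
for ((i = 1; i <= n; i++)); do
    printf '%s ' "$i"
done             # prints: 1 2 3 4 5
echo
```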
    Use Case #3: Generate a Sequence of Characters

    Brace expansion can be used to generate not just a sequence of numbers, but also a sequence of characters.

    {<start-character>..<end-character>}
    

    To generate a sequence of alphabet characters between 'd' and 'p':

    echo {d..p}
    d e f g h i j k l m n o p
    

    You can generate a sequence of uppercase letters as well.

    for char1 in {A..B}; do
        for char2 in {A..B}; do
            echo "${char1}${char2}"
        done
    done
    
    AA
    AB
    BA
    BB
    
    Use Case #4: Generate a Sequence of Strings with Prefix/Suffix

    It's possible to add a prefix and/or a suffix to a given brace expression as follows.

    <prefix-string>{......}
    {......}<suffix-string>
    <prefix-string>{......}<suffix-string>
    

    Using this feature, you can easily generate a list of sequentially numbered filenames:

    # create incrementing filenames
    for filename in img_{00..5}.jpg; do
        echo $filename
    done
    
    img_00.jpg
    img_01.jpg
    img_02.jpg
    img_03.jpg
    img_04.jpg
    img_05.jpg
    
    Use Case #5: Combine Multiple Brace Expansions

    Finally, it's possible to combine multiple brace expansions, in which case the combined expressions will generate all possible combinations of sequence data produced by each expression.

    For example, we have the following script that prints all possible combinations of two-character alphabet strings using double-loop iteration.

    for char1 in {A..Z}; do
        for char2 in {A..Z}; do
            echo "${char1}${char2}"
        done
    done
    

    By combining two brace expansions, the following single loop can produce the same output as above.

    for str in {A..Z}{A..Z}; do
        echo $str
    done
    
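    Combined expansions are not limited to letter ranges; mixing a letter range with a number range works the same way. For example:

```shell
echo {a..c}{1..2}    # prints: a1 a2 b1 b2 c1 c2
```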
    Conclusion

    In this tutorial, I described bash's built-in mechanism called brace expansion, which allows you to easily generate a sequence of arbitrary strings in a single command line. Brace expansion is useful not just in bash scripts, but also in your command-line environment (e.g., when you need to run the same command multiple times with different arguments). If you know any useful brace expansion tips and use cases, feel free to share them in the comments.

    If you find this tutorial helpful, I recommend you check out the series of bash shell scripting tutorials provided by Xmodulo.

    [Mar 30, 2021] How to catch and handle errors in bash

    Mar 30, 2021 | www.xmodulo.com

    How to catch and handle errors in bash

    Last updated on March 28, 2021 by Dan Nanni

    In an ideal world, things always work as expected, but you know that's hardly the case. The same goes in the world of bash scripting. Writing a robust, bug-free bash script is always challenging even for a seasoned system administrator. Even if you write a perfect bash script, the script may still go awry due to external factors such as invalid input or network problems. While you cannot prevent all errors in your bash script, at least you should try to handle possible error conditions in a more predictable and controlled fashion.

    That is easier said than done, especially since error handling in bash is notoriously difficult. The bash shell does not have any fancy exception handling mechanism like try/catch constructs. Some bash errors may be silently ignored but may have consequences down the line. The bash shell does not even have a proper debugger.

    In this tutorial, I'll introduce basic tips to catch and handle errors in bash . Although the presented error handling techniques are not as fancy as those available in other programming languages, hopefully by adopting the practice, you may be able to handle potential bash errors more gracefully.

    Bash Error Handling Tip #1: Check the Exit Status

    As the first line of defense, it is always recommended to check the exit status of a command, as a non-zero exit status typically indicates some type of error. For example:

    if ! some_command; then
        echo "some_command returned an error"
    fi
    

    Another (more compact) way to trigger error handling based on an exit status is to use an OR list:

    <command1> || <command2>
    

    With this OR statement, <command2> is executed if and only if <command1> returns a non-zero exit status. So you can replace <command2> with your own error handling routine. For example:

    error_exit()
    {
        echo "Error: $1"
        exit 1
    }
    
    run-some-bad-command || error_exit "Some error occurred"
    

    Bash provides a built-in variable called $? , which tells you the exit status of the last executed command. Note that when a bash function is called, $? reads the exit status of the last command called inside the function. Since some non-zero exit codes have special meanings , you can handle them selectively. For example:

    # run some command
    status=$?
    if [ $status -eq 1 ]; then
        echo "General error"
    elif [ $status -eq 2 ]; then
        echo "Misuse of shell builtins"
    elif [ $status -eq 126 ]; then
        echo "Command invoked cannot execute"
    elif [ $status -eq 128 ]; then
        echo "Invalid argument to exit"
    fi
    
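    The if/elif chain above can be written more idiomatically as a case statement. A minimal runnable sketch (the real command is replaced here by a stand-in function so the example is self-contained):

```shell
fake_command() { return 126; }    # stand-in for a real command

fake_command
status=$?
case $status in
    0)   echo "Success" ;;
    1)   echo "General error" ;;
    2)   echo "Misuse of shell builtins" ;;
    126) echo "Command invoked cannot execute" ;;
    *)   echo "Exited with status $status" ;;
esac                              # prints: Command invoked cannot execute
```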
    Bash Error Handling Tip #2: Exit on Errors in Bash

    When you encounter an error in a bash script, by default bash prints an error message to stderr but continues executing the rest of the script. In fact, you see the same behavior in a terminal window: even if you type a wrong command by accident, it will not kill your terminal. You will just see a "command not found" error, but your terminal/bash session will remain.

    This default shell behavior may not be desirable for some bash script. For example, if your script contains a critical code block where no error is allowed, you want your script to exit immediately upon encountering any error inside that code block. To activate this "exit-on-error" behavior in bash, you can use the set command as follows.

    set -e
    #
    # some critical code block where no error is allowed
    #
    set +e
    

    Once called with -e option, the set command causes the bash shell to exit immediately if any subsequent command exits with a non-zero status (caused by an error condition). The +e option turns the shell back to the default mode. set -e is equivalent to set -o errexit . Likewise, set +e is a shorthand command for set +o errexit .

    However, one special error condition not captured by set -e is when an error occurs somewhere inside a pipeline of commands. This is because a pipeline returns a non-zero status only if the last command in the pipeline fails. Any error produced by previous command(s) in the pipeline is not visible outside the pipeline, and so does not kill a bash script. For example:

    set -e
    true | false | true   
    echo "This will be printed"  # "false" inside the pipeline not detected
    

    If you want any failure in pipelines to also exit a bash script, you need to add -o pipefail option. For example:

    set -o pipefail -e
    true | false | true          # "false" inside the pipeline detected correctly
    echo "This will not be printed"
    

    Therefore, to protect a critical code block against any type of command errors or pipeline errors, use the following pair of set commands.

    set -o pipefail -e
    #
    # some critical code block where no error or pipeline error is allowed
    #
    set +o pipefail +e
    
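    If you only need to inspect a pipeline occasionally, an alternative to toggling pipefail globally is bash's PIPESTATUS array, which records the exit status of every stage of the most recent pipeline:

```shell
true | false | true
echo "stage statuses: ${PIPESTATUS[@]}"    # prints: stage statuses: 0 1 0
```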
    Bash Error Handling Tip #3: Try and Catch Statements in Bash

    Although the set command allows you to terminate a bash script upon any error that you deem critical, this mechanism is often not sufficient in more complex bash scripts where different types of errors could happen.

    To detect and handle different types of errors/exceptions more flexibly, you will need try/catch statements, which, however, are missing in bash. We can at least mimic their behavior, as shown in this trycatch.sh script:

    function try()
    {
        [[ $- = *e* ]]; SAVED_OPT_E=$?
        set +e
    }
    
    function throw()
    {
        exit $1
    }
    
    function catch()
    {
        export exception_code=$?
        # restore errexit only if it was enabled before try() turned it off
        (( SAVED_OPT_E == 0 )) && set -e
        return $exception_code
    }
    

    Here we define several custom bash functions to mimic the semantics of try and catch statements. The throw() function raises a custom (non-zero) exception. We need set +e inside try(), so that the non-zero exit status returned by throw() will not terminate the script. Inside catch(), we store the value of the exception raised by throw() in the bash variable exception_code, so that we can handle the exception in a user-defined fashion.

    Perhaps an example bash script will make it clear how trycatch.sh works. See the example below that utilizes trycatch.sh .

    # Include trycatch.sh as a library
    source ./trycatch.sh
    
    # Define custom exception types
    export ERR_BAD=100
    export ERR_WORSE=101
    export ERR_CRITICAL=102
    
    try
    (
        echo "Start of the try block"
    
        # When a command returns a non-zero, a custom exception is raised.
        run-command || throw $ERR_BAD
        run-command2 || throw $ERR_WORSE
        run-command3 || throw $ERR_CRITICAL
    
        # This statement is not reached if there is any exception raised
        # inside the try block.
        echo "End of the try block"
    )
    catch || {
        case $exception_code in
            $ERR_BAD)
                echo "This error is bad"
            ;;
            $ERR_WORSE)
                echo "This error is worse"
            ;;
            $ERR_CRITICAL)
                echo "This error is critical"
            ;;
            *)
                echo "Unknown error: $exception_code"
                throw $exception_code    # re-throw an unhandled exception
            ;;
        esac
    }
    

    In this example script, we define three types of custom exceptions. We can raise any of them depending on a given error condition. The OR list <command> || throw <exception> invokes the throw() function with the chosen <exception> value as a parameter if <command> returns a non-zero exit status; if <command> completes successfully, throw() is never called. Once an exception is raised, it can be handled accordingly inside the subsequent catch block. As you can see, this provides a more flexible way of handling different types of error conditions.


    Granted, this is not a full-blown try/catch construct. One limitation of this approach is that the try block is executed in a sub-shell. As you may know, any variables defined in a sub-shell are not visible to its parent shell. Also, you cannot modify variables defined in the parent shell from inside the try block, as the parent shell and the sub-shell have separate variable scopes.
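    A minimal sketch of this sub-shell limitation (the variable names are arbitrary):

```shell
#!/bin/bash
# Assignments inside ( ... ) happen in a sub-shell and vanish when it
# exits, so a try block written this way cannot pass results back.

result="before"
(
    result="inside try"   # visible only within the sub-shell
)
echo "$result"            # still prints "before"
```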

    Conclusion

    In this bash tutorial, I presented basic error handling tips that may come in handy when you want to write a more robust bash script. As expected, these tips are not as sophisticated as the error handling constructs available in other programming languages. If the bash script you are writing requires more advanced error handling than this, perhaps bash is not the right language for your task. You probably want to turn to another language such as Python.

    Let me conclude the tutorial by mentioning one essential tool that every shell script writer should be familiar with. ShellCheck is a static analysis tool for shell scripts. It can detect and point out syntax errors, bad coding practice and possible semantic issues in a shell script with much clarity. Definitely check it out if you haven't tried it.

    If you find this tutorial helpful, I recommend you check out the series of bash shell scripting tutorials provided by Xmodulo.

    [Mar 28, 2021] The Fake News about Fake Agile

    Adherents to an obscure cult behave exactly the way described. Funny that the author is one of the cultists, a true believer in the Agile methodology.
    Aug 23, 2019 | www.iconagility.com

    All politics about fake news aside (PLEASE!), I've heard a growing number of reports, sighs and cries about Fake Agile. It's frustrating when people just don't get it, especially when they think they do. We can point fingers and vilify those who think differently -- or we can try to understand why this "us vs them" mindset is splintering the Agile community....

    [Mar 24, 2021] How To Edit Multiple Files Using Vim Editor by Senthil Kumar

    Images removed. Use the original for full text.
    Mar 24, 2021 | ostechnix.com

    March 17, 2018

    ...Now, let us edit these two files at a time using Vim editor. To do so, run:

    $ vim file1.txt file2.txt
    

    Vim will display the contents of the files in order: the first file's contents are shown first, then the second file's, and so on.

    Edit Multiple Files Using Vim Editor

    Switch between files

    To move to the next file, type:

    :n
    
    Switch between files in Vim editor

    To go back to previous file, type:

    :N
    

    Here, N is capital (Type SHIFT+n).

    Start editing the files the way you normally do with Vim. Press 'i' to switch to insert mode and modify the contents as per your liking. Once done, press ESC to go back to normal mode.

    Vim won't allow you to move to the next file if there are any unsaved changes. To save the changes in the current file, type:

    ZZ
    

    Please note that it is double capital letters ZZ (SHIFT+zz).

    To abandon the changes and move to the previous file, type:

    :N!
    

    To view the files which are being currently edited, type:

    :buffers
    
    View files in buffer in Vim

    You will see the list of loaded files at the bottom.

    List of files in buffer in Vim

    To switch to the next file, type :buffer followed by the buffer number. For example, to switch to the first file, type:

    :buffer 1
    

    Or, just do:

    :b 1
    
    Switch to next file in Vim

    Just remember these commands to easily switch between buffers:

    :bf            # Go to first file.
    :bl            # Go to last file
    :bn            # Go to next file.
    :bp            # Go to previous file.
    :b number  # Go to n'th file (E.g :b 2)
    :bw            # Close current file.
    
    Opening additional files for editing

    We are currently editing two files namely file1.txt, file2.txt. You might want to open another file named file3.txt for editing. What will you do? It's easy! Just type :e followed by the file name like below.

    :e file3.txt
    
    Open additional files for editing in Vim

    Now you can edit file3.txt.

    To view how many files are being edited currently, type:

    :buffers
    
    View all files in buffers in Vim

    Please note that you cannot switch between files opened with :e using :n or :N . To switch to another file, type :buffer followed by the file's buffer number.

    Copying contents of one file into another

    You know how to open and edit multiple files at the same time. Sometimes, you might want to copy the contents of one file into another. It is possible too. Switch to a file of your choice. For example, let us say you want to copy the contents of file1.txt into file2.txt.

    To do so, first switch to file1.txt:

    :buffer 1
    

    Move the cursor to the line you want to copy and type yy to yank (copy) it. Then, move to file2.txt:

    :buffer 2
    

    Place the cursor where you want to paste the copied line from file1.txt and type p . For example, to paste the copied line between line2 and line3, put the cursor before line3 and type p .

    Sample output:

    line1
    line2
    ostechnix
    line3
    line4
    line5
    
    Copying contents of one file into another file using Vim

    To save the changes made in the current file, type:

    ZZ
    

    Again, please note that this is double capital ZZ (SHIFT+zz).

    To save the changes in all files and exit vim editor, type:

    :wq
    

    Similarly, you can copy any line from any file to other files.

    Copying entire file contents into another

    We know how to copy a single line. What about the entire file contents? That's also possible. Let us say, you want to copy the entire contents of file1.txt into file2.txt.

    To do so, open the file2.txt first:

    $ vim file2.txt
    

    If the files are already loaded, you can switch to file2.txt by typing:

    :buffer 2
    

    Move the cursor to the place where you want to insert the contents of file1.txt. I want to insert them after line5 in file2.txt, so I moved the cursor to line 5. Then, type the following command and hit the ENTER key:

    :r file1.txt
    
    Copying entire contents of a file into another file

    Here, r means read .

    Now you will see that the contents of file1.txt are pasted after line5 in file2.txt.

    line1
    line2
    line3
    line4
    line5
    ostechnix
    open source
    technology
    linux
    unix
    
    Copying entire file contents into another file using Vim

    To save the changes in the current file, type:

    ZZ
    

    To save all changes in all loaded files and exit vim editor, type:

    :wq
    
    Method 2

    Another method to open multiple files at once is to use either the -o or -O flag.

    To open multiple files in horizontal windows, run:

    $ vim -o file1.txt file2.txt
    
    Open multiple files at once in Vim

    To switch between windows, press CTRL-w w (i.e., press CTRL+w and then press w again).

    To open multiple files in vertical windows, run:

    $ vim -O file1.txt file2.txt file3.txt
    
    Open multiple files in vertical windows in Vim

    To switch between windows, press CTRL-w w (i.e., press CTRL+w and then press w again).

    Everything else is the same as described in method 1.

    For example, to list currently loaded files, run:

    :buffers
    

    To switch between files:

    :buffer 1
    

    To open an additional file, type:

    :e file3.txt
    

    To copy entire contents of a file into another:

    :r file1.txt
    

    The only difference in method 2 is that once you save the changes in the current file using ZZ , the file will automatically close itself, so you need to close the files one by one. In method 1, by contrast, typing :wq saves the changes in all files and closes them all at once.

    For more details, refer man pages.

    $ man vim
    

    [Mar 24, 2021] How To Comment Out Multiple Lines At Once In Vim Editor by Senthil Kumar

    Images removed. Use the original for full text.

    Nov 22, 2017 | ostechnix.com

    ...enter the following command:

    :1,3s/^/#
    

    In this case, we are commenting out the lines from 1 to 3. Check the following screenshot. The lines from 1 to 3 have been commented out.

    Comment out multiple lines at once in vim

    To uncomment those lines, run:

    :1,3s/^#/
    
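    The same range substitutions can be tried non-interactively with sed, which uses an identical address/regex syntax (a GNU sed sketch; the file name and contents are just an illustration):

```shell
# Comment out lines 1-3 of a file, then uncomment them again,
# mirroring vim's :1,3s/^/# and :1,3s/^#/ substitutions.
printf 'one\ntwo\nthree\nfour\n' > example.txt
sed -i '1,3s/^/#/' example.txt    # prepend '#' to lines 1-3
sed -i '1,3s/^#//' example.txt    # strip the leading '#' again
cat example.txt                   # back to the original four lines
```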

    Once you're done, unset the line numbers.

    :set nonumber
    

    Let us go ahead and see the third method.

    Method 3:

    This method is similar to the previous one, but slightly different.

    Open the file in vim editor.

    $ vim ostechnix.txt
    

    Set line numbers:

    :set number
    

    Then, type the following command to comment out the lines.

    :1,4s/^/# /
    

    The above command will comment out lines from 1 to 4.

    Comment out multiple lines in vim

    Finally, unset the line numbers by typing the following.

    :set nonumber
    
    Method 4:

    This method was suggested by one of our readers, Mr. Anand Nande, in the comment section below.

    Open file in vim editor:

    $ vim ostechnix.txt
    

    Press Ctrl+V to enter into 'Visual block' mode and press DOWN arrow to select all the lines in your file.

    Select lines in Vim

    Then, press Shift+i to enter INSERT mode (this will place your cursor on the first line). Then press Shift+3 (i.e., type '#'), which will insert '#' before your first line.

    Insert '#' before the first line in Vim

    Finally, press ESC key, and you can now see all lines are commented out.

    Comment out multiple lines using vim

    Method 5:

    This method was suggested by one of our Twitter followers and friends, Mr. Tim Chase.

    We can even target lines to comment out by regex. Open the file in vim editor.

    $ vim ostechnix.txt
    

    And type the following:

    :g/Linux/s/^/# /
    

    The above command will comment out all lines that contain the word "Linux".

    Comment out all lines that contain a specific word in Vim
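    Outside of vim, the same pattern-addressed edit can be sketched with sed, whose /pattern/ address works like vim's :g/pattern/ (a GNU sed sketch; the file contents are illustrative):

```shell
printf 'Linux rocks\nother line\nLinux again\n' > example.txt
sed -i '/Linux/s/^/# /' example.txt   # comment every line containing "Linux"
cat example.txt
```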

    And, that's all for now. I hope this helps. If you know any other method easier than the ones given here, please let me know in the comment section below. I will check and add them to the guide. Also, have a look at the comment section below. One of our visitors has shared a good guide about Vim usage.

    NUNY3 November 23, 2017 - 8:46 pm

    If you want to be productive in Vim you need to talk to Vim in the *language* Vim uses. Every solution that leaves "normal
    mode" is most probably not the most effective.

    METHOD 1
    Using "normal mode". For example comment the first three lines with: I#<Esc>j.j.
    This is strange isn't it, but:
    I –> capital I jumps to the beginning of the line and enters insert mode
    # –> type the actual comment character
    <Esc> –> exit insert mode and get back to normal mode
    j –> move down a line
    . –> repeat last command. Last command was: I#
    j –> move down a line
    . –> repeat last command. Last command was: I#
    You get it: after you execute the command once, you just repeat the j. combination for the lines you would like to comment out.

    METHOD 2
    There is a "command line mode" command to execute a "normal mode" command.
    Example: :%norm I#
    Explanation:
    % –> whole file (you can also use a range if you like: 1,3 to do it only for the first three lines).
    norm –> (short for normal)
    I –> the normal command I, that is, jump to the first character in the line and enter insert mode
    # –> insert the actual character
    You get it: for each line in the range you select, the normal mode command is executed

    METHOD 3
    This is the method I love the most, because it follows the "I am talking to Vim in Vim's language" principle.
    It uses an extension (plug-in, add-in): the https://github.com/tomtom/tcomment_vim extension.
    How to use it? In NORMAL MODE of course, to be efficient. Use: gc+action.

    Examples:
    gcap –> comment a paragraph
    gcj –> comment current line and the line below
    gc3j –> comment current line and 3 lines below
    gcgg –> comment current line and all lines up to and including the first line in the file
    gcG –> comment current line and all lines down to and including the last line in the file
    gcc –> shortcut for commenting the current line

    You name it, it has all sorts of combinations. Remember, you have to talk to Vim to use it properly and efficiently.
    Yes, sure, it also works with "visual mode": press V, select the lines you would like to mark and execute: gc

    You see, if I want to impress a friend I use a gc+action combination. Because I always get: What? How did you do it? My answer: it is Vim, you need to talk to the text editor, not use the dumb mouse and repeated actions.

    NOTE: Please stop telling people to use the DOWN arrow key. Start using the h, j, k and l keys to move around. These keys are on the typist's home row. The DOWN, UP, LEFT and RIGHT keys are a bad habit of beginners. They are very inefficient: you have to move your hand from the home row to the arrow keys.

    VERY IMPORTANT: Do you want a one million dollar tip for using Vim? Start using Vim like it was designed to be used: normal mode. Use its language: verbs, nouns, adverbs and adjectives. Interested in what I am talking about? You should be, if you are serious about using Vim. Read this one million dollar answer on the forum: https://stackoverflow.com/questions/1218390/what-is-your-most-productive-shortcut-with-vim/1220118#1220118

    MDEBUSK November 26, 2019 - 7:07 am

    I've tried the "boxes" utility with vim and it can be a lot of fun.

    https://boxes.thomasjensen.com/

    SÉRGIO ARAÚJO December 17, 2020 - 4:43 am

    Method 6
    :%norm I#

    [Mar 24, 2021] How To Setup Backup Server Using Rsnapshot by Senthil Kumar

    Apr 13, 2017 | ostechnix.com

    ... ... ...

    Now, edit rsnapshot config file using command:

    $ sudo nano /etc/rsnapshot.conf
    

    The default configuration should work just fine. All you need to do is define the backup directories and backup intervals.

    First, let us set up the Root backup directory, i.e., choose the directory where we want to store the filesystem backups. In this case, I will store the backups in the /rsnapbackup/ directory.


    # All snapshots will be stored under this root directory.
    #
    snapshot_root   /rsnapbackup/
    

    Again, you should use the TAB key between the snapshot_root element and your backup directory.

    Scroll down a bit, and make sure the following lines (marked in bold) are uncommented:

    [...]
    #################################
    # EXTERNAL PROGRAM DEPENDENCIES #
    #################################
    
    # LINUX USERS: Be sure to uncomment "cmd_cp". This gives you extra features.
    # EVERYONE ELSE: Leave "cmd_cp" commented out for compatibility.
    #
    # See the README file or the man page for more details.
    #
    cmd_cp /usr/bin/cp
    
    # uncomment this to use the rm program instead of the built-in perl routine.
    #
    cmd_rm /usr/bin/rm
    
    # rsync must be enabled for anything to work. This is the only command that
    # must be enabled.
    #
    cmd_rsync /usr/bin/rsync
    
    # Uncomment this to enable remote ssh backups over rsync.
    #
    cmd_ssh /usr/bin/ssh
    
    # Comment this out to disable syslog support.
    #
    cmd_logger /usr/bin/logger
    
    # Uncomment this to specify the path to "du" for disk usage checks.
    # If you have an older version of "du", you may also want to check the
    # "du_args" parameter below.
    #
    cmd_du /usr/bin/du
    
    [...]
    

    Next, we need to define the backup intervals:

    #########################################
    # BACKUP LEVELS / INTERVALS #
    # Must be unique and in ascending order #
    # e.g. alpha, beta, gamma, etc. #
    #########################################
    
    retain alpha 6
    retain beta 7
    retain gamma 4
    #retain delta 3
    

    Here, retain alpha 6 means that every time rsnapshot alpha runs, it will make a new snapshot, rotate the old ones, and retain the most recent six (alpha.0 - alpha.5). You can define your own intervals. For more details, refer to the rsnapshot man pages.


    Next, we need to define the backup directories. Find the following directives in your rsnapshot config file and set the backup directory locations.

    ###############################
    ### BACKUP POINTS / SCRIPTS ###
    ###############################
    
    # LOCALHOST
    backup /root/ostechnix/ server/
    

    Here, I am going to back up the contents of the /root/ostechnix/ directory and save them in the /rsnapbackup/server/ directory. Please note that I didn't specify the full path (/rsnapbackup/server/) in the above configuration, because we already defined the Root backup directory earlier.

    Likewise, define your remote client system's backup location.

    # REMOTEHOST
    backup [email protected]:/home/sk/test/ client/
    

    Here, I am going to back up the contents of my remote client system's /home/sk/test/ directory and save them in the /rsnapbackup/client/ directory on my Backup server. Again, please note that I didn't specify the full path (/rsnapbackup/client/), because we already defined the Root backup directory.

    Save and close the /etc/rsnapshot.conf file.

    Once you have made all your changes, run the following command to verify that the config file is syntactically valid.

    rsnapshot configtest
    

    If all is well, you will see the following output.

    Syntax OK
    
    Testing backups

    Run the following command to test backups.

    rsnapshot alpha
    

    This will take a few minutes depending upon the size of the backups.

    Verifying backups

    Check whether the backups are really stored in the Root backup directory on the Backup server:

    ls /rsnapbackup/
    

    You will see the following output:

    alpha.0
    

    Check the alpha.0 directory:

    ls /rsnapbackup/alpha.0/
    

    You will see that two directories were automatically created: one for the local backup (server), and another for the remote systems (client).

    client/ server/
    

    Check the client system backups:

    ls /rsnapbackup/alpha.0/client
    

    Check the server system (local system) backups:

    ls /rsnapbackup/alpha.0/server
    
    Automate backups

    You don't want to run the rsnapshot command manually every time you need a backup. Define a cron job and automate the backup job.

    sudo vi /etc/cron.d/rsnapshot
    

    Add the following lines:

    0 */4 * * *     /usr/bin/rsnapshot alpha
    50 23 * * *     /usr/bin/rsnapshot beta
    00 22 1 * *     /usr/bin/rsnapshot delta
    

    The first line indicates that six alpha snapshots will be taken each day (at hours 0, 4, 8, 12, 16, and 20), beta snapshots every night at 11:50pm, and delta snapshots at 10pm on the first day of each month. You can adjust the timing as you wish. Save and close the file.

    Done! Rsnapshot will automatically take backups at the times defined in the cron job. For more details, refer to the man pages.

    man rsnapshot
    

    That's all for now. Hope this helps. I will be back soon with another interesting guide. If you find this guide useful, please share it on your social and professional networks and support OSTechNix.

    Cheers!

    [Mar 24, 2021] How To Backup Your Entire Linux System Using Rsync by Senthil Kumar

    Apr 25, 2017 | ostechnix.com

    ... ... ..

    To backup the entire system, all you have to do is open your Terminal and run the following command as root user:

    $ sudo rsync -aAXv / --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} /mnt
    

    This command will back up the entire root ( / ) directory, excluding the /dev, /proc, /sys, /tmp, /run, /mnt, /media, and /lost+found directories, and save the data in the /mnt folder.
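    Note that the --exclude={...} list relies on bash brace expansion, and the inner quotes keep the shell from glob-expanding the patterns against real files before rsync sees them. You can preview exactly what rsync receives with echo (the shortened pattern list below is just an illustration):

```shell
# Brace expansion turns one --exclude={...} into several --exclude= options;
# the quoted patterns are passed to rsync literally, not expanded by the shell.
echo --exclude={"/dev/*","/proc/*","/sys/*"}
# prints: --exclude=/dev/* --exclude=/proc/* --exclude=/sys/*
```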

    [Mar 24, 2021] CYA - System Snapshot And Restore Utility For Linux by Senthil Kumar

    Jul 23, 2018 | ostechnix.com

    CYA, which stands for Cover Your Assets, is a free, open source system snapshot and restore utility for any Unix-like operating system that uses the BASH shell. Cya is portable and supports many popular filesystems such as EXT2/3/4, XFS, UFS, GPFS, reiserFS, JFS, BtrFS, and ZFS. Please note that Cya will not back up the actual user data; it only backs up and restores the operating system itself. Cya is essentially a system restore utility. By default, it will back up all key directories like /bin/, /lib/, /usr/, /var/ and several others. You can, however, define your own directories and file paths to include in the backup, and Cya will pick those up as well. It is also possible to define some directories/files to skip from the backup. For example, you can skip /var/log/ if you don't need the log files. Cya actually uses the Rsync backup method under the hood; however, Cya is a little bit easier than Rsync when creating rolling backups.

    When restoring your operating system, Cya will roll back the OS using the backup profile you created earlier. You can either restore the entire system or only specific directories. You can also easily access the backup files even without a complete rollback, using your terminal or file manager. Another notable feature is that we can generate a custom recovery script to automate the mounting of your system partition(s) when you restore from a live CD, USB, or network image. In a nutshell, CYA can help you restore your system to a previous state when you end up with a broken system caused by a software update, configuration changes, or intrusions/hacks.

    ... ... ...

    Conclusion

    Unlike Systemback and other system restore utilities, Cya is not a distribution-specific restore utility. It supports many Linux operating systems that use BASH. It is one of the must-have applications in your arsenal. Install it right away and create snapshots. You won't regret it when you accidentally crash your Linux system.

    [Mar 24, 2021] What commands are missing from your bashrc file- - Enable Sysadmin

    Mar 24, 2021 | www.redhat.com

    The idea was that sharing this would inspire others to improve their bashrc savviness. Take a look at what our Sudoers group shared and, please, borrow anything you like to make your sysadmin life easier.


    Jonathan Roemer
    # Require confirmation before overwriting target files. This setting keeps me from deleting things I didn't expect to, etc
    alias cp='cp -i'
    alias mv='mv -i'
    alias rm='rm -i'
    
    # Add color, formatting, etc to ls without re-typing a bunch of options every time
    alias ll='ls -alhF'
    alias ls="ls --color"
    # So I don't need to remember the options to tar every time
    alias untar='tar xzvf'
    alias tarup='tar czvf'
    
    # Changing the default editor, I'm sure a bunch of people have this so they don't get dropped into vi instead of vim, etc. A lot of distributions have system default overrides for these, but I don't like relying on that being around
    alias vim='nvim'
    alias vi='nvim'
    
    Valentin Bajrami

    Here are a few functions from my ~/.bashrc file:

    # Easy copy the content of a file without using cat / selecting it etc. It requires xclip to be installed
    # Example:  _cp /etc/dnsmasq.conf
    _cp()
    {
      local file="$1"
      local st=1
      if [[ -f $file ]]; then
        cat "$file" | xclip -selection clipboard
        st=$?
      else
        printf '%s\n' "Make sure you are copying the content of a file" >&2
      fi
      return $st    
    }
    
    # This is the function to paste the content. The content is now in your buffer.
    # Example: _paste   
    
    _paste()
    {
      xclip -selection clipboard -o
    }
    
    # Generate a random password without installing any external tooling
    genpw()
    {
      # index must stay within the 62-entry array (the original RANDOM%255 could index past the end and print nothing for that slot)
      alphanum=( {a..z} {A..Z} {0..9} ); for ((i=0; i<${#alphanum[@]}; i++)); do printf '%s' "${alphanum[RANDOM % ${#alphanum[@]}]}"; done; echo
    }
    # See what command you are using the most (this parses the history command)
    cm() {
      history | awk ' { a[$4]++ } END { for ( i in a ) print a[i], i | "sort -rn | head -n10"}' | awk '$1 > max{ max=$1} { bar=""; i=s=10*$1/max;while(i-->0)bar=bar"#"; printf "%25s %15d %s %s", $2, $1,bar, "\n"; }'
    }
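    A simpler variant of the same idea, written here as a sketch that reads history output from stdin. It assumes plain history output where the command is the second field; with HISTTIMEFORMAT set, the command lands in a later field, as in the version above:

    ```shell
    # Summarize `history` output fed on stdin; prints "count command" lines,
    # most-used first. Assumes the command name is the second field.
    top_cmds() {
      awk '{ count[$2]++ } END { for (cmd in count) print count[cmd], cmd }' |
        sort -rn | head -n 10
    }

    # Usage: history | top_cmds
    ```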
    
    Peter Gervase

    For shutting down at night, I kill all SSH sessions and then kill any VPN connections:

    #!/bin/bash
    /usr/bin/killall ssh
    /usr/bin/nmcli connection down "Raleigh (RDU2)"
    /usr/bin/nmcli connection down "Phoenix (PHX2)"
    
    Valentin Rothberg
    alias vim='nvim'
    alias l='ls -CF --color=always'
    alias cd='cd -P' # follow symlinks
    alias gits='git status'
    alias gitu='git remote update'
    alias gitum='git reset --hard upstream/master'
    
    Steve Ovens
    alias nano='nano -wET 4'
    alias ls='ls --color=auto'
    PS1="\[\e[01;32m\]\u@\h \[\e[01;34m\]\w  \[\e[01;34m\]$\[\e[00m\] "
    export EDITOR=nano
    export AURDEST=/var/cache/pacman/pkg
    PATH=$PATH:/home/stratus/.gem/ruby/2.7.0/bin
    alias mp3youtube='youtube-dl -x --audio-format mp3'
    alias grep='grep --color'
    alias best-youtube="youtube-dl -r 1M --yes-playlist -f 'bestvideo[ext=mp4]+bestaudio[ext=m4a]'"
    alias mv='mv -vv'
    shopt -s histappend
    HISTCONTROL=ignoreboth
    
    Jason Hibbets

    While my bashrc aliases aren't as sophisticated as those of the previous technologists, you can probably tell I really like shortcuts:

    # User specific aliases and functions
    
    alias q='exit'
    alias h='cd ~/'
    alias c='clear'
    alias m='man'
    alias lsa='ls -al'
    alias s='sudo su -'
    
    Bonus: Organizing bashrc files and cleaning up files

    We know many sysadmins like to script things to make their work more automated. Here are a few tips from our Sudoers that you might find useful.

    Chris Collins

    I don't know who I need to thank for this, some awesome woman on Twitter whose name I no longer remember, but it's changed the organization of my bash aliases and commands completely.

    I have Ansible drop individual <something>.bashrc files into ~/.bashrc.d/ with any alias or command or shortcut I want, related to any particular technology or Ansible role, and can manage them all separately per host. It's been the best single trick I've learned for .bashrc files ever.

    Git stuff gets a ~/.bashrc.d/git.bashrc , Kubernetes goes in ~/.bashrc.d/kube.bashrc .

    if [ -d ${HOME}/.bashrc.d ]
    then
      for file in ~/.bashrc.d/*.bashrc
      do
        source "${file}"
      done
    fi
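    One wrinkle with the loop above: if ~/.bashrc.d/ exists but contains no *.bashrc files, the unexpanded glob pattern itself is passed to source, which then fails. A defensive variant, written as a function here purely for illustration:

    ```shell
    # Source every *.bashrc file in a directory, tolerating a missing or
    # empty directory (the hypothetical function name is for illustration).
    load_bashrc_d() {                  # usage: load_bashrc_d DIR
      local dir=$1 file
      [ -d "$dir" ] || return 0
      for file in "$dir"/*.bashrc; do
        # When nothing matches, the literal pattern comes through; skip it.
        [ -e "$file" ] && source "$file"
      done
      return 0
    }

    load_bashrc_d "${HOME}/.bashrc.d"
    ```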
    
    Peter Gervase

    These aren't bashrc aliases, but I use them all the time. I wrote a little script named clean for getting rid of excess lines in files. For example, here's nsswitch.conf with lots of comments and blank lines:

    [pgervase@pgervase etc]$ head authselect/nsswitch.conf
    # Generated by authselect on Sun Dec  6 22:12:26 2020
    # Do not modify this file manually.
    
    # If you want to make changes to nsswitch.conf please modify
    # /etc/authselect/user-nsswitch.conf and run 'authselect apply-changes'.
    #
    # Note that your changes may not be applied as they may be
    # overwritten by selected profile. Maps set in the authselect
    # profile always take precedence and overwrites the same maps
    # set in the user file. Only maps that are not set by the profile
    
    [pgervase@pgervase etc]$ wc -l authselect/nsswitch.conf
    80 authselect/nsswitch.conf
    
    [pgervase@pgervase etc]$ clean authselect/nsswitch.conf
    passwd:     sss files systemd
    group:      sss files systemd
    netgroup:   sss files
    automount:  sss files
    services:   sss files
    shadow:     files sss
    hosts:      files dns myhostname
    bootparams: files
    ethers:     files
    netmasks:   files
    networks:   files
    protocols:  files
    rpc:        files
    publickey:  files
    aliases:    files
    
    [pgervase@pgervase etc]$ cat `which clean`
    #! /bin/bash
    #
    /bin/cat "$1" | /bin/sed 's/^[ \t]*//' | /bin/grep -v -e "^#" -e "^;" -e "^[[:space:]]*$"
    


    [Mar 24, 2021] How to read data from text files by Roberto Nozaki

    Mar 24, 2021 | www.redhat.com

    The following is the script I use to test the servers:

    #!/bin/bash
    
    input_file=hosts.csv
    output_file=hosts_tested.csv
    
    echo "ServerName,IP,PING,DNS,SSH" > "$output_file"
    
    tail -n +2 "$input_file" | while IFS=, read -r host ip _
    do
        if ping -c 3 "$ip" > /dev/null; then
            ping_status="OK"
        else
            ping_status="FAIL"
        fi
    
        if nslookup "$host" > /dev/null; then
            dns_status="OK"
        else
            dns_status="FAIL"
        fi
    
        if nc -z -w3 "$ip" 22 > /dev/null; then
            ssh_status="OK"
        else
            ssh_status="FAIL"
        fi
    
        echo "Host = $host IP = $ip PING_STATUS = $ping_status DNS_STATUS = $dns_status SSH_STATUS = $ssh_status"
        echo "$host,$ip,$ping_status,$dns_status,$ssh_status" >> "$output_file"
    done
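    For reference, the input file the script reads would look something like this (the hostnames and addresses are made up for illustration; the header row is what tail -n +2 skips):

    ```shell
    # Create a sample hosts.csv in the format the script above expects.
    cat > hosts.csv <<'EOF'
    ServerName,IP
    web01.example.com,192.0.2.10
    db01.example.com,192.0.2.20
    EOF
    ```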
    

    [Mar 17, 2021] Year of Living Remotely by Angus Loten

    Mar 12, 2021 | www.wsj.com

    In the last week of April, Zoom reported that the number of daily users on its platform grew to more than 300 million , up from 10 million at the end of 2019.

    Wayne Kurtzman, a research director at International Data Corp., said the crisis has accelerated the adoption of videoconferencing and other collaboration tools by roughly five years.

    It has also driven innovation. New features expected in the year ahead include the use of artificial intelligence to enable real-time transcription and translation, informing people when they were mentioned in a meeting and why, and creating a short "greatest hits" version of meetings they may have missed, Mr. Kurtzman said.

    Many businesses also ramped up their use of software bots , among other forms of automation, to handle routine workplace tasks like data entry and invoice processing.

    The attention focused on keeping operations running saw many companies pull back on some long-running IT modernization efforts, or plans to build out ambitious data analytics and business intelligence systems.

    Bob Parker, a senior vice president for industry research at IDC, said many companies were simply channeling funds to more urgent needs. But another key obstacle was an inability to access on-site resources to continue pre-Covid initiatives, he said, "especially for projects requiring significant process re-engineering," such as enterprise resource planning implementations and upgrades.

    Related Video

    [Mar 14, 2021] while loops in Bash

    Mar 14, 2021 | www.redhat.com
    while true
    do
      df -k | grep home
      sleep 1
    done
    

    In this case, you're running the loop with a true condition, which means it will run forever or until you hit CTRL-C. Therefore, you need to keep an eye on it; otherwise, it will keep consuming the system's resources.

    Note : If you use a loop like this, you need to include a command like sleep to give the system some time to breathe between executions. Running anything non-stop could become a performance issue, especially if the commands inside the loop involve I/O operations.

    2. Waiting for a condition to become true

    There are variations of this scenario. For example, you know that at some point, the process will create a directory, and you are just waiting for that moment to perform other validations.

    You can have a while loop to keep checking for that directory's existence and only write a message while the directory does not exist.


    If you want to do something more elaborate, you could create a script and show a clearer indication that the loop condition became true:

    #!/bin/bash
    
    while [ ! -d directory_expected ]
    do
       echo "`date` - Still waiting" 
       sleep 1
    done
    
    echo "DIRECTORY IS THERE!!!"
    
    3. Using a while loop to manipulate a file

    Another useful application of a while loop is to combine it with the read command to have access to columns (or fields) quickly from a text file and perform some actions on them.

    In the following example, you are simply picking the columns from a text file with a predictable format and printing the values that you want to use to populate an /etc/hosts file.


    Here the assumption is that the file has columns delimited by spaces or tabs and that there are no spaces in the content of the columns. A space inside a field would shift the columns and not give you what you expected.

    Notice that you're just doing a simple operation to extract and manipulate information and not concerned about the command's reusability. I would classify this as one of those "quick and dirty tricks."

    Of course, if this was something that you would repeatedly do, you should run it from a script, use proper names for the variables, and all those good practices (including transforming the filename in an argument and defining where to send the output, but today, the topic is while loops).

    #!/bin/bash
    
    grep -v CPU servers.txt | while read -r servername cpu ram ip
    do
       echo "$ip $servername"
    done
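    Following the good practices the text mentions, the quick-and-dirty loop above can be turned into something reusable: the filename becomes an argument and the header row is skipped inside the loop. The function name is hypothetical:

    ```shell
    # Print "IP hostname" lines (ready for /etc/hosts) from a servers file
    # whose columns are: name cpu ram ip, with a header row containing "CPU".
    hosts_from_servers() {             # usage: hosts_from_servers SERVERS_FILE
      local servername cpu ram ip rest
      while read -r servername cpu ram ip rest; do
        [ "$cpu" = "CPU" ] && continue           # skip the header line
        printf '%s %s\n' "$ip" "$servername"
      done < "$1"
    }

    # Usage: hosts_from_servers servers.txt >> /etc/hosts
    ```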
    

    [Mar 14, 2021] 7Zip 21.0 Provides Native Linux Support by Georgio Baremmi

    Mar 12, 2021 | www.putorius.net

    7zip is a wildly popular Windows program used to create archives. By default it uses the 7z format, which it claims compresses 30-70% better than the normal zip format. It also claims to compress to the regular zip format 2-10% more effectively than other zip-compatible programs. It supports a wide variety of archive formats, including (but not limited to) zip, gzip, bzip2, tar, and rar. Linux has had p7zip for a long time; however, this is the first time the 7Zip developers have provided native Linux support.

    Jump to Installation Instructions

    p7zip vs 7Zip: What's the Difference?

    Linux has had p7zip for some time now. p7zip is a port of the Windows 7zip package to Linux/Unix. For the average user there is no difference: the p7zip package is a direct port of 7zip.

    Why Bother Using 7zip if p7zip is available?

    The main reason to use the new native Linux version of 7Zip is updates. The p7zip package that comes with my Fedora installation is version 16.02 from 2016. However, the newly installed 7zip version is 21.01 (alpha) which was released just a few days ago.

    Details from p7zip Package


    7-Zip [64] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
    

    Details from Native 7Zip Package

    7-Zip (z) 21.01 alpha (x64) : Copyright (c) 1999-2021 Igor Pavlov : 2021-03-09
    
    Install Native 7Zip on Linux Command Line

    First, we need to download the tar.xz package from the 7Zip website.

    wget https://www.7-zip.org/a/7z2101-linux-x64.tar.xz
    

    Next, we extract the tar archive. Here I am extracting it to /home/gbaremmi/bin/ since that directory is in my PATH.

    tar xvf 7z2101-linux-x64.tar.xz -C ~/bin/
    

    That's it, you are now ready to use 7Zip.

    If you previously had the p7zip package installed, you now have two similar commands: the p7zip package provides the 7z command, while the new native version of 7Zip provides the 7zz command.

    Using Native 7Zip (7zz) in Linux

    7Zip comes with a great many options. The full suite of options is beyond the scope of this article. Here we will cover basic archive creation and extraction.

    Creating a 7z Archive with Native Linux 7Zip (7zz)

    To create a 7z archive, we will call the newly installed 7zz utility and pass the a (add files to archive) command. We then supply the name of the archive and the files we want added.

    [gbaremmi@putor ~]$ 7zz a words.7z dict-words/*
    
    7-Zip (z) 21.01 alpha (x64) : Copyright (c) 1999-2021 Igor Pavlov : 2021-03-09
     compiler: 9.3.0 GCC 9.3.0 64-bit locale=en_US.UTF-8 Utf16=on HugeFiles=on CPUs:4 Intel(R) Core(TM) i7-4600U CPU @ 2.10GHz (40651),ASM,AES
    
    Scanning the drive:
    25192 files, 6650099 bytes (6495 KiB)
    
    Creating archive: words.7z
    Add new data to archive: 25192 files, 6650099 bytes (6495 KiB)
                             
    Files read from disk: 25192
    Archive size: 2861795 bytes (2795 KiB)
    Everything is Ok
    

    In the above example we are adding all the files in the dict-words directory to the words.7z archive.

    Extracting Files from an Archive with Native Linux 7Zip (7zz)

    Extracting an archive is very similar. Here we are using the e (extract) command.

    [gbaremmi@putor new-dict]$ 7zz e words.7z 
    
    7-Zip (z) 21.01 alpha (x64) : Copyright (c) 1999-2021 Igor Pavlov : 2021-03-09
     compiler: 9.3.0 GCC 9.3.0 64-bit locale=en_US.UTF-8 Utf16=on HugeFiles=on CPUs:4 Intel(R) Core(TM) i7-4600U CPU @ 2.10GHz (40651),ASM,AES
    
    Scanning the drive for archives:
    1 file, 2861795 bytes (2795 KiB)
    
    Extracting archive: words.7z
    --
    Path = words.7z
    Type = 7z
    Physical Size = 2861795
    Headers Size = 186150
    Method = LZMA2:23
    Solid = +
    Blocks = 1
    
    Everything is Ok                    
    
    Files: 25192
    Size:       6650099
    Compressed: 2861795
    

    That's it! We have now installed native 7Zip and used it to create and extract our first archive.

    Resources and Further Reading

    [Mar 12, 2021] Connect computers through WebRTC.

    Mar 12, 2021 | opensource.com

    Snapdrop

    If navigating a network through IP addresses and hostnames is confusing, or if you don't like the idea of opening a folder for sharing and forgetting that it's open for perusal, then you might prefer Snapdrop . This is an open source project that you can run yourself or use the demonstration instance on the internet to connect computers through WebRTC. WebRTC enables peer-to-peer connections through a web browser, meaning that two users on the same network can find each other by navigating to Snapdrop and then communicate with each other directly, without going through an external server.

    snapdrop.jpg

    (Seth Kenlon, CC BY-SA 4.0 )

    Once two or more clients have contacted a Snapdrop service, users can trade files and chat messages back and forth, right over the local network. The transfer is fast, and your data stays local.

    [Mar 12, 2021] 10 Best Compression Tools for Linux - Make Tech Easier

    Mar 12, 2021 | www.maketecheasier.com

    10 Best Compression Tools for Linux By Rubaiat Hossain / Mar 8, 2021 / Linux

    File compression is an integral part of system administration. Finding the best compression method requires significant determination. Luckily, there are many robust compression tools for Linux that make backing up system data easier. Here, we present ten of the best Linux compression tools that can be useful to enterprises and users in this regard.

    1. LZ4

    LZ4 is the compression tool of choice for admins who need lightning-fast compression and decompression speed. It utilizes the LZ4 lossless algorithm, which belongs to the family of LZ77 byte-oriented compression algorithms. Moreover, LZ4 comes coupled with a high-speed decoder, making it one of the best Linux compression tools for enterprises.

    2. Zstandard

    Zstandard is another fast compression tool for Linux that can be used for personal and enterprise projects. It's backed by Facebook and offers excellent compression ratios. Some of its most compelling features include the adaptive mode, which can control compression ratios based on I/O, the ability to trade speed for better compression, and the dictionary compression scheme. Zstandard also has a rich API with keybindings for all major programming languages.

    3. lzop

    lzop is a robust compression tool that utilizes the Lempel-Ziv-Oberhumer (LZO) compression algorithm. It provides breakneck compression speed by trading away compression ratio. For example, it produces slightly larger files than gzip but requires only about 10 percent of the CPU runtime. Moreover, lzop can deal with system backups in multiple ways, including backup mode, single file mode, archive mode, and pipe mode.

    4. Gzip

    Gzip is certainly one of the most widely used compression tools for Linux admins. It is compatible with every GNU software, making it the perfect compression tool for remote engineers. Gzip leverages the Lempel-Ziv coding in deflate mode for file compression. It can reduce the size of source codes by up to 90 percent. Overall, this is an excellent choice for seasoned Linux users as well as software developers.

    5. bzip2

    bzip2 , a free compression tool for Linux, compresses files using the Burrows-Wheeler block-sorting compression algorithm and Huffman coding. It also supports several additional compression methods, such as run-length encoding, delta encoding, sparse bit array, and Huffman tables. It can also recover data from media drives in some cases. Overall, bzip2 is a suitable compression tool for everyday usage due to its robust compression abilities and fast decompression speed.

    6. p7zip

    p7zip is the port of 7-zip's command-line utility. It is a high-performance archiving tool with solid compression ratios and support for many popular formats, including tar, xz, gzip, bzip2, and zip. It uses the 7z format by default, which provides 30 to 50 percent better compression than standard zip compression . Moreover, you can use this tool for creating self-extracting and dynamically-sized volume archives.

    7. pigz

    pigz or parallel implementation of gzip is a reliable replacement for the gzip compression tool. It leverages multiple CPU cores to increase the compression speed dramatically. It utilizes the zlib and pthread libraries for implementing the multi-threading compression process. However, pigz can't decompress archives in parallel. Hence, you will not be able to get similar speeds during compression and decompression.

    8. pixz

    pixz is a parallel implementation of the XZ compressor with support for data indexing. Instead of producing one big block of compressed data like xz, it creates a set of smaller blocks. This makes randomly accessing the original data straightforward. Moreover, pixz also makes sure that the file permissions are preserved the way they were during compression and decompression.

    9. plzip

    plzip is a lossless data compressor tool that makes creative use of the multi-threading capabilities supported by modern CPUs. It is built on top of the lzlib library and provides a command-line interface similar to gzip and bzip2. One key benefit of plzip is its ability to fully leverage multiprocessor machines. plzip definitely warrants a try for admins who need a high-performance Linux compression tool to support parallel compression.

    10. XZ Utils

    XZ Utils is a suite of compression tools for Linux that can compress and decompress .xz and .lzma files. It primarily uses the LZMA2 algorithm for compression and can perform integrity checks of compressed data at ease. Since this tool is available to popular Linux distributions by default, it can be a viable choice for compression in many situations.

    Wrapping Up

    A plethora of reliable Linux compression tools makes it easy to archive and back up essential data . You can choose from many lossless compressors with high compression ratios such as LZ4, lzop, and bzip2. On the other hand, tools like Zstandard and plzip allow for more advanced compression workflows.
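    A quick way to see these trade-offs for yourself is to run several of the tools above against the same file and compare output sizes. A rough sketch; only the tools actually installed get tested, and you should substitute a real file for meaningful ratios:

    ```shell
    #!/bin/bash
    # Compare compressed sizes of one sample file across several compressors.
    # All five tools accept -c to write the compressed stream to stdout.
    sample=$(mktemp)
    head -c 1000000 /dev/zero > "$sample"   # placeholder data; use a real file

    for tool in gzip bzip2 xz zstd lz4; do
      command -v "$tool" >/dev/null || continue   # skip tools not installed
      size=$("$tool" -c "$sample" | wc -c)
      printf '%-6s %9d bytes\n' "$tool" "$size"
    done
    ```

    Wrapping each compression in `time` extends the comparison to the speed-versus-ratio trade-off discussed above.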

    [Mar 12, 2021] How to measure elapsed time in bash by Dan Nanni

    Mar 09, 2021 | www.xmodulo.com
    When you call date with +%s option, it shows the current system clock in seconds since 1970-01-01 00:00:00 UTC. Thus, with this option, you can easily calculate time difference in seconds between two clock measurements.
    start_time=$(date +%s)
    # perform a task
    end_time=$(date +%s)
    
    # elapsed time with second resolution
    elapsed=$(( end_time - start_time ))
    

    Another (preferred) way to measure elapsed time in seconds in bash is to use a built-in bash variable called SECONDS . When you access SECONDS variable in a bash shell, it returns the number of seconds that have passed so far since the current shell was launched. Since this method does not require running the external date command in a subshell, it is a more elegant solution.

    start_time=$SECONDS
    sleep 5
    elapsed=$(( SECONDS - start_time ))
    echo $elapsed
    

    This will display elapsed time in terms of the number of seconds. If you want a more human-readable format, you can convert $elapsed output as follows.

    eval "echo Elapsed time: $(date -ud "@$elapsed" +'$((%s/3600/24)) days %H hr %M min %S sec')"
    

    This will produce output like the following.

    Elapsed time: 0 days 13 hr 53 min 20 sec
    

    [Mar 07, 2021] A brief introduction to Ansible roles for Linux system administration by Shiwani Biradar

    Jan 26, 2021 | www.redhat.com

    Nodes

    In Ansible architecture, you have a controller node and managed nodes. Ansible is installed on only the controller node. It's an agentless tool and doesn't need to be installed on the managed nodes. Controller and managed nodes are connected using the SSH protocol. All tasks are written into a "playbook" using the YAML language. Each playbook can contain multiple plays, which contain tasks , and tasks contain modules . Modules are reusable standalone scripts that manage some aspect of a system's behavior. Ansible modules are also known as task plugins or library plugins.

    Roles

    Playbooks for complex tasks can become lengthy and therefore difficult to read and understand. The solution to this problem is Ansible roles . Using roles, you can break long playbooks into multiple files making each playbook simple to read and understand. Roles are a collection of templates, files, variables, modules, and tasks. The primary purpose behind roles is to reuse Ansible code. DevOps engineers and sysadmins should always try to reuse their code. An Ansible role can contain multiple playbooks. It can easily reuse code written by anyone if the role is suitable for a given case. For example, you could write a playbook for Apache hosting and then reuse this code by changing the content of index.html to alter options for some other application or service.

    The following is an overview of the Ansible role structure. It consists of many subdirectories, such as:

    |-- README.md
    |-- defaults
    |-------main.yml
    |-- files
    |-- handlers
    |-------main.yml
    |-- meta
    |-------main.yml
    |-- tasks
    |-------main.yml
    |-- templates
    |-- tests
    |-------inventory
    |-- vars
    |-------main.yml
    

    Initially, all files are created empty by using the ansible-galaxy command. So, depending on the task, you can use these directories. For example, the vars directory stores variables. In the tasks directory, you have main.yml , which is the main playbook. The templates directory is for storing Jinja templates. The handlers directory is for storing handlers.

    Advantages of Ansible roles:

    Ansible roles are structured directories containing sub-directories.

    But did you know that Red Hat Enterprise Linux also provides some Ansible System Roles to manage operating system tasks?

    System roles

    The rhel-system-roles package is available in the Extras (EPEL) channel. The rhel-system-roles package is used to configure RHEL hosts. There are seven default rhel-system-roles available:

    The rhel-system-roles package is derived from the open source Linux system-roles project, which is available on Ansible Galaxy. rhel-system-roles is supported by Red Hat, so you can think of rhel-system-roles as the downstream of Linux system-roles. To install rhel-system-roles on your machine, use:

    $ sudo yum -y install rhel-system-roles
    or
    $ sudo dnf -y install rhel-system-roles
    

    These roles are located in the /usr/share/ansible/roles/ directory.


    This is the default path, so whenever you use playbooks to reference these roles, you don't need to explicitly include the absolute path. You can also refer to the documentation for using Ansible roles. The path for the documentation is /usr/share/doc/rhel-system-roles.

    The documentation directory for each role has detailed information about that role; for example, the README.md file contains usage examples for the role. The documentation is self-explanatory.

    The following is an example of a role.

    Example

    If you want to change the SELinux mode of the localhost machine or any host machine, use the system roles. For this task, use rhel-system-roles.selinux.

    For this task the ansible-playbook looks like this:

    ---
    - name: a playbook for SELinux mode
      hosts: localhost
      roles:
        - rhel-system-roles.selinux
      vars:
        selinux_state: disabled
    

    After running the playbook, you can verify whether the SELinux mode changed or not.
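    One quick way to verify from the shell, assuming the SELinux userland tools are present on the host:

    ```shell
    # Check the current SELinux mode; getenforce prints Enforcing, Permissive,
    # or Disabled (sestatus gives a more detailed report). Note that fully
    # disabling SELinux takes effect only after a reboot.
    if command -v getenforce >/dev/null; then
      getenforce
    else
      echo "SELinux tools not installed on this host"
    fi
    ```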


    Shiwani Biradar: I am an open source enthusiast and undergraduate who is passionate about Linux & open source technologies. I have knowledge of Linux, DevOps, and cloud, and I am an active contributor to Fedora. If you don't find me exploring technologies, you will find me exploring food! More about me

    [Mar 05, 2021] Edge servers can be strategically placed within the topography of a network to reduce the latency of connecting with them and serve as a buffer to help mitigate overloading a data center

    Mar 05, 2021 | opensource.com

    ... Edge computing is a model of infrastructure design that places many "compute nodes" (a fancy word for a server ) geographically closer to people who use them most frequently. It can be part of the open hybrid-cloud model, in which a centralized data center exists to do all the heavy lifting but is bolstered by smaller regional servers to perform high frequency -- but usually less demanding -- tasks...

    Historically, a computer was a room-sized device hidden away in the bowels of a university or corporate head office. Client terminals in labs would connect to the computer and make requests for processing. It was a centralized system with access points scattered around the premises. As modern networked computing has evolved, this model has been mirrored unexpectedly. There are centralized data centers to provide serious processing power, with client computers scattered around so that users can connect. However, the centralized model makes less and less sense as demands for processing power and speed are ramping up, so the data centers are being augmented with distributed servers placed on the "edge" of the network, closer to the users who need them.

    The "edge" of a network is partly an imaginary place because network boundaries don't exactly map to physical space. However, servers can be strategically placed within the topography of a network to reduce the latency of connecting with them and serve as a buffer to help mitigate overloading a data center.

    ... ... ...

    While it's not exclusive to Linux, container technology is an important part of cloud and edge computing. Getting to know Linux and Linux containers helps you learn to install, modify, and maintain "serverless" applications. As processing demands increase, it's more important to understand containers, Kubernetes and KubeEdge , pods, and other tools that are key to load balancing and reliability.

    ... ... ...

    The cloud is largely a Linux platform. While there are great layers of abstraction, such as Kubernetes and OpenShift, when you need to understand the underlying technology, you benefit from a healthy dose of Linux knowledge. The best way to learn it is to use it, and Linux is remarkably easy to try . Get the edge on Linux so you can get Linux on the edge.

    [Mar 04, 2021] Tips for using screen - Enable Sysadmin

    Mar 04, 2021 | www.redhat.com

    Rather than trying to limit yourself to just one session or remembering what is running on which screen, you can set a name for the session by using the -S argument:

    [root@rhel7dev ~]# screen -S "db upgrade"
    [detached from 25778.db upgrade]
    
    [root@rhel7dev ~]# screen -ls
    There are screens on:
        25778.db upgrade    (Detached)
        25706.pts-0.rhel7dev    (Detached)
        25693.pts-0.rhel7dev    (Detached)
        25665.pts-0.rhel7dev    (Detached)
    4 Sockets in /var/run/screen/S-root.
    
    [root@rhel7dev ~]# screen -x "db upgrade"
    [detached from 25778.db upgrade]
    
    [root@rhel7dev ~]#
    

    To exit a screen session, you can type exit or hit Ctrl+A and then D .

    Now that you know how to start, stop, and label screen sessions, let's get a little more in-depth. To split your screen session in half vertically, hit Ctrl+A and then the | key (Shift+Backslash). At this point, you'll have your screen session with the prompt on the left:

    Image

    To switch to your screen on the right, hit Ctrl+A and then the Tab key. Your cursor is now in the right session, but there's no prompt. To get a prompt hit Ctrl+A and then C . I can do this multiple times to get multiple vertical splits to the screen:

    Image

    You can now toggle back and forth between the two screen panes by using Ctrl+A+Tab .

    What happens when you cat out a file that's larger than your console can display and so some content scrolls past? To scroll back in the buffer, hit Ctrl+A and then Esc . You'll now be able to use the cursor keys to move around the screen and go back in the buffer.

    There are other options for screen. To see them, hit Ctrl+A and then ? (the question mark):

    Image

    [ Free online course: Red Hat Enterprise Linux technical overview . ]

    Further reading can be found in the man page for screen . This article is a quick introduction to using the screen command so that a disconnected remote session does not accidentally kill a process. Another program similar to screen is tmux ; you can read about tmux in this article .

    [Mar 03, 2021] How to move /var directory to another partition

    Mar 03, 2021 | linuxconfig.org

    How to move /var directory to another partition

    System Administration
    18 November 2020


    The /var directory has filled up and you are left with no free disk space available. This is a typical scenario which can be easily fixed by mounting your /var directory on a different partition. Let's get started by attaching new storage, then partitioning it and creating the desired file system. The exact steps may vary and are not part of this config article. Once ready, obtain the partition UUID of your new var partition, e.g. /dev/sdc1:
    # blkid | grep sdc1
    /dev/sdc1: UUID="1de46881-1f49-440e-89dd-6c32592491a7" TYPE="ext4" PARTUUID="652a2fee-01"
    
    Create a new mount point and mount your new partition:
    # mkdir /mnt/newvar
    # mount /dev/sdc1 /mnt/newvar
    
    Confirm that it is mounted. Note, your output will be different:
    # df -h /mnt/newvar
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sdc1       1.8T  1.6T  279G  85% /mnt/newvar
    
    Copy current /var data to the new location:
    # rsync -aqxP /var/* /mnt/newvar
    
    Unmount new partition:
    # umount /mnt/newvar
    
    Edit your /etc/fstab to include the new partition, choosing the relevant file system:
    UUID=1de46881-1f49-440e-89dd-6c32592491a7 /var        ext4    defaults        0       2
    
    Reboot your system and you are done. Confirm that everything is working correctly, and optionally remove the old var directory by booting into a live Linux system.
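The fstab line above follows the fixed six-field fstab format (device, mount point, fs type, options, dump, fsck pass). A small helper can generate it from the UUID reported by blkid; this is an illustrative sketch, not part of the original article:

```shell
# Build a six-field fstab entry for the new /var partition.
# Arguments: UUID, mount point, filesystem type.
fstab_entry() {
  printf 'UUID=%s %s %s defaults 0 2\n' "$1" "$2" "$3"
}

# With the UUID reported by blkid in the article:
fstab_entry 1de46881-1f49-440e-89dd-6c32592491a7 /var ext4
# prints: UUID=1de46881-1f49-440e-89dd-6c32592491a7 /var ext4 defaults 0 2
```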

    [Mar 03, 2021] partitioning - How to move boot and root partitions to another drive - Ask Ubuntu

    Mar 03, 2021 | askubuntu.com

    How to move boot and root partitions to another drive


    mlissner ,


    I have two drives on my computer that have the following configuration:

    Drive 1: 160GB, /home
    Drive 2: 40GB, /boot and /
    

    Unfortunately, drive 2 seems to be dying, because trying to write to it is giving me errors, and checking out the SMART settings shows a sad state of affairs.

    I have plenty of space on Drive 1, so what I'd like to do is move the / and /boot partitions to it, remove Drive 2 from the system, replace Drive 2 with a new drive, then reverse the process.

    I imagine I need to do some updating to grub, and I need to move some things around, but I'm pretty baffled about how exactly to go about this. Since this is my main computer, I want to be careful not to mess things up so that I can't boot. (asked Sep 1 '10 by mlissner)

    Lucas ,

    This is exactly what I had to do as well. I wrote a blog with full instructions on how to move root partition / to /home. – Lucas, Sep 17 '18 at 15:12

    maco ,


    You'll need to boot from a live cd. Add partitions for them to disk 1, copy all the contents over, and then use sudo blkid to get the UUID of each partition. On disk 1's new /, edit the /etc/fstab to use the new UUIDs you just looked up.
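For the "copy all the contents over" step, one commonly used approach is cp -ax, which preserves ownership, permissions, and symlinks while staying on one filesystem. This is a hedged sketch with illustrative paths, not the answerer's exact command:

```shell
# -a preserves ownership, permissions, timestamps, and symlinks;
# -x stays on one filesystem (skips /proc, /sys, other mounts).
# On the real system, from a live CD, this would be something like:
#   sudo cp -ax / /media/newroot
# Safe demonstration on a scratch directory:
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/etc"
echo 'UUID=... / ext4 defaults 0 1' > "$src/etc/fstab"
ln -s /etc/fstab "$src/fstab-link"     # symlink should survive the copy
cp -ax "$src/." "$dst/"
ls "$dst/etc"    # shows: fstab
```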

    Updating GRUB depends on whether it's GRUB1 or GRUB2. If GRUB1, you need to edit /boot/grub/device.map

    If GRUB2, I think you need to mount your partitions as they would be in a real situation. For example:

    sudo mkdir /media/root
    sudo mount /dev/sda1 /media/root
    sudo mount /dev/sda2 /media/root/boot
    sudo mount /dev/sda3 /media/root/home
    

    (Filling in whatever the actual partitions are that you copied things to, of course)

    Then bind mount /proc and /dev in the /media/root:

    sudo mount -B /proc /media/root/proc
    sudo mount -B /dev /media/root/dev
    sudo mount -B /sys /media/root/sys
    

    Now chroot into the drive so you can force GRUB to update itself according to the new layout:

    sudo chroot /media/root
    sudo update-grub
    

    The second command will make one complaint (I forget what it is though...), but that's ok to ignore.

    Test it by removing the bad drive. If it doesn't work, the bad drive should still be able to boot the system, but I believe these are all the necessary steps. (answered Sep 1 '10 by maco, edited Jun 15 '14 by Matthew Buckett)

    William Mortada ,

    FYI to anyone viewing this these days, this does not apply to EFI setups. You need to mount /media/root/boot/efi , among other things. – wjandrea, Sep 10 '16 at 7:54

    sBlatt ,


    If you replace the drive right away you can use dd (tried it on my server some months ago, and it worked like a charm).

    You'll need a boot-CD for this as well.

    1. Start boot-CD
    2. Only mount Drive 1
    3. Run dd if=/dev/sdb1 of=/media/drive1/backuproot.img - sdb1 being your root ( / ) partition. This will save the whole partition in a file.
      • same for /boot
    4. Power off, replace disk, power on
    5. Run dd if=/media/drive1/backuproot.img of=/dev/sdb1 - write it back.
      • same for /boot

    The above will create 2 partitions with the exact same size as they had before. You might need to adjust grub (check maco's post).
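Steps 3 and 5 above can be rehearsed safely on ordinary files before touching real partitions. The device and image paths in the comments are the ones from the answer; the rehearsal below uses temp files:

```shell
# On real hardware (from the answer):
#   dd if=/dev/sdb1 of=/media/drive1/backuproot.img   # step 3: backup
#   dd if=/media/drive1/backuproot.img of=/dev/sdb1   # step 5: restore
# Safe rehearsal on plain files:
part=$(mktemp); img=$(mktemp); restored=$(mktemp)
dd if=/dev/urandom of="$part" bs=1K count=64 status=none  # fake "partition"
dd if="$part" of="$img" status=none                       # backup to image
dd if="$img" of="$restored" status=none                   # restore from image
cmp "$part" "$restored" && echo "identical"               # prints: identical
rm -f "$part" "$img" "$restored"
```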

    If you want to resize your partitions (as I did):

    1. Create 2 Partitions on the new drive (for / and /boot ; size whatever you want)
    2. Mount the backup-image: mount /media/drive1/backuproot.img /media/backuproot/
    3. Mount the empty / partition: mount /dev/sdb1 /media/sdb1/
    4. Copy its contents to the new partition (I'm unsure about this command; it's really important to preserve ownership, and cp -R won't do it!): cp -R --preserve=all /media/backuproot/* /media/sdb1
      • same for /boot/

    This should do it. (answered Sep 1 '10 by sBlatt, edited Sep 10 '16 by wjandrea)

    > ,

    It turns out that the new "40GB" drive I'm trying to install is smaller than my current "40GB" drive. I have both of them connected, and I'm booted into a liveCD. Is there an easy way to just dd from the old one to the new one, and call it a done deal? – mlissner Sep 4 '10 at 3:02

    mlissner ,


    My final solution to this was a combination of a number of techniques:

    1. I connected the dying drive and its replacement to the computer simultaneously.
    2. The new drive was smaller than the old, so I shrank the partitions on the old using GParted.
    3. After doing that, I copied the partitions on the old drive, and pasted them on the new (also using GParted).
    4. Next, I added the boot flag to the correct partition on the new drive, so it was effectively a mirror of the old drive.

    This all worked well, but I needed to update grub2 per the instructions here .

    After all this was done, things seem to work. (answered Sep 4 '10 by mlissner, edited Jul 16 '19 by Pablo Bianchi)

    j.karlsson ,

    Finally, this solved it for me. I had a Virtualbox disk (vdi file) that I needed to move to a smaller disk. However Virtualbox does not support shrinking a vdi file, so I had to create a new virtual disk and copy over the linux installation onto this new disk. I've spent two days trying to get it to boot. – j.karlsson Dec 19 '19 at 9:48

    [Mar 03, 2021] How to Migrate the Root Filesystem to a New Disk - Support - SUSE

    Mar 03, 2021 | www.suse.com

    How to Migrate the Root Filesystem to a New Disk

    This document (7018639) is provided subject to the disclaimer at the end of this document.

    Environment: SLE 11, SLE 12

    Situation: The root filesystem needs to be moved to a new disk or partition.

    Resolution:

    1. Use the media to go into rescue mode on the system. This is the safest way to copy data from the root disk so that it's not changing while we are copying from it. Make sure the new disk is available.

    2. Copy data at the block(a) or filesystem(b) level depending on preference from the old disk to the new disk.
    NOTE: If the dd command is not being used to copy data from an entire disk to an entire disk, the partition(s) will need to be created prior to this step on the new disk so that the data can be copied from partition to partition.

    a. Here is a dd command for copying at the block level (the disks do not need to be mounted):
    # dd if=/dev/<old root disk> of=/dev/<new root disk> bs=64k conv=noerror,sync

    The dd command is not verbose and depending on the size of the disk could take some time to complete. While it is running the command will look like it is just hanging. If needed, to verify it is still running, use the ps command on another terminal window to find the dd command's process ID and use strace to follow that PID and make sure there is activity.
    # ps aux | grep dd
    # strace -p<process id>

    After confirming activity, hit CTRL + c to end the strace command. Once the dd command is complete the terminal prompt will return allowing for new commands to be run.
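With GNU coreutils 8.24 or later, dd can also report progress itself, which avoids the ps/strace dance described above. This is an alternative technique, not what the SUSE document prescribes:

```shell
# GNU dd (coreutils >= 8.24) can print a live progress line on stderr:
#   dd if=/dev/<old root disk> of=/dev/<new root disk> bs=64k \
#      conv=noerror,sync status=progress
# For an already-running GNU dd, SIGUSR1 makes it print I/O statistics:
#   kill -USR1 "$(pgrep -x dd)"
# Harmless demonstration of the same flags on a small file:
src=$(mktemp); dst=$(mktemp)
dd if=/dev/zero of="$src" bs=64k count=4 status=none
dd if="$src" of="$dst" bs=64k conv=noerror,sync status=progress
cmp "$src" "$dst" && echo "copy verified"    # prints: copy verified
rm -f "$src" "$dst"
```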

    b. Alternatively to dd, mount the disks and then use an rsync command for copying at the filesystem level:
    # mount /dev/<old root disk> /mnt
    # mkdir /mnt2
    (If the new disk's root partition doesn't have a filesystem yet, create it now.)
    # mount /dev/<new root disk> /mnt2
    # rsync -zahP /mnt/ /mnt2/

    This command is much more verbose than dd and there shouldn't be any issues telling that it is working. This does generally take longer than the dd command.

    3. Setting up the partition boot label with either fdisk(a) or parted(b)
    NOTE: This step can be skipped if the boot partition is separate from the root partition and has not changed. Also, if dd was used on an entire disk to an entire disk in section "a" of step 2 you can still skip this step since the partition table will have been copied to the new disk (If the partitions are not showing as available yet on the new disk run "partprobe" or enter fdisk and save no changes. ). This exception does not include using dd on only a partition.

    a. Using fdisk to label the new root partition (which contains boot) as bootable.
    # fdisk /dev/<new root disk>

    From the fdisk shell, type 'p' to list and verify the root partition is there:
    Command (m for help): p

    If the "Boot" column of the root partition does not have an "*" symbol then it needs to be activated. Type 'a' to toggle the bootable partition flag:
    Command (m for help): a
    Partition number (1-4): <number from output of p for root partition>

    After that, use the 'p' command to verify the bootable flag is now enabled. Finally, save changes:
    Command (m for help): w

    b. Alternatively to fdisk, use parted to label the new root partition (which contains boot) as bootable.
    # parted /dev/<new root disk>

    From the parted shell, type "print" to list and verify the root partition is there:
    (parted) print

    If the "Flags" column of the root partition doesn't include "boot" then it will need to be enabled:
    (parted) set <root partition number> boot on

    After that, use the "print" command again to verify the flag is now listed for the root partition, then exit parted to save the changes:
    (parted) quit

    4. Updating Legacy GRUB(a) on SLE11 or GRUB2(b) on SLE12.
    NOTE: Steps 4 through 6 will need to be done in a chroot environment on the new root disk. TID7018126 covers how to chroot in rescue mode: https://www.suse.com/support/kb/doc?id=7018126

    a. Updating Legacy GRUB on SLE11
    # vim /boot/grub/menu.lst

    There are two changes that may need to occur in the menu.lst file.

    1. If the contents of /boot are in the root partition which is being changed, we'll need to update the line "root (hd#,#)" which points to the disk with the contents of /boot.

    Since the sd[a-z] device names are not persistent it's recommended to find the equivalent /dev/disk/by-id/ or /dev/disk/by-path/ disk name and to use that instead. Also, the device name might be different in chroot than it was before chroot. Run this command to verify the disk name in chroot: # mount

    For this line Grub uses "hd[0-9]" rather than "sd[a-z]" so sda would be hd0 and sdb would be hd1, and so on. Match to the disk as shown in the mount command within chroot. The partition number in Legacy Grub also starts at 0. So if it were sda1 it would be hd0,0 and if it were sdb2 it would be hd1,1. Update that line accordingly.

    2. In the line starting with the word "kernel" (generally just below the root line we just went over) there should be a root=/dev/<old root disk> parameter. That will need to be updated to match the path and device name of the new root partition: root=/dev/disk/by-id/<new root partition>. Also, if the swap partition was moved to the new disk, you'll need to reflect that with the resume= parameter.
    Save and exit after making the above changes as needed.
    Next, run this command: # yast2 bootloader
    ( you may get a warning message about the boot loader. This can be ignored.)
    Go to the "Boot Loader Installation" tab with ALT + a. Verify it is set to boot from the correct partition. For example, if the content of /boot is in the root partition then make sure it is set to boot from the root partition. Lastly hit ALT + o so that it will save the configuration. While the YaST2 module is existing it should also install the boot loader.
    b. Updating GRUB2 on SLE12
    # vim /etc/default/grub

    The parameter to update is the GRUB_CMDLINE_LINUX_DEFAULT. If there is a "root=/dev/<old root disk>" parameter update it so that it is "root=/dev/<new root disk>". If there is no root= parameter in there add it. Each parameter is space separated so make sure there is a space separating it from the other parameters. Also, if the swap partition was changed to the new disk you'll need to reflect that with the resume= parameter.

    Since the sd[a-z] device names are not persistent it's recommended to find the equivalent /dev/disk/by-id/ or /dev/disk/by-path/ disk name and to use that instead. Also, the device name might be different in chroot than it was before chroot. Run this command to verify the disk name in chroot before comparing with by-id or by-path: # mount

    It might look something like this afterward: GRUB_CMDLINE_LINUX_DEFAULT="root=/dev/disk/by-id/<partition/disk name> resume=/dev/disk/by-id/<partition/disk name> splash=silent quiet showopts"
    After saving changes to that file, run this command to write them to the GRUB2 configuration:
    # grub2-mkconfig -o /boot/grub2/grub.cfg
    (You can ignore any errors about lvmetad in the output of the above command.)
    After that run this command on the disk with the root partition. For example, if the root partition is sda2 run this command on sda:
    # grub2-install /dev/<disk of root partition>

    5. Correct the fstab file to match new partition name(s)
    # vim /etc/fstab

    Correct the root (/) partition mount row in the file so that it points to the new disk/partition name. If any other partitions were changed they will need to be updated as well. For example, change from:
    /dev/<old root disk> / ext3 defaults 1 1
    to:
    /dev/disk/by-id/<new root disk> / ext3 defaults 1 1

    The 3rd through 6th column may vary from the example. The important aspect is to change the row that is root (/) on the second column and adjust in particular the first column to reflect the new root disk/partition. Save and exit after making needed changes.
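Step 5's edit can also be scripted. The awk expression below is a sketch that swaps the first column of the root row only; the device names are the placeholders from the document:

```shell
# Replace the first field of the row whose second field is "/"
# (the root mount) with the new by-id device name. Demo on a temp file;
# the by-id name is a placeholder.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/sda2 / ext3 defaults 1 1
/dev/sda3 /home ext3 defaults 1 2
EOF
new='/dev/disk/by-id/NEW-ROOT-PART'
awk -v new="$new" '$2 == "/" { $1 = new } { print }' "$fstab"
rm -f "$fstab"
```

Note that awk reprints each row with single spaces between fields, which is still valid fstab syntax.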
    6. Lastly, run the following command to rebuild the ramdisk to match updated information:
    # mkinitrd

    7. Exit chroot and reboot the system to test if it will boot using the new disk. Make sure to adjust the BIOS boot order so that the new disk is prioritized first.

    Additional Information: The range of environments that can impact the necessary steps to migrate a root filesystem makes it near impossible to cover every case. Some environments could require tweaks in the steps needed to make this migration a success. As always in administration, have backups ready and proceed with caution.

    Disclaimer

    This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

    [Mar 03, 2021] How to move Linux root partition to another drive quickly - by Dominik Gacek - Medium

    Mar 03, 2021 | medium.com

    How to move Linux root partition to another drive quickly Dominik Gacek

    Dominik Gacek

    Jun 21, 2019 · 4 min read

    There's a bunch of information on the internet about how to clone Linux drives or partitions to other drives and partitions using solutions like partclone , clonezilla , partimage , dd or similar, and while most of them work just fine, they're not always the fastest way to achieve the result.

    Today I want to show you another approach that combines most of them, and I am finding it the easiest and fastest of all.

    Assumptions:

    1. You are using GRUB 2 as a boot loader
    2. You have two disks/partitions where a destination one is at least the same size or larger than the original one.

    Let's dive in into action.

    Just "dd" it

    First thing that we have to do is to create a direct copy of our current root partition from our source disk onto our target one.

    Before you start, you have to know the device names of your drives. To check them, type:

    sudo fdisk -l
    

    You should see the list of all the disks and partitions in your system, along with the corresponding device names, most probably something like /dev/sdx where the x will be replaced with the proper device letter. In addition, you'll see all of the partitions for that device suffixed with a partition number, so something like /dev/sdx1

    Based on the partition size, device identifier and the file-system, you can say what partitions you'll switch your installation from and which one will be the target one.

    I am assuming here, that you already have the proper destination partition created, but if you do not, you can utilize one of the tools like GParted or similar to create it.

    Once you'll have those identifiers, let's use dd to create a clone, with command similar to.

    sudo dd if=/dev/sdx1 of=/dev/sdy1 bs=64K conv=noerror,sync
    

    Where /dev/sdx1 is your source partition, and /dev/sdy1 is your destination one.

    It's really important to provide the proper devices to the if and of arguments, because otherwise you could overwrite your source disk instead!

    The above process will take a while and once it's finished you should already be able to mount your new partition into the system by using two commands:

    sudo mkdir /mnt/new
    sudo mount /dev/sdy1 /mnt/new
    

    There's also a chance that your device will be mounted automatically but that varies on a Linux distro of choice.

    Once you execute it, if everything went smoothly you should be able to run

    ls -l /mnt/new
    

    And as the outcome you should see all the files from the core partition, being stored in the new location.

    It finishes the first and most important part of the operation.

    Now the tricky part

    We have our new partition on the shiny new drive, but there is a problem: since they are direct clones, both of the devices have the same UUID, so if we want to boot our installation from the new device properly, we'll have to adjust that as well.

    First, execute following command to see the current disk uuid's

    blkid
    

    You'll see all of the partitions with the corresponding UUID.
    Now, if we want to change it we have to first generate a new one using:

    uuidgen
    

    which will generate a brand new UUID for us. Then let's copy the result and execute a command similar to:

    sudo tune2fs /dev/sdy1 -U cd6ecfb1-05e0-4dd7-89e7-8e78dad1fa0e
    

    where in place of /dev/sdy1 you should provide your target partition device identifier, and in place of -U flag value, you should paste the value generated from uuidgen command.
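The two steps can also be combined; the sketch below generates the UUID and shows the tune2fs call as a comment, since it rewrites a real filesystem. /dev/sdy1 is the illustrative target partition from the article:

```shell
# Generate a fresh UUID (uuidgen, with a Linux /proc fallback) and
# apply it to the cloned filesystem in one go.
NEWUUID=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
echo "$NEWUUID"
# Destructive step, shown as a comment:
#   sudo tune2fs -U "$NEWUUID" /dev/sdy1
```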

    Now the last thing to do is to update our fstab file on the new partition so that it'll contain the proper UUID. To do this, let's edit it with:

    sudo vim /etc/fstab
    # or nano or whatever editor of choice
    

    you'll see something similar to the code below inside:

    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point> <type> <options> <dump> <pass>
    # / was on /dev/sdc1 during installation
    UUID=cd6ecfb1-05e0-4dd7-89e7-8e78dad1fa0e / ext4 errors=remount-ro 0 1
    # /home was on /dev/sdc2 during installation
    UUID=667f98f4-9db1-415b-b326-65d16c528e29 /home ext4 defaults 0 2
    /swapfile none swap sw 0 0
    UUID=7AA7–10F1 /boot/efi vfat defaults 0 1
    

    The UUID= entry for the / mount point is the important part for us, so what we want to do is to paste our new UUID there, replacing the current one specified for the / path.

    And that's almost it

    The last part you have to do is to simply update the grub.

    There are a number of options here, for the brave ones you can edit the /boot/grub/grub.cfg

    Another option is to simply reinstall grub onto our new drive with the command:

    sudo grub-install /dev/sdy
    

    And if you do not want to bother with editing or reinstalling grub manually, you can simply use the tool called grub-customizer to have a simple and easy GUI for all of those operations.

    Happy partitioning! :)

    [Mar 03, 2021] HDD to SSD cloning on Linux without re-installing - PCsuggest

    Mar 03, 2021 | www.pcsuggest.com

    HDD to SSD cloning on Linux without re-installing

    Updated - March 25, 2020 by Arnab Satapathi

    No doubt the old spinning hard drives are the main bottleneck of any Linux PC. Overall system responsiveness is highly dependent on storage drive performance.

    So, here's how you can clone HDD to SSD without re-installing the existing Linux distro. But first, let's be clear about a few things.

    Of course it's not the only way to clone Linux from HDD to SSD; rather, it's exactly what I did after buying an SSD for my laptop.

    This tutorial should work on every Linux distro with a little modification, depending on which distro you're using. I was using Ubuntu.


    Hardware setup

    As you're going to copy files from the hard drive to the SSD, you need to attach both disks to your PC/laptop at the same time.

    For desktops, it's easier, as there are always at least 2 SATA ports on the motherboard. You just have to connect the SSD to any free SATA port and you're done.


    On laptops it's a bit tricky, as there's no free SATA port. If the laptop has a DVD drive, then you could remove it and use a "2nd hard drive caddy".


    It could be either 9.5 mm or 12.7 mm. Open up your laptop's DVD drive and get a rough measurement.

    But if you don't want to play around with your DVD drive or there's no DVD at all, use a USB to SATA adapter .

    Preferably a USB 3 adapter for better speed, like this one . However the "caddy" is the best you can do with your laptop.


    You'll need a bootable USB drive for later steps, booting any live Linux distro of your choice; I used Ubuntu.

    You could use any method to create it; the dd approach is the simplest. Here are detailed tutorials: with MultiBootUSB , and a bootable USB with GRUB .

    Create Partitions on the SSD

    After successfully attaching the SSD, you need to partition it according to its capacity and your choice. My SSD, a SAMSUNG 850 EVO, was absolutely blank; yours might be as well. So I had to create the partition table before creating disk partitions.

    Now many questions arise: What kind of partition table? How many partitions? Is there any need for a swap partition?


    Well, if your laptop/PC has a UEFI-based BIOS and you want to use the UEFI functionality, you should use the GPT partition table.

    For a regular desktop, 2 separate partitions are enough: a root partition and a home partition. But if you want to boot through UEFI, then you also need to create a FAT32 partition of 100 MB or more.

    I think a 32 GB root partition is enough, but you have to decide based on your future plans. You can go as low as an 8 GB root partition if you know what you're doing.


    In my opinion, you don't need a dedicated swap partition. If there's any need for swap in the future, you can just create a swap file.

    So, here's how I partitioned the disk: an MBR partition table, a 32 GB root partition, and the rest of the 256 GB (232.89 GiB) as home.

    These SSD partitions were created with GParted on the existing Linux system on the HDD. The SSD was connected to the DVD drive slot with a caddy, showing up as /dev/sdb here.

    Mount the HDD and SSD partitions

    At the beginning of this step, you need to shut down your PC and boot into any live Linux distro of your choice from a bootable USB drive.

    The purpose of booting into a live Linux session is to copy everything from the old root partition in a cleaner way. I mean, why copy unnecessary files or directories under /dev , /proc , /sys , /var , /tmp ?


    And of course you know how to boot from a USB drive, so I'm not going to repeat that. After booting into the live session, you have to mount both the HDD and SSD partitions.

    As I used an Ubuntu live session, I just opened up the file manager to mount the volumes. At this point you have to be absolutely sure which are the old and new root and home partitions.

    And if you didn't have a separate /home partition on the HDD previously, then you have to be careful while copying files, as there could be lots of content that won't fit inside the tiny root volume of the SSD.

    Finally, if you don't want to use a graphical tool like a file manager to mount the disk partitions, that's even better. An example is below: only commands, not much explanation.

    sudo -i    # after booting to the live session
    
    mkdir -p /mnt/{root1,root2,home1,home2}       # Create the directories
    
    mount /dev/sdb1 /mnt/root1/       # mount the root partitions
    mount /dev/sdc1 /mnt/root2/
    
    mount /dev/sdb2 /mnt/home1/       # mount the home partitions
    mount /dev/sdc2 /mnt/home2/
    
    Copy contents from the HDD to SSD

    In this step, we'll be using the rsync command to clone the HDD to the SSD while preserving proper file permissions. And we'll assume that all the partitions are mounted as below.

    • Old root partition of the hard drive mounted on /media/ubuntu/root/
    • Old home partition of the hard drive on /media/ubuntu/home/
    • New root partition of the SSD, on /media/ubuntu/root1/
    • New home partition of the SSD mounted on /media/ubuntu/home1/

    Actually, in my case both the root and home partitions were labelled root and home, so udisks2 created the mount directories as above.

    Note: Most probably your mount points are different. Don't just copy-paste the commands below; modify them according to your system and requirements.


    First copy the contents of one root partition to another.

    rsync -axHAWXS --numeric-ids --info=progress2 /media/ubuntu/root/ /media/ubuntu/root1/
    

    You can also see the transfer progress, which is helpful.

    The copying process will take about 10 minutes or so to complete, depending on the size of its contents.

    Note: If there was no separate home partition on your previous installation and there's not enough space in the SSD's root partition, exclude the /home directory.

    For that, we'll use the rsync command again.

    rsync -axHAWXS --numeric-ids --info=progress2 --exclude=/home /media/ubuntu/root/ /media/ubuntu/root1/
    

    Now copy the contents of one home partition to the other. This is a bit tricky if your SSD is smaller than the HDD: you have to use the --exclude flag with rsync to leave out certain large files or folders.

    So, here as an example, I wanted to exclude a few excessively large folders.

    rsync -axHAWXS --numeric-ids --info=progress2 --exclude={b00m/OS,b00m/Downloads} /media/ubuntu/home/ /media/ubuntu/home1/
    

    Excluding files and folders with rsync is a bit tricky: exclude patterns are matched relative to the source folder, which is the starting point of any file or directory path. Make sure the exclude paths are specified correctly.
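To see the matching rule in action, here's a self-contained sketch with throwaway directories; note that the pattern is given relative to the source directory (the transfer root), not as an absolute path:

```shell
# Miniature stand-in for the home partition
mkdir -p home/b00m/OS home/b00m/Documents home1
touch home/b00m/OS/big.iso home/b00m/Documents/notes.txt

# The pattern "b00m/OS" is matched relative to the transfer root "home/"
rsync -a --exclude='b00m/OS' home/ home1/

find home1 -type f    # only b00m/Documents/notes.txt is copied
```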


    Note: You need to go through the below step only if you excluded the /home directory while cloning to SSD, as said above.

    rsync -axHAWXS --numeric-ids --info=progress2 /media/ubuntu/root/home/ /media/ubuntu/home1/
    

    Hope you've got the point: for a proper HDD to SSD clone on Linux, copy the contents of the HDD's root partition to the new SSD's root partition, and do the same thing for the home partition too.

    Install GRUB bootloader on the SSD

    The SSD won't boot until there's a properly configured bootloader on it, and there's a very good chance that you were using GRUB as the boot loader.

    So, to install GRUB, we have to chroot into the root partition of the SSD and install it from there. Before that, be sure about which device under the /dev directory is your SSD. In my case, it was /dev/sdb .

    Note: You could just copy the first 512 bytes from the HDD and dump them to the SSD, but I'm not going that way this time.

    So, the first step is chrooting; here are all the commands below, run as the super user.

    sudo -i               # login as super user
    
    mount -o bind /dev/ /media/ubuntu/root1/dev/
    mount -o bind /dev/pts/ /media/ubuntu/root1/dev/pts/ 
    mount -o bind /sys/ /media/ubuntu/root1/sys/
    mount -o bind /proc/ /media/ubuntu/root1/proc/
    
    chroot /media/ubuntu/root1/
    


    After successfully chrooting into the SSD's root partition, install GRUB. There's also a catch: if you want to use a UEFI-compatible GRUB, it's another long path. We'll be installing the legacy BIOS version of GRUB here.

    grub-install /dev/sdb --boot-directory=/boot/ --target=i386-pc
    

    If GRUB is installed without any problem, then update the configuration file.

    update-grub
    

    These two commands above are to be run inside the chroot, and don't exit from the chroot now. Here's the detailed GRUB rescue tutorial, both for legacy BIOS and UEFI systems.

    Update the fstab entry

    You have to update the fstab entries so that the filesystems are mounted properly at boot.

    Use the blkid command to find the proper UUIDs of the partitions.

    Now open up the /etc/fstab file with your favorite text editor and add the proper root and home UUIDs in the proper places.

    nano /etc/fstab
    

    The above is the final fstab entry from my laptop's Ubuntu installation.
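For reference, the entries you end up with have this shape; the UUIDs below are made-up placeholders, so substitute the ones blkid printed for your SSD's partitions:

```
# <file system>                            <mount point>  <type>  <options>          <dump> <pass>
UUID=1111aaaa-2222-bbbb-3333-cccc4444dddd  /              ext4    errors=remount-ro  0      1
UUID=5555eeee-6666-ffff-7777-88889999aaaa  /home          ext4    defaults           0      2
```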

    Shutdown and boot from the SSD

    If you were using a USB to SATA converter to do all the above steps, then it's time to connect the SSD to a SATA port.

    For desktops it's not a problem; just connect the SSD to any available SATA port. But many laptops refuse to boot if the DVD drive is replaced with an SSD or HDD. In that case, remove the hard drive and slip the SSD into its place.

    After doing all the hardware stuff, it's better to check if the SSD is recognized by the BIOS/UEFI at all. Hit the BIOS setup button while powering it up, and check all the disks.

    If the SSD is detected, then set it as the default boot device. Save all the changes to the BIOS/UEFI and hit the power button again.

    Now it's the moment of truth: if the HDD to SSD cloning was done right, then Linux should boot. It will boot much faster than before; you can check that with the systemd-analyze command.

    Conclusion

    As said before, this is neither the only way nor a perfect one, but it was pretty simple for me. I got the idea from the OpenWrt extroot setup, though previously I used the squashfs tools instead of rsync.

    It took around 20 minutes to clone my HDD to SSD. Writing this tutorial, though, took around 15 times longer than that.

    Hope I'll be able to add the GRUB installation process for UEFI-based systems to this tutorial soon, stay tuned!

    Also, please don't forget to share your thoughts and suggestions in the comments section. Your comments

    1. Sh3l says

      December 21, 2020

      Hello,
      It seems you haven't gotten around to writing that UEFI-based article yet. But right now I really need the steps necessary to clone an HDD to an SSD on a UEFI-based system. Can you please let me know how to do it?

      • Arnab Satapathi says

        December 22, 2020

        Create an extra UEFI partition, along with root and home partitions, FAT32, 100 to 200 MB, install GRUB in UEFI mode, it should boot.
        Commands should be like this -
        mount /dev/sda2 /boot/efi
        grub-install /dev/sda --target=x86_64-efi

        sda2 is the EFI partition.

        This could be helpful- https://www.pcsuggest.com/grub-rescue-linux/#GRUB_rescue_on_UEFI_systems

        Then edit the grub.cfg file under /boot/grub/ , you're good to go.

        If it's not booting, try GRUB rescue; boot from there and install GRUB.

    2. Pronay Guha says

      November 9, 2020

      I'm already using Kubuntu 20.04, and now I'm trying to add an SSD to my laptop, which runs Windows alongside. I want the data to stay, but the Kubuntu OS should use the SSD instead of the HDD. How do I do it?

    3. none says

      May 23, 2020

      Can you explain what to do if the original HDD has swap and you don't want it on the SSD?
      Thanks.

      • Arnab Satapathi says

        May 23, 2020

        You can ignore the Swap partition, as it's not essential for booting.

        Edit the /etc/fstab file, and use a swap file instead.

    4. none says

      May 21, 2020

      A couple of problems:
      In one section you mount homeS and rootS as root1 root2 home1 home2 but in the next sectionS you call them root root1 home home1
      In the blkid image sda is SSD and sdb is HDD but you said in the previous paragraph that sdb is your SSD
      Thanks for the guide

      • Arnab Satapathi says

        May 23, 2020

        The first portion is just an example, not the actual commands.

        There are some confusing paragraphs and formatting errors, I agree.

    5. oybek says

      April 21, 2020

      Thank you very much for the article
      Yesterday moved linux from hdd to ssd without any problem
      Brilliant article

      • Pronay Guha says

        November 9, 2020

        hey, I'm trying to move Linux from HDD to SSD with windows as a dual boot option.
        What changes should I make?

    6. Passingby says

      March 25, 2020

      Thank you for your article. It was very helpful. But I see one disadvantage: when you copy like cp -a /media/ubuntu/root/ /media/ubuntu/root1/ , a root folder will be created inside root1, rather than its contents being copied separately. To avoid this you must add * after the / .
      It should look like cp -a /media/ubuntu/root/* /media/ubuntu/root1/ . In my opinion the rsync command is much better: you can see the files being copied, whereas when I used cp, I couldn't tell whether the process had hung or not.

    7. David Keith says

      December 8, 2018

      Just a quick note: rsync, scp, cp etc. all seem to have a file size limitation of approximately 100GB. So this tutorial will work well with the average filesystem, but will bomb repeatedly if a file is extremely large.

    8. oldunixguy says

      June 23, 2018

      Question: If one doesn't need to exclude anything why not use "cp -a" instead of rsync?

      Question: You say "use a UEFI compatible GRUB, then it's another long path" but you don't tell us how to do this for UEFI. How do we do it?

      • Arnab Satapathi says

        June 23, 2018

        1. Yeah, using cp -a is preferable if we don't have to exclude anything.
        2. At the moment of writing, I didn't have any PC/laptop with UEFI firmware.

        Thanks for the feedback, fixed the first issue.

    9. Alfonso says

      February 8, 2018

      best tutorial ever, thank you!

      • Arnab Satapathi says

        February 8, 2018

        You're most welcome; truly, I don't know how to respond to such praise. Thanks!

    10. Emmanuel says

      February 3, 2018

      By far the best tutorial I've found by "quickly" searching DuckDuckGo. Planning to migrate my system in early 2018. Thank you! I now visualize quite clearly the different steps I'll have to adapt and go through. It also sticks to the KISS principle. Thank you again, the time you invested is very useful, at least for me!

      Best regards.

      Emmanuel

      • Arnab Satapathi says

        February 3, 2018

        Wow! That's motivating, thanks Emmanuel.

    [Mar 03, 2021] What Is /dev/shm And Its Practical Usage

    Mar 03, 2021 | www.cyberciti.biz

    Author: Vivek Gite Last updated: March 14, 2006 58 comments

    /dev/shm is nothing but an implementation of the traditional shared memory concept. It is an efficient means of passing data between programs: one program creates a memory portion which other processes (if permitted) can access. This results in speeding things up on Linux.

    shm / shmfs is also known as tmpfs, which is a common name for a temporary file storage facility on many Unix-like operating systems. It is intended to appear as a mounted file system, but one which uses virtual memory instead of a persistent storage device.


    If you type the mount command you will see /dev/shm listed as a tmpfs file system. It is therefore a file system which keeps all files in virtual memory. Everything in tmpfs is temporary in the sense that no files will be created on your hard drive; if you unmount a tmpfs instance, everything stored therein is lost. By default, almost all Linux distros are configured to use /dev/shm:
    $ df -h
    Sample outputs:

    Filesystem            Size  Used Avail Use% Mounted on
    /dev/mapper/wks01-root
                          444G   70G  351G  17% /
    tmpfs                 3.9G     0  3.9G   0% /lib/init/rw
    udev                  3.9G  332K  3.9G   1% /dev
    tmpfs                 3.9G  168K  3.9G   1% /dev/shm
    /dev/sda1             228M   32M  184M  15% /boot
    
    Nevertheless, where can I use /dev/shm?

    You can use /dev/shm to improve the performance of application software such as Oracle or overall Linux system performance. On heavily loaded system, it can make tons of difference. For example VMware workstation/server can be optimized to improve your Linux host's performance (i.e. improve the performance of your virtual machines).
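Since /dev/shm is just a mounted tmpfs, any program can treat it as an ordinary directory; files written there live in RAM and vanish at unmount or reboot. A quick sketch:

```shell
# Write and read back a file that is held entirely in memory
echo "fast scratch data" > /dev/shm/demo.txt
cat /dev/shm/demo.txt

# Confirm the mount really is a tmpfs
df -h /dev/shm

# Clean up; the file would disappear at reboot anyway
rm /dev/shm/demo.txt
```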

    In this example, remount /dev/shm with 8G size as follows:
    # mount -o remount,size=8G /dev/shm
    To be frank, if you have more than 2GB of RAM plus multiple virtual machines, this hack always improves performance. The next example gives you a tmpfs instance on /disk2/tmpfs that can allocate 5GB of RAM/swap across 5k inodes and is accessible only by root:
    # mount -t tmpfs -o size=5G,nr_inodes=5k,mode=700 tmpfs /disk2/tmpfs
    Where size=5G sets the maximum size of the filesystem, nr_inodes=5k limits the number of inodes, and mode=700 makes it accessible only to its owner (root).

    How do I restrict or modify size of /dev/shm permanently?

    You need to add or modify an entry in the /etc/fstab file so that the system can read it after a reboot. Edit /etc/fstab as the root user, enter:
    # vi /etc/fstab
    Append or modify /dev/shm entry as follows to set size to 8G

    none      /dev/shm        tmpfs   defaults,size=8G        0 0
    

    Save and close the file. For the changes to take effect immediately remount /dev/shm:
    # mount -o remount /dev/shm
    Verify the same:
    # df -h


    [Mar 03, 2021] How to move the /root directory

    Mar 03, 2021 | serverfault.com


    I would like to move my root user's directory to a larger partition. Sometimes "he" runs out of space when performing tasks.

    Here are my partitions:

    host3:~# df
    Filesystem           1K-blocks      Used Available Use% Mounted on
    /dev/sda1               334460    320649         0 100% /
    tmpfs                   514128         0    514128   0% /lib/init/rw
    udev                     10240       720      9520   8% /dev
    tmpfs                   514128         0    514128   0% /dev/shm
    /dev/sda9            228978900   1534900 215812540   1% /home
    /dev/sda8               381138     10305    351155   3% /tmp
    /dev/sda5              4806904    956852   3605868  21% /usr
    /dev/sda6              2885780   2281584    457608  84% /var
    

    The root user's home directory is /root. I would like to relocate this, and any other user's home directories, to a new location, perhaps on sda9. How do I go about this? (debian, user-management, linux; asked Nov 30 '10 by nicholas.alipaz)


    You should avoid symlinks; they can cause nasty bugs to appear one day, which are very hard to debug.

    Use mount --bind :

    # as root
    cp -a /root /home/
    echo "" >> /etc/fstab
    echo "/home/root /root none defaults,bind 0 0" >> /etc/fstab
    
    # do it now
    cd / ; mv /root /root.old; mkdir /root; mount -a
    

    The bind mount will be made at every reboot; you should reboot now if you want to catch errors soon. (answered Nov 30 '10 by shellholic)



    Never tried it, but you shouldn't have a problem with:
    cd / to make sure you're not in the directory to be moved
    mv /root /home/root
    ln -s /home/root /root to symlink it back to the original location. (answered Nov 30 '10 by James L)


    [Mar 03, 2021] The dmesg command is used to print the kernel's message buffer.

    Mar 03, 2021 | www.redhat.com

    11 Linux commands I can't live without - Enable Sysadmin

    Command 9: dmesg

    The dmesg command is used to print the kernel's message buffer. This is another important command that you cannot work without. It is much easier to troubleshoot a system when you can see what is going on, and what happened behind the scenes.
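A couple of invocations I reach for most often; the flags below are from the util-linux version of dmesg, and on hardened systems reading the kernel buffer may require root:

```shell
# Show the most recent kernel messages with human-readable timestamps
dmesg -T | tail -n 20

# Show only errors and worse -- handy when hunting a hardware failure
dmesg --level=err,crit,alert,emerg
```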


    [Mar 03, 2021] The classic case of "low free disk space"

    Mar 03, 2021 | www.redhat.com

    Originally from: Sysadmin university- Quick and dirty Linux tricks - Enable Sysadmin

    Another example from real life: You are troubleshooting an issue and find out that one file system is at 100 percent of its capacity.

    There may be many subdirectories and files in production, so you may have to come up with some way to classify the "worst directories" because the problem (or solution) could be in one or more.

    In the next example, I will show a very simple scenario to illustrate the point.


    The sequence of steps is:

    1. We go to the file system where the disk space is low (I used my home directory as an example).
    2. Then, we use the command du -sk * to show the sizes of files and directories in kilobytes.
    3. That requires some classification for us to find the big ones, but sort alone is not enough because, by default, this command treats the numbers as characters rather than as values.
    4. We add -n to the sort command, which now shows us the biggest directories.
    5. In case we have to navigate to many other directories, creating an alias might be useful.
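The sequence above condenses into a one-liner. Here's a sketch against throwaway directories, where demo/ is a stand-in for the full file system:

```shell
# Two directories of very different size
mkdir -p demo/small demo/big
head -c 1024    /dev/zero > demo/small/f
head -c 1048576 /dev/zero > demo/big/f

# Sizes in KB, numerically sorted: the biggest offender lands last
du -sk demo/* | sort -n
```

On the real system you'd run du -sk * from the top of the full file system, and an alias such as alias bigdirs='du -sk * | sort -n' saves the retyping.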

    [Mar 01, 2021] Serious 10-year-old flaw in Linux sudo command; a new version patches it

    Mar 01, 2021 | www.networkworld.com

    Linux users should immediately patch a serious vulnerability in the sudo command that, if exploited, can allow unprivileged users to gain root privileges on the host machine.

    Called Baron Samedit, the flaw has been "hiding in plain sight" for about 10 years, and was discovered earlier this month by researchers at Qualys and reported to sudo developers, who came up with patches Jan. 19, according to a Qualys blog . (The blog includes a video of the flaw being exploited.)


    A new version of sudo -- sudo v1.9.5p2 -- has been created to patch the problem, and notifications have been posted for many Linux distros including Debian, Fedora, Gentoo, Ubuntu, and SUSE, according to Qualys.

    According to the common vulnerabilities and exposures (CVE) description of Baron Samedit ( CVE-2021-3156 ), the flaw can be exploited "via 'sudoedit -s' and a command-line argument that ends with a single backslash character."


    According to Qualys, the flaw was introduced in July 2011 and affects legacy versions from 1.8.2 to 1.8.31p2 as well as default configurations of versions from 1.9.0 to 1.9.5p1.
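To check whether a given machine is affected, compare the installed version against the fixed release; the sudoedit probe below is the quick test Qualys describes (a vulnerable build answers with an error beginning "sudoedit:", a patched one with "usage:"):

```shell
# Patched releases are 1.9.5p2 and later (or a distro backport of the fix)
sudo --version | head -n 1

# Qualys' non-destructive probe for CVE-2021-3156
sudoedit -s / 2>&1 | head -n 1
```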

    [Mar 01, 2021] Smart ways to compare files on Linux by Sandra Henry-Stocker

    Feb 16, 2021 | www.networkworld.com

    colordiff

    The colordiff command enhances the differences between two text files by using colors to highlight the differences.


    $ colordiff attendance-2020 attendance-2021
    10,12c10
    < Monroe Landry
    < Jonathan Moody
    < Donnell Moore
    ---
    > Sandra Henry-Stocker
    

    If you add a -u option, those lines that are included in both files will appear in your normal font color.

    wdiff

    The wdiff command uses a different strategy. It highlights the lines that are only in the first or second files using special characters. Those surrounded by square brackets are only in the first file. Those surrounded by braces are only in the second file.

    $ wdiff attendance-2020 attendance-2021
    Alfreda Branch
    Hans Burris
    Felix Burt
    Ray Campos
    Juliet Chan
    Denver Cunningham
    Tristan Day
    Kent Farmer
    Terrie Harrington
    [-Monroe Landry                 <== lines only in file 1 start
    Jonathan Moody
    Donnell Moore-]                 <== lines only in file 1 stop
    {+Sandra Henry-Stocker+}        <== line only in file 2
    Leanne Park
    Alfredo Potter
    Felipe Rush
    
    vimdiff

    The vimdiff command takes an entirely different approach. It uses the vim editor to open the files in a side-by-side fashion. It then highlights the lines that are different using background colors and allows you to edit the two files and save each of them separately.

    Unlike the commands described above, it runs inside the vim editor in your terminal; the gvimdiff variant opens the same side-by-side view in desktop windows.


    On Debian systems, vimdiff is included in the vim package, which you can install with this command:

    $ sudo apt install vim
    


    kompare

    The kompare command runs on your desktop. It displays differences between files to be viewed and merged and is often used by programmers to see and manage differences in their code. It can compare files or folders, and it's quite customizable.

    Learn more at kde.org .

    kdiff3

    The kdiff3 tool allows you to compare up to three files and not only see the differences highlighted, but merge the files as you see fit. This tool is often used to manage changes and updates in program code.

    Like kompare, kdiff3 runs on the desktop.

    You can find more information on kdiff3 at sourceforge .

    [Feb 28, 2021] Tagging commands on Linux by Sandra Henry-Stocker

    Nov 20, 2020 | www.networkworld.com

    Tags provide an easy way to associate strings that look like hash tags (e.g., #HOME ) with commands that you run on the command line. Once a tag is established, you can rerun the associated command without having to retype it. Instead, you simply type the tag. The idea is to use tags that are easy to remember for commands that are complex or bothersome to retype.

    Unlike setting up an alias, tags are associated with your command history. For this reason, they only remain available if you keep using them. Once you stop using a tag, it will slowly disappear from your command history file. Of course, for most of us, that means we can type 500 or 1,000 commands before this happens. So, tags are a good way to rerun commands that are going to be useful for some period of time, but not for those that you want to have available permanently.
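    Since a tag lives only as long as its command remains in your history file, one way to stretch its lifetime is to enlarge the history buffer. A minimal sketch for bash; the sizes are arbitrary examples, not recommendations:

```shell
# In ~/.bashrc -- keep more commands so tagged entries survive longer.
# The sizes below are arbitrary examples.
HISTSIZE=5000        # commands kept in memory for the current session
HISTFILESIZE=10000   # commands kept in ~/.bash_history across sessions
```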

    To set up a tag, type a command and then add your tag at the end of it. The tag must start with a # sign and should be followed immediately by a string of letters. This keeps the tag from being treated as part of the command itself. Instead, it's handled as a comment but is still included in your command history file. Here's a very simple and not particularly useful example:

    $ echo "I like tags" #TAG
    

    This particular echo command is now associated with #TAG in your command history. If you use the history command, you'll see it:


    $ history | grep TAG
      998  08/11/20 08:28:29 echo "I like tags" #TAG     <==
      999  08/11/20 08:28:34 history | grep TAG
    

    Afterwards, you can rerun the echo command shown by entering !? followed by the tag.

    $ !? #TAG
    echo "I like tags" #TAG
    "I like tags"
    

    The point is that you will likely only want to do this when the command you want to run repeatedly is so complex that it's hard to remember or just annoying to type repeatedly. To list your most recently updated files, for example, you might use a tag #REC (for "recent") and associate it with the appropriate ls command. The command below lists files in your home directory regardless of where you are currently positioned in the file system, lists them in reverse date order, and displays only the five most recently created or changed files.

    $ ls -ltr ~ | tail -5 #REC <== Associate the tag with a command
    drwxrwxr-x  2 shs     shs        4096 Oct 26 06:13 PNGs
    -rw-rw-r--  1 shs     shs          21 Oct 27 16:26 answers
    -rwx------  1 shs     shs         644 Oct 29 17:29 update_user
    -rw-rw-r--  1 shs     shs      242528 Nov  1 15:54 my.log
    -rw-rw-r--  1 shs     shs      266296 Nov  5 18:39 political_map.jpg
    $ !? #REC                       <== Run the command that the tag is associated with
    ls -ltr ~ | tail -5 #REC
    drwxrwxr-x  2 shs     shs        4096 Oct 26 06:13 PNGs
    -rw-rw-r--  1 shs     shs          21 Oct 27 16:26 answers
    -rwx------  1 shs     shs         644 Oct 29 17:29 update_user
    -rw-rw-r--  1 shs     shs      242528 Nov  1 15:54 my.log
    -rw-rw-r--  1 shs     shs      266296 Nov  5 18:39 political_map.jpg
    

    You can also rerun tagged commands using Ctrl-r (hold Ctrl key and press the "r" key) and then typing your tag (e.g., #REC). In fact, if you are only using one tag, just typing # after Ctrl-r should bring it up for you. The Ctrl-r sequence, like !? , searches through your command history for the string that you enter.

    Tagging locations

    Some people use tags to remember particular file system locations, making it easier to return to directories they're working in without having to type complete directory paths.


    $ cd /apps/data/stats/2020/11 #NOV
    $ cat stats
    $ cd
    !? #NOV        <== takes you back to /apps/data/stats/2020/11
    

    After using the #NOV tag as shown, whenever you need to move into the directory associated with #NOV , you have a quick way to do so – and one that doesn't require that you think too much about where the data files are stored.

    NOTE: Tags don't need to be in all uppercase letters, though this makes them easier to recognize and unlikely to conflict with any commands or file names that are also in your command history.

    Alternatives to tags

    While tags can be very useful, there are other ways to do the same things that you can do with them.

    To make commands easily repeatable, assign them to aliases.


    $ alias recent="ls -ltr ~ | tail -5"
    

    To make multiple commands easily repeatable, turn them into a script.

    #!/bin/bash
    echo "Most recently updated files:"
    ls -ltr ~ | tail -5
    

    To make file system locations easier to navigate to, create symbolic links.

    $ ln -s /apps/data/stats/2020/11 NOV
    

    To rerun recently used commands, use the up arrow key to back up through your command history until you reach the command you want to reuse and then press the enter key.

    You can also rerun recent commands by typing something like "history | tail -20" and then typing "!" followed by the number to the left of the command you want to rerun (e.g., !999).

    Wrap-up

    Tags are most useful when you need to run complex commands again and again in a limited timeframe. They're easy to set up and they fade away when you stop using them.

    [Feb 28, 2021] Selectively reusing commands on Linux by Sandra Henry-Stocker

    Feb 23, 2021 | www.networkworld.com

    Reuse a command by typing a portion of it

    One easy way to reuse a previously entered command (one that's still in your command history) is to type the beginning of the command. If the bottom of your history buffer looks like this, you could rerun the ps command that's used to count system processes simply by typing !p .


    $ history | tail -7
     1002  21/02/21 18:24:25 alias
     1003  21/02/21 18:25:37 history | more
     1004  21/02/21 18:33:45 ps -ef | grep systemd | wc -l
     1005  21/02/21 18:33:54 ls
     1006  21/02/21 18:34:16 echo "What's next?"
    

    You can also rerun a command by entering a string that was included anywhere within it. For example, you could rerun the ps command shown in the listing above by typing !?sys? . The question marks act as string delimiters.

    $ !?sys?
    ps -ef | grep systemd | wc -l
    5
    

    You could also rerun the command shown in the listing above by typing !1004 , but this would be more trouble if you're not looking at a listing of recent commands.

    Run previous commands with changes

    After the ps command shown above, you could count kworker processes instead of systemd processes by typing ^systemd^kworker^ . This replaces one process name with the other and runs the altered command. As you can see in the commands below, this string substitution allows you to reuse commands when they differ only a little.

    $ ps -ef | grep systemd | awk '{ print $2 }' | wc -l
    5
    $ ^systemd^smbd^
    ps -ef | grep smbd | awk '{ print $2 }' | wc -l
    5
    $ ^smbd^kworker^
    ps -ef | grep kworker | awk '{ print $2 }' | wc -l
    13
    

    The string substitution is also useful if you mistype a command or file name.


    $ sudo ls -l /var/log/samba/corse
    ls: cannot access '/var/log/samba/corse': No such file or directory
    $ ^se^es^
    sudo ls -l /var/log/samba/cores
    total 8
    drwx------. 2 root root 4096 Feb 16 10:50 nmbd
    drwx------. 2 root root 4096 Feb 16 10:50 smbd
    
    Reach back into history

    You can also rerun a command by specifying how many commands back it was entered. Entering !-11 would rerun the command you typed 11 commands earlier. In the output below, !-3 reruns the first of the three earlier commands displayed.

    $ ps -ef | wc -l
    132
    $ who
    shs      pts/0        2021-02-21 18:19 (192.168.0.2)
    $ date
    Sun 21 Feb 2021 06:59:09 PM EST
    $ !-3
    ps -ef | wc -l
    133
    
    Reuse command arguments

    Another thing you can do with your command history is reuse arguments that you provided to various commands. For example, the character sequence !:1 represents the first argument provided to the most recently run command, !:2 the second, !:3 the third and so on. !:$ represents the final argument. In this example, the arguments are reversed in the second echo command.

    $ echo be the light
    be the light
    $ echo !:3 !:2 !:1
    echo light the be
    light the be
    $ echo !:3 !:$
    echo light light
    light light
    

    If you want to run a series of commands using the same argument, you could do something like this:

    $ echo nemo
    nemo
    $ id !:1
    id nemo
    uid=1001(nemo) gid=1001(nemo) groups=1001(nemo),16(fish),27(sudo)
    $ df -k /home/!:$
    df -k /home/nemo
    Filesystem     1K-blocks     Used Available Use% Mounted on
    /dev/sdb1      446885824 83472864 340642736  20% /home
    

    Of course, if the argument were a long and complicated string, this technique might actually save you some time and trouble. Please remember this is just an example!

    Wrap-Up

    Simple history command tricks can often save you a lot of trouble by allowing you to reuse rather than retype previously entered commands. Remember, however, that using strings to identify commands will recall only the most recent use of that string and that you can only rerun commands in this way if they are being saved in your history buffer.


    [Feb 28, 2021] Keep out ahead of shadow IT by Steven A. Lowe

    Sep 28, 2015 | www.networkworld.com

    Shadow IT has been presented as a new threat to IT departments because of the cloud. Not true -- the cloud has simply made it easier for non-IT personnel to acquire and create their own solutions without waiting for IT's permission. Moreover, the cloud has made this means of technical problem-solving more visible, bringing shadow IT into the light. In fact, "shadow IT" is more of a legacy pejorative for what should better be labeled "DIY IT." After all, shadow IT has always been about people solving their own problems with technology.

    Here we take a look at how your organization can best go about leveraging the upside of DIY IT.

    What sends non-IT problem-solvers into the shadows

    The IT department is simply too busy, overworked, understaffed, and sometimes even too uninterested to take on every marketing Web application idea or mobile app initiative for field work that comes its way. There are too many strategic initiatives, mission-critical systems, and standards committee meetings, so folks outside IT are often left with little recourse but to invent their own solutions using whatever technical means and expertise they have or can find.

    How can this be a bad thing?

    1. They are sharing critical, private data with the wrong people somehow.
    2. Their data is fundamentally flawed, inaccurate, or out of date.
    3. Their data would be of use to many others, but they don't know it exists.
    4. Their ability to solve their own problems is a threat to IT.

    Because shadow IT practitioners are subject matter experts in their domain, the second drawback is unlikely. The third is an opportunity lost, but that's not scary enough to sweat. The first and fourth are the most likely to instill fear -- with good reason. If something goes wrong with a home-grown shadow IT solution, the IT department will likely be made responsible, even if you didn't know it existed.


    The wrong response to these fears is to try to eradicate shadow IT. To truly wipe it out, you would need access to all the network logs, corporate credit card reports, phone bills, ISP bills, and firewall logs, and it would take considerable effort to identify and block all unauthorized traffic in and out of the corporate network. You would have to rig the network to refuse connections from unsanctioned devices, as well as block access to websites and cloud services like Gmail, Dropbox, Salesforce, Google apps, Trello, and so on. Simply cataloguing everything you would have to block would be a job in itself.



    Worse, if you clamp down on DIY solutions you become an obstacle, and attempts to solve departmental problems will submerge even further into the shadows -- but it will never go away. The business needs underlying DIY IT are too important.

    The reality is, if you shift your strategy to embrace DIY solutions the right way, people would be able to safely solve their own problems without too much IT involvement and IT would be able to accomplish more for the projects where its expertise and oversight is truly critical.

    Embrace DIY IT

    Seek out shadow IT projects and help them, but above all respect the fact that this problem-solving technique exists. The folks who launch a DIY project are not your enemies; they are your co-workers, trying to solve their own problems, hampered by limited resources and understanding. The IT department may not have many more resources to spread around, but you have an abundance of technical know-how. Sharing that does not deplete it.

    You can find the trail of shadow IT by looking at network logs, scanning email traffic and attachments, and so forth. You must be willing to support these activities, whether or not you like them. They exist, and they likely have good reasons for existing. It doesn't matter that they weren't done with your permission or to your specifications. Assume that they are necessary and help them do it right.


    Take the lead -- and lead

    IT departments have the expertise to help others select the right technical solution for their needs. I'm not talking about RFPs, vendor/product evaluation meetings, software selection committees -- those are typically time-wasting, ivory-tower circuses that satisfy no one. I'm talking about helping colleagues figure out what it is they truly want and teaching them how to evaluate and select a solution that works for them -- and is compliant with a small set of minimal, relevant standards and policies.

    That expertise could be of enormous benefit to the rest of the company, if only it was shared. An approachable IT department that places a priority on helping people solve their own problems -- instead of expending enormous effort trying to prevent largely unlikely, possibly even imaginary problems -- is what you should be striving for.

    Think of it as being helpful without being intrusive. Sharing your expertise and taking the lead in helping non-IT departments help themselves not only shows consideration for your colleagues' needs, but it also helps solve real problems for real people -- while keeping the IT department informed about the technology choices made throughout the organization. Moreover, it sets up the IT department for success instead of surprises when the inevitable integration and data migration requests appear.

    Plus, it's a heck of a lot cheaper than reinventing the wheel unnecessarily.

    Create policies everyone can live with

    IT is responsible for critical policies concerning the use of devices, networks, access to information, and so on. It is imperative that IT have in place a sane set of policies to safeguard the company from loss, liability, leakage, incomplete/inaccurate data, and security threats both internal and external. But everyone else has to live with these policies, too. If they are too onerous or convoluted or byzantine, they will be ignored.

    Therefore, create policies that respect everyone's concerns and needs, not IT's alone. Here's the central question to ask yourself: Are you protecting the company or merely the status quo?

    Security is a legitimate concern, of course, but most SaaS vendors understand security at least as well as you do, if not better. Being involved in the DIY procurement process (without being a bottleneck or a dictator) lets you ensure that minimal security criteria are met.

    Data integrity is likewise a legitimate concern, but control of company data is largely an illusion. You can make data available or not, but you cannot control how it is used once accessed. Train and trust your people, and verify their activities. You should not and cannot make all decisions for them in advance.

    Regulatory compliance, auditing, traceability, and so on are legitimate concerns, but they do not trump the rights of workers to solve their own problems. All major companies in similar fields are subject to the same regulations as your company. How you choose to comply with those regulations is up to you. The way you've always done it is not the only way, and it's probably not even the best way. Here, investigating what the major players in your field do, especially if they are more modern, efficient, and "cloudy" than you, is a great start.

    The simplest way to manage compliance is to isolate the affected software from the rest of the system, since compliance is more about auditing and accountability than proscriptive processes. The major movers and shakers in the Internet space are all over new technologies, techniques, employee empowerment, and streamlining initiatives. Join them, or eat their dust.

    Champion DIY IT

    Once you have a sensible set of policies in place, it's high time to shine a light on shadow IT -- a celebratory spotlight, that is.

    By championing DIY IT projects, you send a clear message that co-workers have no need to hide how they go about solving their problems. Make your intentions friendly and clear up front: that you are intent on improving business operations, recognizing and rewarding innovators and risk-takers, finding and helping those who need assistance, and promoting good practices for DIY IT. A short memo/email announcing this from a trusted, well-regarded executive is highly recommended.

    Here are a few other ideas to help you embrace DIY IT:

    DIY IT can be a great benefit to your organization by relieving the load on the IT department and enabling more people to tap technical tools to be more productive in their work -- a win for everyone. But it can't happen without sane and balanced policies, active support from IT, and a companywide awareness that this sort of innovation and initiative is valued.

    [Feb 27, 2021] 3 solid self-review tips for sysadmins by Anthony Critelli

    The most solid tip is not to take this self-review seriously ;-)
    And contrary to Anthony Critelli's opinion, this is not about "selling yourself." It is about management control of the workforce. In other words, annual performance reviews are a mechanism for repression.
    Using corporate bullsh*t is probably the simplest and most advisable strategy during these exercises. I like the recommendation "Tie your accomplishments to business goals and values" below. Never be frank in such situations.
    Feb 25, 2021 | www.redhat.com

    ... you sell yourself by reminding your management team that you provide a great deal of objective value to the organization and that you deserve to be compensated accordingly. When I say compensation , I don't just mean salary. Compensation means different things to different people: Maybe you really want more pay, extra vacation time, a promotion, or even a lateral move. A well-written self-review can help you achieve these goals, assuming they are available at your current employer.

    ... ... ...

    Tie your accomplishments to business goals and values

    ...It's hard to argue that decreasing user downtime from days to hours isn't a valuable contribution.

    ... ... ...

    ... I select a skill, technology, or area of an environment that I am weak in, and I discuss how I would like to build my knowledge. I might discuss how I want to improve my understanding of Kubernetes as we begin to adopt a containerization strategy, or I might describe how my on-call effectiveness could be improved by deepening my knowledge of a particular legacy environment.

    ... ... ...

    Many of my friends and colleagues don't look forward to review season. They find it distracting and difficult to write a self-review. Often, they don't even know where to begin writing about their work from the previous year.

    [Feb 20, 2021] Improve your productivity with this Linux keyboard tool - Opensource.com

    Feb 20, 2021 | opensource.com

    AutoKey is an open source Linux desktop automation tool; once it's part of your workflow, you'll wonder how you ever managed without it. It can be a transformative tool for improving your productivity or simply a way to reduce the physical stress associated with typing.

    This article will look at how to install and start using AutoKey, cover some simple recipes you can immediately use in your workflow, and explore some of the advanced features that AutoKey power users may find attractive.

    Install and set up AutoKey

    AutoKey is available as a software package on many Linux distributions. The project's installation guide contains directions for many platforms, including building from source. This article uses Fedora as the operating platform.

    AutoKey comes in two variants: autokey-gtk, designed for GTK-based environments such as GNOME, and autokey-qt, which is Qt-based.

    You can install either variant from the command line:

    sudo dnf install autokey-gtk
    

    Once it's installed, run it by using autokey-gtk (or autokey-qt ).

    Explore the interface

    Before you set AutoKey to run in the background and automatically perform actions, you will first want to configure it. Bring up the configuration user interface (UI):

    autokey-gtk -c
    

    AutoKey comes preconfigured with some examples. You may wish to leave them while you're getting familiar with the UI, but you can delete them if you wish.

    [Screenshot: AutoKey's default configuration (Matt Bargenquast, CC BY-SA 4.0)]

    The left pane contains a folder-based hierarchy of phrases and scripts. Phrases are text that you want AutoKey to enter on your behalf. Scripts are dynamic, programmatic equivalents that can be written using Python and achieve basically the same result of making the keyboard send keystrokes to an active window.

    The right pane is where the phrases and scripts are built and configured.

    Once you're happy with your configuration, you'll probably want to run AutoKey automatically when you log in so that you don't have to start it up every time. You can configure this in the Preferences menu ( Edit -> Preferences ) by selecting Automatically start AutoKey at login .

    [Screenshot: starting AutoKey automatically at login (Matt Bargenquast, CC BY-SA 4.0)]

    Correct common typos with AutoKey

    Fixing common typos is an easy problem for AutoKey to fix. For example, I consistently type "gerp" instead of "grep." Here's how to configure AutoKey to fix these types of problems for you.

    Create a new subfolder where you can group all your "typo correction" configurations. Select My Phrases in the left pane, then File -> New -> Subfolder . Name the subfolder Typos .

    Create a new phrase in File -> New -> Phrase , and call it "grep."

    Configure AutoKey to insert the correct word by highlighting the phrase "grep" then entering "grep" in the Enter phrase contents section (replacing the default "Enter phrase contents" text).

    Next, set up how AutoKey triggers this phrase by defining an Abbreviation. Click the Set button next to Abbreviations at the bottom of the UI.

    In the dialog box that pops up, click the Add button and add "gerp" as a new abbreviation. Leave Remove typed abbreviation checked; this is what instructs AutoKey to replace any typed occurrence of the word "gerp" with "grep." Leave Trigger when typed as part of a word unchecked so that if you type a word containing "gerp" (such as "fingerprint"), it won't attempt to turn that into "fingreprint." It will work only when "gerp" is typed as an isolated word.

    [Feb 19, 2021] Installing only security updates via yum

    The simplest way is # yum -y update --security
    For RHEL 7, the plugin yum-plugin-security is already part of yum itself; no need to install anything.
    Jan 16, 2020 | access.redhat.com

    It is now possible to limit yum to installing only security updates (as opposed to bug fixes or enhancements) on Red Hat Enterprise Linux 5, 6, 7, and 8. Depending on the release, you may first need to install the yum-security plugin:

    For Red Hat Enterprise Linux 7 and 8

    The plugin is already a part of yum itself, no need to install anything.

    For Red Hat Enterprise Linux 5 and 6

    # yum install yum-security
    
    # yum list-sec
    
    # yum list-security --security
    

    For Red Hat Enterprise Linux 5, 6, 7 and 8

    # yum updateinfo info security
    
    # yum -y update --security
    

    NOTE: It will install the latest available version of any package with at least one security erratum, and can therefore pull in non-security errata if they provide a newer version of the package. To apply only the minimal changes needed, use update-minimal:

    # yum update-minimal --security -y
    
    # yum update --cve <CVE>
    

    e.g.

    # yum update --cve CVE-2008-0947
    

    11 September 2014 5:30 PM R. Hinton Community Leader

    For those seeking to discover which CVEs are addressed in a given RPM, try this method that Marc Milgram from Red Hat kindly provided at this discussion.

    1) First, download the specific rpm you are interested in.
    2) Use the command below:

    $ rpm -qp --changelog openssl-0.9.8e-27.el5_10.4.x86_64.rpm | grep CVE
    - fix CVE-2014-0221 - recursion in DTLS code leading to DoS
    - fix CVE-2014-3505 - doublefree in DTLS packet processing
    - fix CVE-2014-3506 - avoid memory exhaustion in DTLS
    - fix CVE-2014-3508 - fix OID handling to avoid information leak
    - fix CVE-2014-3510 - fix DoS in anonymous (EC)DH handling in DTLS
    - fix for CVE-2014-0224 - SSL/TLS MITM vulnerability
    - fix for CVE-2013-0169 - SSL/TLS CBC timing attack (#907589)
    - fix for CVE-2013-0166 - DoS in OCSP signatures checking (#908052)
      environment variable is set (fixes CVE-2012-4929 #857051)
    - fix for CVE-2012-2333 - improper checking for record length in DTLS (#820686)
    - fix for CVE-2012-2110 - memory corruption in asn1_d2i_read_bio() (#814185)
    - fix for CVE-2012-0884 - MMA weakness in CMS and PKCS#7 code (#802725)
    - fix for CVE-2012-1165 - NULL read dereference on bad MIME headers (#802489)
    - fix for CVE-2011-4108 & CVE-2012-0050 - DTLS plaintext recovery
    - fix for CVE-2011-4109 - double free in policy checks (#771771)
    - fix for CVE-2011-4576 - uninitialized SSL 3.0 padding (#771775)
    - fix for CVE-2011-4619 - SGC restart DoS attack (#771780)
    - fix CVE-2010-4180 - completely disable code for
    - fix CVE-2009-3245 - add missing bn_wexpand return checks (#570924)
    - fix CVE-2010-0433 - do not pass NULL princ to krb5_kt_get_entry which
    - fix CVE-2009-3555 - support the safe renegotiation extension and
    - fix CVE-2009-2409 - drop MD2 algorithm from EVP tables (#510197)
    - fix CVE-2009-4355 - do not leak memory when CRYPTO_cleanup_all_ex_data()
    - fix CVE-2009-1386 CVE-2009-1387 (DTLS DoS problems)
    - fix CVE-2009-1377 CVE-2009-1378 CVE-2009-1379
    - fix CVE-2009-0590 - reject incorrectly encoded ASN.1 strings (#492304)
    - fix CVE-2008-5077 - incorrect checks for malformed signatures (#476671)
    - fix CVE-2007-3108 - side channel attack on private keys (#250581)
    - fix CVE-2007-5135 - off-by-one in SSL_get_shared_ciphers (#309881)
    - fix CVE-2007-4995 - out of order DTLS fragments buffer overflow (#321221)
    - CVE-2006-2940 fix was incorrect (#208744)
    - fix CVE-2006-2937 - mishandled error on ASN.1 parsing (#207276)
    - fix CVE-2006-2940 - parasitic public keys DoS (#207274)
    - fix CVE-2006-3738 - buffer overflow in SSL_get_shared_ciphers (#206940)
    - fix CVE-2006-4343 - sslv2 client DoS (#206940)
    - fix CVE-2006-4339 - prevent attack on PKCS#1 v1.5 signatures (#205180)
    
    11 September 2014 5:34 PM R. Hinton Community Leader

    Additionally,

    If you are interested in seeing whether a given CVE, or a list of CVEs, is applicable, you can use this method:

    1) Get the list of applicable CVEs from Red Hat that you wish to check.
    - If you want to limit the search to a specific rpm such as "openssl", then at the above Red Hat link you can enter "openssl" to filter only openssl items, or filter against any other search term.
    - Place these into a file, one per line, such as this limited example:
    NOTE: The CVEs below come from limiting the CVEs to "openssl" in the manner I described above, and the list is not complete; there are plenty more for your date range.

    Raw
    CVE-2014-0016
    CVE-2014-0017
    CVE-2014-0036
    CVE-2014-0041
    ...
    

    2) Keep in mind the information in the article in this page, and run something like the following as root (a "for loop" will work in a bash shell):

    Raw
    [root@yoursystem]# for i in $(cat listofcves.txt); do yum update --cve $i; done
    

    And if the CVE applies, it will prompt you to take the update; if it does not apply, it will tell you.

    Alternatively, I put "echo n |" before the "yum update" to exit the yum command with "n" if it found a hit:

    Raw
    [root@yoursystem]# for i in $(cat listyoumade.txt); do echo n | yum update --cve $i; done
    

    Then redirect the output to another file to make your determinations.
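    The loop-and-redirect idea above can be wrapped in a small function so the output lands in a log file for review. This is only a sketch; the function and file names are illustrative:

```shell
# Hypothetical sketch: run "yum update --cve" for each CVE listed in a file,
# feeding "n" to yum so nothing is actually installed, and record yum's
# verdict for each CVE in a log file.
check_cves() {
    infile=$1
    logfile=$2
    : > "$logfile"                       # truncate any previous log
    while read -r cve; do
        printf '=== %s ===\n' "$cve" >> "$logfile"
        echo n | yum update --cve "$cve" >> "$logfile" 2>&1
    done < "$infile"
}
```

    Run it as, for example, check_cves listyoumade.txt cve-check.log, then grep the log for the CVEs yum considered applicable.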

    7 January 2015 9:54 AM f3792625

    'yum info-sec' actually lists all patches; you need to use 'yum info-sec --security' to see only the security ones.

    10 February 2016 1:00 PM Rackspace Customer

    How is the Severity information of RHSA updates populated?

    Specifically the article shows the following output:

    Raw
    # yum updateinfo list
    This system is receiving updates from RHN Classic or RHN Satellite.
    RHSA-2014:0159 Important/Sec. kernel-headers-2.6.32-431.5.1.el6.x86_64
    RHSA-2014:0164 Moderate/Sec.  mysql-5.1.73-3.el6_5.x86_64
    RHSA-2014:0164 Moderate/Sec.  mysql-devel-5.1.73-3.el6_5.x86_64
    RHSA-2014:0164 Moderate/Sec.  mysql-libs-5.1.73-3.el6_5.x86_64
    RHSA-2014:0164 Moderate/Sec.  mysql-server-5.1.73-3.el6_5.x86_64
    RHBA-2014:0158 bugfix         nss-sysinit-3.15.3-6.el6_5.x86_64
    RHBA-2014:0158 bugfix         nss-tools-3.15.3-6.el6_5.x86_64
    

    On all of my systems, the output seems to be missing the severity information:

    Raw
    # yum updateinfo list
    This system is receiving updates from RHN Classic or RHN Satellite.
    RHSA-2014:0159 security       kernel-headers-2.6.32-431.5.1.el6.x86_64
    RHSA-2014:0164 security       mysql-5.1.73-3.el6_5.x86_64
    RHSA-2014:0164 security       mysql-devel-5.1.73-3.el6_5.x86_64
    RHSA-2014:0164 security       mysql-libs-5.1.73-3.el6_5.x86_64
    RHSA-2014:0164 security       mysql-server-5.1.73-3.el6_5.x86_64
    RHBA-2014:0158 bugfix         nss-sysinit-3.15.3-6.el6_5.x86_64
    RHBA-2014:0158 bugfix         nss-tools-3.15.3-6.el6_5.x86_64
    

    I can't see how to configure it to transform "security" to "Severity/Sec."

    20 September 2016 8:27 AM Walid Shaari

    Same here. What I did was use info-sec with filters, like below:

    Raw

    test-node# yum info-sec|grep  'Critical:'
      Critical: glibc security and bug fix update
      Critical: samba and samba4 security, bug fix, and enhancement update
      Critical: samba security update
      Critical: samba security update
      Critical: nss and nspr security, bug fix, and enhancement update
      Critical: nss, nss-util, and nspr security update
      Critical: nss-util security update
      Critical: samba4 security update
    
    20 June 2017 1:49 PM b.scalio

    What's annoying is that "yum update --security" shows 20 packages to update for security, but when listing the installable errata in Satellite it shows 102 errata available, and yet those errata don't contain the security errata that yum reports.

    20 June 2017 2:05 PM Pavel Moravec

    You might be hitting https://bugzilla.redhat.com/show_bug.cgi?id=1408508 , where the generated metadata has an empty package list for some errata in some circumstances, causing yum to think such an erratum is not applicable (as no package would be updated by applying that erratum).

    I recommend finding one of the errata that the Satellite WebUI offers but yum isn't aware of, and (z)grep that errata id within the yum cache - if there is something like:

    Raw
    <pkglist>
      <collection short="">
        <name>rhel-7-server-rpms__7Server__x86_64</name>
      </collection>
    </pkglist>
    

    with no package in it, you hit that bug.

    14 August 2017 1:25 AM PixelDrift.NET Support Community Leader

    I've got an interesting requirement in that a customer wants to only allow updates of packages with attached security errata (to limit unnecessary drift/update of the OS platform), i.e. restrict, warn, or block the use of a generic 'yum update' by an admin, as it will update all packages.

    There are other approaches which I have currently implemented, including limiting what is made available to the servers through Satellite so yum update doesn't 'see' non-security errata.. but I guess what I'm really interested in is limiting (through client config) the inadvertent use of "yum update" by an administrator, or redirecting/mapping 'yum update' to 'yum update --security'. I appreciate an admin can work around any restriction, but it's really to limit accidental use of a full 'yum update' by well-intentioned admins.

    Current approaches are to alias yum, move yum and write a shim in its place (to warn/redirect if yum update is called), or patch the yum package itself (which i'd like to avoid). Any other suggestions appreciated.
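    One possible shape for such a shim, sketched here as a shell function rather than a replacement binary. The REAL_YUM path and the redirect policy are assumptions to adapt:

```shell
# Hypothetical shim sketch: intercept a bare "yum update" and redirect it to
# "yum update --security"; pass every other invocation straight through.
# REAL_YUM is an assumption; point it at the actual binary on your systems.
REAL_YUM=${REAL_YUM:-/usr/bin/yum}
yum() {
    if [ "$1" = "update" ] && [ $# -eq 1 ]; then
        echo "NOTE: redirecting plain 'yum update' to 'yum update --security'" >&2
        set -- update --security
    fi
    "$REAL_YUM" "$@"
}
```

    Dropped into a profile script, this catches casual interactive use while leaving scripts that call /usr/bin/yum directly untouched - which matches the "limit accidental use" goal rather than hard enforcement.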

    16 January 2018 5:00 PM DSI POMONA

    Why not create a specific content-view for security patch purposes?

    In that content-view, you create a filter that filters only security updates.

    In your patch management process, you can create a script that changes the content-view of a host (or host-group) on the fly, then applies security patches, and finally switches back to the original content-view (if you leave the admin the possibility to install additional programs if necessary).

    hope this helps

    12 March 2018 2:25 PM Rackspace Customer
    15 August 2019 12:12 AM IT Accounts NCVER

    Hi,

    Is it necessary to reboot system after applying security updates ?

    15 August 2019 1:17 AM Marcus West

    If it's a kernel update, you will have to. For other packages, it's recommended, to ensure that you are not still running the old libraries in memory. If you are just patching one particular independent service (i.e., httpd), you can probably get away without a full system reboot.

    More information can be found in the solution Which packages require a system reboot after the update? .

    [Feb 03, 2021] A new useful buzzword -- Hyper-converged infrastructure

    Feb 03, 2021 | en.wikipedia.org

    From Wikipedia, the free encyclopedia

    Hyper-converged infrastructure (HCI) is a software-defined IT infrastructure that virtualizes all of the elements of conventional "hardware-defined" systems. HCI includes, at a minimum, virtualized computing (a hypervisor), software-defined storage and virtualized networking (software-defined networking). HCI typically runs on commercial off-the-shelf (COTS) servers.

    The primary difference between converged infrastructure (CI) and hyper-converged infrastructure is that in HCI, both the storage area network and the underlying storage abstractions are implemented virtually in software (at or via the hypervisor) rather than physically, in hardware. Because all of the software-defined elements are implemented within the context of the hypervisor, management of all resources can be federated (shared) across all instances of a hyper-converged infrastructure.

    Expected benefits

    Hyperconvergence evolves away from discrete, hardware-defined systems that are connected and packaged together toward a purely software-defined environment where all functional elements run on commercial, off-the-shelf (COTS) servers, with the convergence of elements enabled by a hypervisor. [1] [2] HCI infrastructures are usually made up of server systems equipped with Direct-Attached Storage (DAS). [3] HCI includes the ability to plug and play into a data-center pool of like systems. [4] [5] All physical data-center resources reside on a single administrative platform for both hardware and software layers. [6] Consolidation of all functional elements at the hypervisor level, together with federated management, eliminates traditional data-center inefficiencies and reduces the total cost of ownership (TCO) for data centers. [7] [8] [9]

    Potential impact

    The potential impact of the hyper-converged infrastructure is that companies will no longer need to rely on different compute and storage systems, though it is still too early to prove that it can replace storage arrays in all market segments. [10] It is likely to further simplify management and increase resource-utilization rates where it does apply. [11] [12] [13]

    [Feb 02, 2021] A Guide to systemd journal clean up process

    Images removed. See the original for full version.
    Jan 29, 2021 | www.debugpoint.com

    ... ... ...

    The systemd journal Maintenance

    Using the journalctl utility of systemd, you can query these logs and perform various operations on them - for example, viewing the log files from different boots, or checking the last warnings and errors from a specific process or application. If you are unaware of these, I would suggest you quickly go through the tutorial "use journalctl to View and Analyze Systemd Logs [With Examples]" before you follow this guide.

    Where are the physical journal log files?

    systemd's journald daemon collects logs from every boot, and it classifies the log files per boot.

    The logs are stored in binary format in the path /var/log/journal, inside a folder named after the machine id.

    For example:

    Screenshot of physical journal file -1
    Screenshot of physical journal files -2

    Also, remember that, depending on system configuration, runtime journal files are stored at /run/log/journal/ , and these are removed at each boot.

    Can I manually delete the log files?

    You can, but don't do it. Instead, follow the below instructions to clear the log files and free up disk space using the journalctl utility.

    How much disk space is used by systemd log files?

    Open up a terminal and run the below command.

    journalctl --disk-usage

    This shows how much disk space is actually used by the log files on your system.

    If you have a graphical desktop environment, you can open the file manager and browse to the path /var/log/journal and check the properties.

    systemd journal clean process

    The effective way of clearing the log files is via the journald.conf configuration file. Ideally, you should not manually delete the log files, even though journalctl provides a utility to do that.

    Let's take a look at how you can delete logs manually; then I will explain the configuration changes in journald.conf so that you do not need to delete the files from time to time - instead, systemd takes care of it automatically based on your configuration.

    Manual delete

    First, you have to flush and rotate the log files. Rotating is a way of marking the currently active log files as archived and creating a fresh log file from that moment. The flush switch asks the journal daemon to flush any log data stored in /run/log/journal/ into /var/log/journal/ , if persistent storage is enabled.

    SEE ALSO: Manage Systemd Services Using systemctl [With Examples]

    Then, after flush and rotate, you need to run journalctl with vacuum-size , vacuum-time , and vacuum-files switches to force systemd to clear the logs.

    Example 1:

    sudo journalctl --flush --rotate
    sudo journalctl --vacuum-time=1s

    The above set of commands removes all archived journal log files older than one second, which effectively clears everything. So be careful while running the command.

    journal clean up example

    After clean up:

    After clean up journal space usage

    You can also append a time-unit suffix to the number as per your need (for example s, min, h, days, weeks, months, or years).

    Example 2:

    sudo journalctl --flush --rotate
    
    sudo journalctl --vacuum-size=400M
    

    This clears archived journal log files until only the last 400MB remain. Remember, this switch applies only to archived log files, not to active journal files. You can also use size suffixes such as K, M, and G.

    Example 3:

    sudo journalctl --flush --rotate
    sudo journalctl --vacuum-files=2

    The vacuum-files switch deletes archived journal files so that no more than the specified number remain. So, in the above example, only the last 2 journal files are kept and everything else is removed. Again, this only works on the archived files.

    You can combine the switches if you want, but I would recommend not to. However, make sure to run with the --rotate switch first.

    Automatic delete using config files

    While the above methods are good and easy to use, it is recommended that you control the journal log file cleanup process using the journald configuration file, which is present at /etc/systemd/journald.conf .

    The systemd provides many parameters for you to effectively manage the log files. By combining these parameters you can effectively limit the disk space used by the journal files. Let's take a look.

    journald.conf parameter Description Example
    SystemMaxUse Specifies the maximum disk space that can be used by the journal in persistent storage SystemMaxUse=500M
    SystemKeepFree Specifies the amount of space that the journal should leave free when adding journal entries to persistent storage. SystemKeepFree=100M
    SystemMaxFileSize Controls how large individual journal files can grow to in persistent storage before being rotated. SystemMaxFileSize=100M
    RuntimeMaxUse Specifies the maximum disk space that can be used in volatile storage (within the /run filesystem). RuntimeMaxUse=100M
    RuntimeKeepFree Specifies the amount of space to be set aside for other uses when writing data to volatile storage (within the /run filesystem). RuntimeKeepFree=100M
    RuntimeMaxFileSize Specifies the amount of space that an individual journal file can take up in volatile storage (within the /run filesystem) before being rotated. RuntimeMaxFileSize=200M
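    Putting a few of these parameters together, a minimal /etc/systemd/journald.conf might look like this (the values are examples, not recommendations):

```ini
[Journal]
SystemMaxUse=500M
SystemMaxFileSize=100M
SystemKeepFree=100M
RuntimeMaxUse=100M
```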

    If you add these values to /etc/systemd/journald.conf on a running system, then you have to restart journald after updating the file. To restart, use the following command.

    sudo systemctl restart systemd-journald

    Verification of log files

    It is wise to check the integrity of the log files after you clean up. To do that, run the below command; it shows PASS or FAIL for each journal file.

    journalctl --verify

    ... ... ...

    [Feb 02, 2021] 5 Most Notable Open Source Centralized Log Management Tools, by James Kiarie

    Feb 01, 2021 | www.tecmint.com

    ... ... ...

    1. Elastic Stack ( Elasticsearch Logstash & Kibana)

    Elastic Stack , commonly abbreviated as ELK , is a popular three-in-one log centralization, parsing, and visualization tool that centralizes large sets of data and logs from multiple servers into one server.

    ELK stack comprises 3 different products:

    Logstash

    Logstash is a free and open-source data pipeline that collects logs and events data and even processes and transforms the data to the desired output. Data is sent to logstash from remote servers using agents called ' beats '. The ' beats ' ship a huge volume of system metrics and logs to Logstash whereupon they are processed. It then feeds the data to Elasticsearch .

    Elasticsearch

    Built on Apache Lucene , Elasticsearch is an open-source and distributed search and analytics engine for nearly all types of data both structured and unstructured. This includes textual, numerical, and geospatial data.

    It was first released in 2010. Elasticsearch is the central component of the ELK stack and is renowned for its speed, scalability, and REST APIs. It stores, indexes, and analyzes huge volumes of data passed on from Logstash .

    Kibana

    Data is finally passed on to Kibana , which is a WebUI visualization platform that runs alongside Elasticsearch . Kibana allows you to explore and visualize time-series data and logs from elasticsearch. It visualizes data and logs on intuitive dashboards which take various forms such as bar graphs, pie charts, histograms, etc.

    Related Read : How To Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS/RHEL 8/7

    2. Graylog

    Graylog is yet another popular and powerful centralized log management tool that comes with both open-source and enterprise plans. It accepts data from clients installed on multiple nodes and, just like Kibana , visualizes the data on dashboards on a web interface.

    Graylog plays a monumental role in making business decisions touching on user interaction with a web application. It collects vital analytics on an app's behavior and visualizes the data on various graphs such as bar graphs, pie charts, and histograms, to mention a few. The data collected informs key business decisions.

    For example, you can determine peak hours when customers place orders using your web application. With such insights in hand, the management can make informed business decisions to scale up revenue.

    Unlike the Elastic Stack, Graylog offers a single-application solution for data collection, parsing, and visualization. It removes the need to install multiple components; in the ELK stack, you have to install the individual components separately. Graylog collects and stores data in MongoDB, which is then visualized on user-friendly and intuitive dashboards.

    Graylog is widely used by developers in different phases of app deployment in tracking the state of web applications and obtaining information such as request times, errors, etc. This helps them to modify the code and boost performance.

    3. Fluentd

    Written in a combination of C and Ruby, Fluentd is a cross-platform and open-source log monitoring tool that unifies log and data collection from multiple data sources. It's completely open source and licensed under the Apache 2.0 license. In addition, there's a subscription model for enterprise use.

    Fluentd processes both structured and semi-structured sets of data. It analyzes application logs, events logs, clickstreams and aims to be a unifying layer between log inputs and outputs of varying types.

    It structures data in a JSON format allowing it to seamlessly unify all facets of data logging including the collection, filtering, parsing, and outputting logs across multiple nodes.

    Fluentd comes with a small footprint and is resource-friendly, so you won't have to worry about running out of memory or your CPU being overutilized. Additionally, it boasts of a flexible plugin architecture where users can take advantage of over 500 community-developed plugins to extend its functionality.

    4. LOGalyze

    LOGalyze is a powerful network monitoring and log management tool that collects and parses logs from network devices, Linux, and Windows hosts. It was initially commercial but is now completely free to download and install without any limitations.

    LOGalyze is ideal for analyzing server and application logs and presents them in various report formats such as PDF, CSV, and HTML. It also provides extensive search capabilities and real-time event detection of services across multiple nodes.

    Like the aforementioned log monitoring tools, LOGalyze also provides a neat and simple web interface that allows users to log in and monitor various data sources and analyze log files .

    5. NXlog

    NXlog is yet another powerful and versatile tool for log collection and centralization. It's a multi-platform log management utility that is tailored to pick up policy breaches, identify security risks and analyze issues in system, application, and server logs.

    NXlog has the capability of collating event logs from numerous endpoints in varying formats, including Syslog and Windows event logs. It can perform a range of log-related tasks such as log rotation, log rewrites, and log compression, and can also be configured to send alerts.

    You can download NXlog in two editions: the community edition, which is free to download and use, and the enterprise edition, which is subscription-based.


    [Jan 27, 2021] Make Bash history more useful with these tips by Seth Kenlon

    Notable quotes:
    "... Manipulating history is usually less dangerous than it sounds, especially when you're curating it with a purpose in mind. For instance, if you're documenting a complex problem, it's often best to use your session history to record your commands because, by slotting them into your history, you're running them and thereby testing the process. Very often, documenting without doing leads to overlooking small steps or writing minor details wrong. ..."
    Jun 25, 2020 | opensource.com

    To block adding a command to the history entries, you can place a space before the command, as long as you have ignorespace in your HISTCONTROL environment variable:

    $ history | tail
    535 echo "foo"
    536 echo "bar"
    $ history -d 536
    $ history | tail
    535 echo "foo"
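    The ignorespace behavior mentioned above can also be demonstrated non-interactively. A sketch, assuming bash (the spaced command still runs - it just never reaches the history list):

```shell
# Demonstrate HISTCONTROL=ignorespace: the command typed with a leading
# space is executed but is not saved to the history list.
bash <<'EOF'
export HISTCONTROL=ignorespace
set -o history              # scripts have history disabled by default
echo "recorded"
 echo "hidden"
history
EOF
```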

    You can clear your entire session history with the -c option:

    $ history -c
    $ history
    $

    History lessons

    Manipulating history is usually less dangerous than it sounds, especially when you're curating it with a purpose in mind. For instance, if you're documenting a complex problem, it's often best to use your session history to record your commands because, by slotting them into your history, you're running them and thereby testing the process. Very often, documenting without doing leads to overlooking small steps or writing minor details wrong.

    Use your history sessions as needed, and exercise your power over history wisely. Happy history hacking!

    [Jan 03, 2021] 9 things to do in your first 10 minutes on a new to you server

    Jan 03, 2021 | opensource.com

    1. First contact

    As soon as I log into a server, the first thing I do is check whether it has the operating system, kernel, and hardware architecture needed for the tests I will be running. I often check how long a server has been up and running. While this does not matter very much for a test system because it will be rebooted multiple times, I still find this information helpful.

    Use the following commands to get this information. I mostly use Red Hat Linux for testing, so if you are using another Linux distro, use *-release in the filename instead of redhat-release :

    cat /etc/redhat-release
    uname -a
    hostnamectl
    uptime

    2. Is anyone else on board?

    Once I know that the machine meets my test needs, I need to ensure no one else is logged into the system at the same time running their own tests. Although it is highly unlikely, given that the provisioning system takes care of this for me, it's still good to check once in a while -- especially if it's my first time logging into a server. I also check whether there are other users (other than root) who can access the system.

    Use the following commands to find this information. The last command looks for users in the /etc/passwd file who have shell access; it skips other services in the file that do not have shell access or have a shell set to nologin :

    who
    who -Hu
    grep 'sh$' /etc/passwd

    3. Physical or virtual machine

    Now that I know I have the machine to myself, I need to identify whether it's a physical machine or a virtual machine (VM). If I provisioned the machine myself, I could be sure that I have what I asked for. However, if you are using a machine that you did not provision, you should check whether the machine is physical or virtual.

    Use the following commands to identify this information. If it's a physical system, you will see the vendor's name (e.g., HP, IBM, etc.) and the make and model of the server; whereas, in a virtual machine, you should see KVM, VirtualBox, etc., depending on what virtualization software was used to create the VM:

    dmidecode -s system-manufacturer
    dmidecode -s system-product-name
    lshw -c system | grep product | head -1
    cat /sys/class/dmi/id/product_name
    cat /sys/class/dmi/id/sys_vendor

    4. Hardware

    Because I often test hardware connected to the Linux machine, I usually work with physical servers, not VMs. On a physical machine, my next step is to identify the server's hardware capabilities -- for example, what kind of CPU is running, how many cores does it have, which flags are enabled, and how much memory is available for running tests. If I am running network tests, I check the type and capacity of the Ethernet or other network devices connected to the server.

    Use the following commands to display the hardware connected to a Linux server. Some of the commands might be deprecated in newer operating system versions, but you can still install them from yum repos or switch to their equivalent new commands:

    lscpu or cat /proc/cpuinfo
    lsmem or cat /proc/meminfo
    ifconfig -a
    ethtool <devname>
    lshw
    lspci
    dmidecode

    5. Installed software

    Testing software always requires installing additional dependent packages, libraries, etc. However, before I install anything, I check what is already installed (including what version it is), as well as which repos are configured, so I know where the software comes from, and I can debug any package installation issues.

    Use the following commands to identify what software is installed:

    rpm -qa
    rpm -qa | grep <pkgname>
    rpm -qi <pkgname>
    yum repolist
    yum repoinfo
    yum install <pkgname>
    ls -l /etc/yum.repos.d/

    6. Running processes and services

    Once I check the installed software, it's natural to check what processes are running on the system. This is crucial when running a performance test on a system -- if a running process, daemon, test software, etc. is eating up most of the CPU/RAM, it makes sense to stop that process before running the tests. This also checks that the processes or daemons the test requires are up and running. For example, if the tests require httpd to be running, the service to start the daemon might not have run even if the package is installed.

    Use the following commands to identify running processes and enabled services on your system:

    pstree -pa 1
    ps -ef
    ps auxf
    systemctl

    7. Network connections

    Today's machines are heavily networked, and they need to communicate with other machines or services on the network. I identify which ports are open on the server, if there are any connections from the network to the test machine, if a firewall is enabled, and if so, is it blocking any ports, and which DNS servers the machine talks to.

    Use the following commands to identify network services-related information. If a deprecated command is not available, install it from a yum repo or use the equivalent newer command:

    netstat -tulpn
    netstat -anp
    lsof -i
    ss
    iptables -L -n
    cat /etc/resolv.conf

    8. Kernel

    When doing systems testing, I find it helpful to know kernel-related information, such as the kernel version and which kernel modules are loaded. I also list any tunable kernel parameters and what they are set to and check the options used when booting the running kernel.

    Use the following commands to identify this information:

    uname -r
    cat /proc/cmdline
    lsmod
    modinfo <module>
    sysctl -a
    cat /boot/grub2/grub.cfg
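    A few of the checks above that rely only on universally available commands can be bundled into a quick helper; the function name is illustrative:

```shell
# Tiny "first contact" report using only commands present on any modern
# Linux system; add dmidecode/lshw/ethtool calls where they are installed.
first_contact() {
    echo "== kernel =="
    uname -r
    echo "== uptime (seconds) =="
    cut -d' ' -f1 /proc/uptime
    echo "== boot cmdline =="
    cat /proc/cmdline
    echo "== users with a shell =="
    grep 'sh$' /etc/passwd
}
```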

    [Jan 02, 2021] 10 shortcuts to master bash by Guest Contributor

    06, 2025 | TechRepublic

    If you've ever typed a command at the Linux shell prompt, you've probably already used bash -- after all, it's the default command shell on most modern GNU/Linux distributions.

    The bash shell is the primary interface to the Linux operating system -- it accepts, interprets and executes your commands, and provides you with the building blocks for shell scripting and automated task execution.

    Bash's unassuming exterior hides some very powerful tools and shortcuts. If you're a heavy user of the command line, these can save you a fair bit of typing. This document outlines 10 of the most useful tools:

    1. Easily recall previous commands

      Bash keeps track of the commands you execute in a history buffer, and allows you to recall previous commands by cycling through them with the Up and Down cursor keys. For even faster recall, "speed search" previously-executed commands: press Ctrl-R and then type the first few letters of the command; bash will scan the command history for matching commands and display the match on the console. Press Ctrl-R repeatedly to cycle through the entire list of matching commands.

    2. Use command aliases

      If you always run a command with the same set of options, you can have bash create an alias for it. This alias will incorporate the required options, so that you don't need to remember them or manually type them every time. For example, if you always run ls with the -l option to obtain a detailed directory listing, you can use this command:

      bash> alias ls='ls -l' 

      To create an alias that automatically includes the -l option. Once this alias has been created, typing ls at the bash prompt will invoke the alias and produce the ls -l output.

      You can obtain a list of available aliases by invoking alias without any arguments, and you can delete an alias with unalias.
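      The same mechanics work in a script, which is an easy way to experiment. A sketch, assuming bash (non-interactive shells need expand_aliases turned on; interactive shells have it on by default):

```shell
# Define, list, use, and remove an alias. Interactive shells expand aliases
# by default; scripts must enable expand_aliases first.
shopt -s expand_aliases
alias ll='ls -l'
alias             # with no arguments, lists all defined aliases
ll /              # runs "ls -l /" through the alias
unalias ll        # removes the alias again
```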

    3. Use filename auto-completion

      Bash supports filename auto-completion at the command prompt. To use this feature, type the first few letters of the file name, followed by Tab. Bash will scan the current directory (and, for the first word on a line, the directories in your search path) for matching names. If a single match is found, bash will automatically complete the filename for you; if multiple matches are found, bash completes the common prefix, and pressing Tab again lists the possibilities.

    4. Use key shortcuts to efficiently edit the command line

      Bash supports a number of keyboard shortcuts for command-line navigation and editing. The Ctrl-A key shortcut moves the cursor to the beginning of the command line, while the Ctrl-E shortcut moves the cursor to the end of the command line. The Ctrl-W shortcut deletes the word immediately before the cursor, while the Ctrl-K shortcut deletes everything immediately after the cursor. You can undo a deletion with Ctrl-Y.

    5. Get automatic notification of new mail

      You can configure bash to automatically notify you of new mail, by setting the $MAILPATH variable to point to your local mail spool. For example, the command:

      bash> MAILPATH='/var/spool/mail/john'
      bash> export MAILPATH 

      causes bash to print a notification on john's console every time a new message is appended to john's mail spool.
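      $MAILPATH can also hold a colon-separated list of files, and each entry may carry a custom notification message after a ? separator; within the message, $_ expands to the name of the mail file that changed. A small sketch (the spool paths are just examples):

```shell
# Watch two spools; the text after '?' is printed when new mail
# arrives in that file ($_ expands to the file's name).
MAILPATH='/var/spool/mail/john?New mail in $_:/var/spool/mail/shared'
export MAILPATH
```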

    6. Run tasks in the background

      Bash lets you run one or more tasks in the background, and selectively suspend or resume any of the current tasks (or "jobs"). To run a task in the background, add an ampersand (&) to the end of its command line. Here's an example:

      bash> tail -f /var/log/messages &
      [1] 614

      Each task backgrounded in this manner is assigned a job ID, which is printed to the console. A task can be brought back to the foreground with the command fg jobnumber, where jobnumber is the job ID of the task you wish to bring to the foreground. Here's an example:

      bash> fg 1

      A list of active jobs can be obtained at any time by typing jobs at the bash prompt.
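      The same flow can be shown non-interactively, using sleep as a stand-in for a long-running task:

```shell
# Start a background job; bash stores its process ID in $!.
sleep 2 &
bgpid=$!

# List current jobs (shows the backgrounded sleep).
jobs

# Block until the background job finishes.
wait "$bgpid"
echo "job $bgpid finished"
```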

    7. Quickly jump to frequently-used directories

      You probably already know that the $PATH variable lists bash's "search path" -- the directories it will search when it can't find the requested file in the current directory. However, bash also supports the $CDPATH variable, which lists the directories the cd command will look in when attempting to change directories. To use this feature, assign a directory list to the $CDPATH variable, as shown in the example below:

      bash> CDPATH='.:~:/usr/local/apache/htdocs:/disk1/backups'
      bash> export CDPATH

      Now, whenever you use the cd command, bash will check all the directories in the $CDPATH list for matches to the directory name.
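      A quick illustration, using hypothetical throwaway directories; note that when cd finds the target through a non-"." $CDPATH entry, it prints the directory it actually switched to:

```shell
# Create a scratch tree to demonstrate (paths are made up).
mkdir -p /tmp/cdpath-demo/projects/website
cd /tmp
CDPATH='.:/tmp/cdpath-demo/projects'
export CDPATH

# 'website' is not under /tmp, but cd finds it via CDPATH
# and prints the directory it switched to.
cd website
pwd    # /tmp/cdpath-demo/projects/website
```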

    8. Perform calculations

      Bash can perform simple integer arithmetic at the command prompt. To use this feature, wrap the expression you wish to evaluate in $(( )) and echo the result, as illustrated below. Bash will attempt to perform the calculation and return the answer.

      bash> echo $((16/2))
      8
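      Bear in mind that bash arithmetic is integer-only, so division truncates. The same $(( )) expansion also handles modulus, powers, and variables:

```shell
echo $((7 / 2))      # 3  (integer division truncates)
echo $((7 % 2))      # 1  (remainder)
echo $((2 ** 10))    # 1024
x=5
echo $((x * 3 + 1))  # 16 (variables need no $ inside the parentheses)
```
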

    9. Customise the shell prompt

      You can customise the bash shell prompt to display -- among other things -- the current username and host name, the current time, the load average and/or the current working directory. To do this, alter the $PS1 variable, as below:

      bash> PS1='\u@\h:\w \@> '
      
      bash> export PS1
      root@medusa:/tmp 03:01 PM>

      This will display the name of the currently logged-in user, the host name, the current working directory and the current time at the shell prompt. You can obtain a list of symbols understood by bash from its manual page.

    10. Get context-specific help

      Bash comes with help for all built-in commands. To see a list of all built-in commands, type help. To obtain help on a specific command, type help command, where command is the command you need help on. Here's an example:

      bash> help alias
      ...some help text...

      Obviously, you can obtain detailed help on the bash shell by typing man bash at your command prompt at any time.

    [Jan 02, 2021] How to convert from CentOS or Oracle Linux to RHEL

    convert2rhel is an RPM package which contains a Python 2.x script written in a completely incomprehensible, over-modularized manner. Python obscurantism in action ;-)
    Looks like a "black box" tool unless you know Python well. As such it is dangerous to rely upon.
    Jan 02, 2021 | access.redhat.com

    [Jan 02, 2021] Linux sysadmin basics- Start NIC at boot

    Nov 14, 2019 | www.redhat.com

    If you've ever booted a Red Hat-based system and have no network connectivity, you'll appreciate this quick fix.


    [Image: "Fast Ethernet PCI Network Interface Card SN5100TX.jpg" by Jana.Wiki, licensed under CC BY-SA 3.0]

    It might surprise you to know that if you forget to flip the network interface card (NIC) switch to the ON position (shown in the image below) during installation, your Red Hat-based system will boot with the NIC disconnected:

    [Image: Setting the NIC to the ON position during installation.]

    But don't worry: in this article I'll show you how to set the NIC to connect on every boot, and how to enable or disable it on demand.

    If your NIC isn't enabled at startup, you have to edit the /etc/sysconfig/network-scripts/ifcfg-NIC_name file, where NIC_name is your system's NIC device name. In my case, it's enp0s3. Yours might be eth0, eth1, em1, etc. List your network devices and their IP addresses with the ip addr command:

    $ ip addr
    
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
        link/ether 08:00:27:81:d0:2d brd ff:ff:ff:ff:ff:ff
    3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
        link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff
        inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
           valid_lft forever preferred_lft forever
    4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
        link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff
    

    Note that my primary NIC (enp0s3) has no assigned IP address. I have virtual NICs because my Red Hat Enterprise Linux 8 system is a VirtualBox virtual machine. After you've figured out what your physical NIC's name is, you can now edit its interface configuration file:

    $ sudo vi /etc/sysconfig/network-scripts/ifcfg-enp0s3
    

    and change the ONBOOT="no" entry to ONBOOT="yes" as shown below:

    TYPE="Ethernet"
    PROXY_METHOD="none"
    BROWSER_ONLY="no"
    BOOTPROTO="dhcp"
    DEFROUTE="yes"
    IPV4_FAILURE_FATAL="no"
    IPV6INIT="yes"
    IPV6_AUTOCONF="yes"
    IPV6_DEFROUTE="yes"
    IPV6_FAILURE_FATAL="no"
    IPV6_ADDR_GEN_MODE="stable-privacy"
    NAME="enp0s3"
    UUID="77cb083f-2ad3-42e2-9070-697cb24edf94"
    DEVICE="enp0s3"
    ONBOOT="yes"
    

    Save and exit the file.

    You don't need to reboot to start the NIC, but after you make this change, the primary NIC will be on and connected upon all subsequent boots.
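    The one-line edit can also be scripted. A minimal sketch, shown here against a sample file (the contents are abridged, and on a real system you would point sed at the file under /etc/sysconfig/network-scripts/ instead), assuming the ifcfg file uses the quoted ONBOOT="no" form as above:

```shell
# Create a sample ifcfg file to demonstrate (contents abridged).
cat > /tmp/ifcfg-enp0s3 <<'EOF'
NAME="enp0s3"
DEVICE="enp0s3"
BOOTPROTO="dhcp"
ONBOOT="no"
EOF

# Flip ONBOOT from "no" to "yes" in place.
sed -i 's/^ONBOOT="no"/ONBOOT="yes"/' /tmp/ifcfg-enp0s3

grep '^ONBOOT' /tmp/ifcfg-enp0s3    # ONBOOT="yes"
```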

    To enable the NIC, use the ifup command:

    ifup enp0s3
    
    Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)
    

    Now the ip addr command displays the enp0s3 device with an IP address:

    $ ip addr
    
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
        link/ether 08:00:27:81:d0:2d brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.64/24 brd 192.168.1.255 scope global dynamic noprefixroute enp0s3
           valid_lft 86266sec preferred_lft 86266sec
        inet6 2600:1702:a40:88b0:c30:ce7e:9319:9fe0/64 scope global dynamic noprefixroute 
           valid_lft 3467sec preferred_lft 3467sec
        inet6 fe80::9b21:3498:b83c:f3d4/64 scope link noprefixroute 
           valid_lft forever preferred_lft forever
    3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
        link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff
        inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
           valid_lft forever preferred_lft forever
    4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
        link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff
    

    To disable a NIC, use the ifdown command. Please note that issuing this command from a remote system will terminate your session:

    ifdown enp0s3
    
    Connection 'enp0s3' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)
    

    That's a wrap

    It's frustrating to encounter a Linux system that has no network connection. It's more frustrating to have to connect to a virtual KVM or to walk up to the console to fix it. It's easy to miss the switch during installation; I've missed it myself. Now you know how to fix the problem and have your system network-connected on every boot, so before you drive yourself crazy with troubleshooting steps, try the ifup command to see if that's your easy fix.

    Takeaways: ifup, ifdown, /etc/sysconfig/network-scripts/ifcfg-NIC_name

    [Jan 02, 2021] Looking forward to Linux network configuration in the initial ramdisk (initrd)

    Nov 24, 2020 | www.redhat.com
    The need for an initrd

    When you press a machine's power button, the boot process starts with a hardware-dependent mechanism that loads a bootloader. The bootloader software finds the kernel on the disk and boots it. Next, the kernel mounts the root filesystem and executes an init process.

    This process sounds simple, and it might be what actually happens on some Linux systems. However, modern Linux distributions have to support a vast set of use cases for which this procedure is not adequate.

    First, the root filesystem could be on a device that requires a specific driver. Before trying to mount the filesystem, the right kernel module must be inserted into the running kernel. In some cases, the root filesystem is on an encrypted partition and therefore needs a userspace helper that asks the user for the passphrase and feeds it to the kernel. Or, the root filesystem could be shared over the network via NFS or iSCSI, and mounting it may first require configuring IP addresses and routes on a network interface.


    To overcome these issues, the bootloader can pass to the kernel a small filesystem image (the initrd) that contains scripts and tools to find and mount the real root filesystem. Once this is done, the initrd switches to the real root, and the boot continues as usual.

    The dracut infrastructure

    On Fedora and RHEL, the initrd is built through dracut. From its home page, dracut is "an event-driven initramfs infrastructure. dracut (the tool) is used to create an initramfs image by copying tools and files from an installed system and combining it with the dracut framework, usually found in /usr/lib/dracut/modules.d."

    A note on terminology: Sometimes, the names initrd and initramfs are used interchangeably. They actually refer to different ways of building the image. An initrd is an image containing a real filesystem (for example, ext2) that gets mounted by the kernel. An initramfs is a cpio archive containing a directory tree that gets unpacked as a tmpfs. Nowadays, the initrd images are deprecated in favor of the initramfs scheme. However, the initrd name is still used to indicate the boot process involving a temporary filesystem.

    Kernel command-line

    Let's revisit the NFS-root scenario that was mentioned before. One possible way to boot via NFS is to use a kernel command-line containing the root=dhcp argument.

    The kernel command-line is a list of options passed to the kernel from the bootloader, accessible to the kernel and applications. If you use GRUB, it can be changed by pressing the e key on a boot entry and editing the line starting with linux.

    The dracut code inside the initramfs parses the kernel command-line and starts DHCP on all interfaces if the command-line contains root=dhcp. After obtaining a DHCP lease, dracut configures the interface with the parameters received (IP address and routes); it also extracts the value of the root-path DHCP option from the lease. The option carries an NFS server's address and path (which could be, for example, 192.168.50.1:/nfs/client). Dracut then mounts the NFS share at this location and proceeds with the boot.

    If there is no DHCP server providing the address and the NFS root path, the values can be configured explicitly in the command line:

    root=nfs:192.168.50.1:/nfs/client ip=192.168.50.101:::24::ens2:none
    

    Here, the first argument specifies the NFS server's address, and the second configures the ens2 interface with a static IP address.

    There are two syntaxes to specify network configuration for an interface:

    ip=<interface>:{dhcp|on|any|dhcp6|auto6}[:[<mtu>][:<macaddr>]]
    
    ip=<client-IP>:[<peer>]:<gateway-IP>:<netmask>:<client_hostname>:<interface>:{none|off|dhcp|on|any|dhcp6|auto6|ibft}[:[<mtu>][:<macaddr>]]
    

    The first can be used for automatic configuration (DHCP or IPv6 SLAAC), and the second for static configuration or a combination of automatic and static. Here are some examples:

    ip=enp1s0:dhcp
    ip=192.168.10.30::192.168.10.1:24::enp1s0:none
    ip=[2001:0db8::02]::[2001:0db8::01]:64::enp1s0:none
    

    Note that if you pass an ip= option, but dracut doesn't need networking to mount the root filesystem, the option is ignored. To force network configuration without a network root, add rd.neednet=1 to the command line.

    You probably noticed that among automatic configuration methods, there is also ibft . iBFT stands for iSCSI Boot Firmware Table and is a mechanism to pass parameters about iSCSI devices from the firmware to the operating system. iSCSI (Internet Small Computer Systems Interface) is a protocol to access network storage devices. Describing iBFT and iSCSI is outside the scope of this article. What is important is that by passing ip=ibft to the kernel, the network configuration is retrieved from the firmware.

    Dracut also supports adding custom routes, specifying the machine name and DNS servers, creating bonds, bridges, VLANs, and much more. See the dracut.cmdline man page for more details.

    Network modules

    The dracut framework included in the initramfs has a modular architecture. It comprises a series of modules, each containing scripts and binaries to provide specific functionality. You can see which modules are available to be included in the initramfs with the command dracut --list-modules.

    At the moment, there are two modules to configure the network: network-legacy and network-manager. You might wonder why different modules provide the same functionality.

    network-legacy is older and uses shell scripts calling utilities like iproute2, dhclient, and arping to configure interfaces. After the switch to the real root, a different network configuration service runs. This service is not aware of what the network-legacy module did or of the current state of each interface, which can lead to problems maintaining the state across the root-switch boundary.

    A prominent example of state that must be kept is the DHCP lease: if an interface's address changed during the boot, the connection to an NFS share would break, causing a boot failure.

    To ensure a seamless transition, there is a need for a mechanism to pass the state between the two environments. However, passing the state between services having different configuration models can be a problem.

    The network-manager dracut module was created to improve this situation. The module runs NetworkManager in the initrd to configure connection profiles generated from the kernel command-line. Once done, NetworkManager serializes its state, which is later read by the NetworkManager instance in the real root.

    Fedora 31 was the first distribution to switch to network-manager in initrd by default. On RHEL 8.2, network-legacy is still the default, but network-manager is available. On RHEL 8.3, dracut will use network-manager by default.

    Enabling a different network module

    While the two modules should be largely compatible, there are some differences in behavior. Some of those are documented in the nm-initrd-generator man page. In general, it is suggested to use the network-manager module when NetworkManager is enabled.

    To rebuild the initrd using a specific network module, use one of the following commands:

    # dracut --add network-legacy  --force --verbose
    # dracut --add network-manager --force --verbose
    

    Since this change will be reverted the next time the initrd is rebuilt, you may want to make the change permanent in the following way:

    # echo 'add_dracutmodules+=" network-manager "' > /etc/dracut.conf.d/network-module.conf
    # dracut --regenerate-all --force --verbose
    

    The --regenerate-all option also rebuilds all the initramfs images for the kernel versions found on the system.

    The network-manager dracut module

    As with all dracut modules, the network-manager module is split into stages that are called at different times during the boot (see the dracut.modules man page for more details).

    The first stage parses the kernel command-line by calling /usr/libexec/nm-initrd-generator to produce a list of connection profiles in /run/NetworkManager/system-connections. The second part of the module runs after udev has settled, i.e., after userspace has finished handling the kernel events for devices (including network interfaces) found in the system.

    When NM is started in the real root environment, it registers on D-Bus, configures the network, and remains active to react to events or D-Bus requests. In the initrd, NetworkManager is run in the configure-and-quit=initrd mode, which doesn't register on D-Bus (since it's not available in the initrd, at least for now) and exits after reaching the startup-complete event.

    The startup-complete event is triggered after all devices with a matching connection profile have tried to activate, successfully or not. Once all interfaces are configured, NM exits and calls dracut hooks to notify other modules that the network is available.

    Note that the /run/NetworkManager directory containing generated connection profiles and other runtime state is copied over to the real root so that the new NetworkManager process running there knows exactly what to do.

    Troubleshooting

    If you have network issues in dracut, this section contains some suggestions for investigating the problem.

    The first thing to do is add rd.debug to the kernel command-line, enabling debug logging in dracut. Logs are saved to /run/initramfs/rdsosreport.txt and are also available in the journal.

    If the system doesn't boot, it is useful to get a shell inside the initrd environment to manually check why things aren't working. For this, there is an rd.break command-line argument. Note that the argument spawns a shell when the initrd has finished its job and is about to give control to the init process in the real root filesystem. To stop at a different stage of dracut (for example, after command-line parsing), use the following argument:

    rd.break={cmdline|pre-udev|pre-trigger|initqueue|pre-mount|mount|pre-pivot|cleanup}
    

    The initrd image contains a minimal set of binaries; if you need a specific tool at the dracut shell, you can rebuild the image, adding what is missing. For example, to add the ping and tcpdump binaries (including all their dependent libraries), run:

    # dracut -f  --install "ping tcpdump"
    

    and then optionally verify that they were included successfully:

    # lsinitrd | grep "ping\|tcpdump"
    Arguments: -f --install 'ping tcpdump'
    -rwxr-xr-x   1 root     root        82960 May 18 10:26 usr/bin/ping
    lrwxrwxrwx   1 root     root           11 May 29 20:35 usr/sbin/ping -> ../bin/ping
    -rwxr-xr-x   1 root     root      1065224 May 29 20:35 usr/sbin/tcpdump
    
    The generator

    If you are familiar with NetworkManager configuration, you might want to know how a given kernel command-line is translated into NetworkManager connection profiles. This can be useful to better understand the configuration mechanism and find syntax errors in the command-line without having to boot the machine.

    The generator is installed in /usr/libexec/nm-initrd-generator and must be called with the list of kernel arguments after a double dash. The --stdout option prints the generated connections on standard output. Let's try to call the generator with a sample command line:

    $ /usr/libexec/nm-initrd-generator --stdout -- \
              ip=enp1s0:dhcp:00:99:88:77:66:55 rd.peerdns=0
    
    802-3-ethernet.cloned-mac-address: '99:88:77:66:55' is not a valid MAC
    address
    

    In this example, the generator reports an error because the MTU field after enp1s0 is missing. Once the error is corrected, the parsing succeeds and the tool prints out the connection profile generated:

    $ /usr/libexec/nm-initrd-generator --stdout -- \
            ip=enp1s0:dhcp::00:99:88:77:66:55 rd.peerdns=0
    
    *** Connection 'enp1s0' ***
    
    [connection]
    id=enp1s0
    uuid=e1fac965-4319-4354-8ed2-39f7f6931966
    type=ethernet
    interface-name=enp1s0
    multi-connect=1
    permissions=
    
    [ethernet]
    cloned-mac-address=00:99:88:77:66:55
    mac-address-blacklist=
    
    [ipv4]
    dns-search=
    ignore-auto-dns=true
    may-fail=false
    method=auto
    
    [ipv6]
    addr-gen-mode=eui64
    dns-search=
    ignore-auto-dns=true
    method=auto
    
    [proxy]
    

    Note how the rd.peerdns=0 argument translates into the ignore-auto-dns=true property, which makes NetworkManager ignore DNS servers received via DHCP. An explanation of NetworkManager properties can be found on the nm-settings man page.


    Conclusion

    The NetworkManager dracut module is enabled by default in Fedora and will also soon be enabled on RHEL. It brings better integration between networking in the initrd and NetworkManager running in the real root filesystem.

    While the current implementation is working well, there are some ideas for possible improvements. One is to abandon the configure-and-quit=initrd mode and run NetworkManager as a daemon started by a systemd service. In this way, NetworkManager will be run in the same way as when it's run in the real root, reducing the code to be maintained and tested.

    To completely drop the configure-and-quit=initrd mode, NetworkManager should also be able to register on D-Bus in the initrd. Currently, dracut doesn't have any module providing a D-Bus daemon because the image should be minimal. However, there are already proposals to include it as it is needed to implement some new features.

    With D-Bus running in the initrd, NetworkManager's powerful API will be available to other tools to query and change the network state, unlocking a wide range of applications. One of those is to run nm-cloud-setup in the initrd. The service, shipped in the NetworkManager-cloud-setup Fedora package, fetches metadata from cloud providers' infrastructure (EC2, Azure, GCP) to automatically configure the network.

    [Jan 02, 2021] 11 Linux command line guides you shouldn't be without - Enable Sysadmin

    Jan 02, 2021 | www.redhat.com

    Here are some brief comments about each topic:

    1. How to use the Linux mtr command - The mtr (My Traceroute) command is a major improvement over the old traceroute and is one of my first go-to tools when troubleshooting network problems.
    2. Linux for beginners: 10 commands to get you started at the terminal - Everyone who works on the Linux CLI needs to know some basic commands for moving around the directory structure and exploring files and directories. This article covers those commands in a simple way that places them into a usable context for those of us new to the command line.
    3. Linux for beginners: 10 more commands for manipulating files - One of the most common tasks we all do, whether as a Sysadmin or a regular user, is to manage and manipulate files.
    4. More stupid Bash tricks: Variables, find, file descriptors, and remote operations - These tricks are actually quite smart, and if you want to learn the basics of Bash along with standard IO streams (STDIO), this is a good place to start.
    5. Getting started with systemctl - Do you need to enable, disable, start, and stop systemd services? Learn the basics of systemctl – a powerful tool for managing systemd services and more.
    6. How to use the uniq command to process lists in Linux - Ever had a list in which items can appear multiple times where you only need to know which items appear in the list but not how many times?
    7. A beginner's guide to gawk - gawk is a command line tool that can be used for simple text processing in Bash and other scripts. It is also a powerful language in its own right.
    8. An introduction to the diff command - Sometimes it is important to know the difference.
    9. Looking forward to Linux network configuration in the initial ramdisk (initrd) - The initrd is a critical part of the very early boot process for Linux. Here is a look at what it is and how it works.
    10. Linux troubleshooting: Setting up a TCP listener with ncat - Network troubleshooting sometimes requires tracking specific network packets based on complex filter criteria or just determining whether a connection can be made.
    11. Hard links and soft links in Linux explained - The use cases for hard and soft links can overlap but it is how they differ that makes them both important – and cool.

    [Jan 02, 2021] Reference file descriptors

    Jan 02, 2021 | www.redhat.com

    In the Bash shell, file descriptors (FDs) are important in managing the input and output of commands, and they are a frequent source of confusion. Each process has three default file descriptors, namely:

    Code  Meaning          Location      Description
    0     Standard input   /dev/stdin    Keyboard, file, or some stream
    1     Standard output  /dev/stdout   Monitor, terminal, display
    2     Standard error   /dev/stderr   Non-zero exit codes are usually >FD2, display

    Now that you know what the default FDs do, let's see them in action. I start by creating a directory named foo, which contains file1.

    $> ls foo/ bar/
    ls: cannot access 'bar/': No such file or directory
    foo/:
    file1
    

    The output No such file or directory goes to Standard Error (stderr) and is therefore displayed on the screen. I will run the same command, but this time redirect stderr to /dev/null with 2> so the error is discarded:

    $> ls foo/ bar/ 2>/dev/null
    foo/:
    file1
    

    It is possible to send the output of the ls command to Standard Output (stdout) and to a file simultaneously, while ignoring stderr. For example:

    $> { ls foo bar | tee -a ls_out_file ;} 2>/dev/null
    foo:
    file1
    

    Then:

    $> cat ls_out_file
    foo:
    file1
    

    The following command sends stdout to a file and stderr to /dev/null so that the error won't display on the screen:

    $> ls foo/ bar/ >to_stdout 2>/dev/null
    $> cat to_stdout
    foo/:
    file1
    

    The following command sends stdout and stderr to the same file:

    $> ls foo/ bar/ >mixed_output 2>&1
    $> cat mixed_output
    ls: cannot access 'bar/': No such file or directory
    foo/:
    file1
    

    This is what happened in the last example, where stdout and stderr were redirected to the same file:

        ls foo/ bar/ >mixed_output 2>&1
                 |          |
                 |          Redirect stderr to where stdout is sent
                 |                                                        
                 stdout is sent to mixed_output
    

    Another short trick (Bash 4 and later) to send both stdout and stderr to the same file uses the ampersand sign. For example:

    $> ls foo/ bar/ &>mixed_output
    

    Here is a more complex redirection:

    exec 3>&1 >write_to_file; echo "Hello World"; exec 1>&3 3>&-
    

    This is what occurs: exec 3>&1 duplicates the current stdout (the terminal) onto FD 3; >write_to_file then redirects stdout to the file, so the output of echo lands in write_to_file; finally, exec 1>&3 restores stdout from the saved copy, and 3>&- closes the now-unneeded FD 3.

    Often it is handy to group commands and then apply a single redirection to the whole group. For example:

    $> { ls non_existing_dir; non_existing_command; echo "Hello world"; } 2> to_stderr
    Hello world
    

    As you can see, only "Hello world" is printed on the screen, but the output of the failed commands is written to the to_stderr file.
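    The same grouping works for both streams at once. A self-contained sketch (the file names are arbitrary):

```shell
# Send stdout and stderr of a whole command group to separate files.
# ('|| true' keeps the script going even though ls fails.)
{ echo "to stdout"; ls /no_such_dir; } >/tmp/out.log 2>/tmp/err.log || true

cat /tmp/out.log                     # to stdout
grep -c 'no_such_dir' /tmp/err.log   # 1 (the ls error line)
```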

    [Jan 01, 2021] Looks like potentially Oracle can pickup up to 65% of CentOS users

    Jan 01, 2021 | forums.centos.org

    What do you think of the recent Red Hat announcement about CentOS Linux/Stream?

    I can use either CentOS Linux or Stream and it makes no difference to me: 6 votes (11%)
    I will switch reluctantly to CentOS Stream but I'd rather not: 2 votes (4%)
    I depend on CentOS Linux 8 and its stability and now I need a new alternative: 10 votes (19%)
    I love the idea of CentOS Stream and can't wait to use it: 1 vote (2%)
    I'm off to a different distribution before CentOS 8 sunsets at the end of 2021: 13 votes (24%)
    I feel completely betrayed by this decision and will avoid Red Hat solutions in future: 22 votes (41%)

    Total votes: 54

    [Jan 01, 2021] Oracle Linux DTrace

    Jan 01, 2021 | www.oracle.com

    ... DTrace gives the operational insights that have long been missing in the data center, such as memory consumption, CPU time or what specific function calls are being made.

    Developers can learn about and experiment with DTrace on Oracle Linux by installing the appropriate RPMs:

    [Jan 01, 2021] Oracle Linux vs. Red Hat Enterprise Linux by Jim Brull

    Jan 05, 2019 | www.centroid.com

    ... ... ...

    Here's what we found.

    [Jan 01, 2021] Consider looking at openSUSE (still run out of Germany)

    Jan 01, 2021 | www.reddit.com

    If you are on CentOS-7 then you will probably be okay until RedHat pulls the plug on 2024-06-30, so don't do anything rash. If you are on CentOS-8 then your days are numbered (to ~ 365) because this OS will shift from major-minor point updates to a streaming model at the end of 2021. Let's look at two early founders: SUSE started in Germany in 1991 whilst RedHat started in America a year later. SUSE sells support for SLE (SUSE Linux Enterprise), which means you need a license to install-run-update-upgrade it. Likewise RedHat sells support for RHEL (Red Hat Enterprise Linux). SUSE also offers "openSUSE Leap" (released once a year as a major-minor point release of SLE) and "openSUSE Tumbleweed" (which is a streaming thingy). A couple of days ago I installed "openSUSE Leap" onto an old HP-Compaq 6000 desktop just to try it out (the installer actually had a few features I liked better than the CentOS-7 installer). When I get back to the office in two weeks, I'm going to try installing "openSUSE Leap" onto an HP-DL385p_gen8. I'll work with this for a few months, and if I am comfortable, I will migrate my employer's solution over to "openSUSE Leap".

    Parting thoughts:

    1. openSUSE is run out of Germany. IMHO switching over to a European distro is similar to those database people who preferred MariaDB to MySQL when Oracle was still hoping that MySQL would die from neglect.

    2. Someone cracked off to me the other day that now that IBM is pulling strings at "Red Hat", that the company should be renamed "Blue Hat"


    general-noob 4 points · 3 days ago

    I downloaded and tried it last week and was actually pretty impressed. I have only ever tested SUSE in the past. Honestly, I'll stick with Red Hat/CentOS whatever, but I was still impressed. I'd recommend people take a look.

    servingwater 2 points · 3 days ago

    I have been playing with OpenSUSE a bit, too. Very solid this time around. In the past I never had any luck with it. But Leap 15.2 is doing fine for me. Just testing it virtually. TW also is pretty sweet and if I were to use a rolling release, it would be among the top contenders.

    One thing I don't like with OpenSUSE is that you can't really, or are not supposed to I guess, disable the root account. You can't do it at install: if you leave the root password blank, SUSE will just assign the password of the user you created to it.
    Of course afterwards you can disable it with the proper commands, but it becomes a pain with YaST, as it seems YaST insists on being run as root.

    neilrieck 2 points · 2 days ago

    Thanks for that "heads up" about root

    gdhhorn 1 point · 2 days ago

    One thing I don't like with OpenSUSE is that you can't really, or are not supposed to I guess, disable the root account. You can't do it at install: if you leave the root password blank, SUSE will just assign the password of the user you created to it.

    I'm running Leap 15.2 on the laptops my kids run for school. During installation, I simply deselected the option for the account used to be an administrator; this required me to set a different password for administrative purposes.

    Perhaps I'm misunderstanding your comment.

    servingwater 1 point · 2 days ago

    I think you might.
    My point is/was that if I select my regular user to be admin, I don't expect the system to create and activate a root account anyway and then just assign it my password.
    I expect the root account to be disabled.

    gdhhorn 2 points · 2 days ago

    I didn't realize it made a user, 'root,' and auto generated a password. I'd always assumed if I said to make the user account admin, that was it.

    TIL, thanks.

    servingwater 1 point · 2 days ago

    I was surprised, too. I was a bit "shocked" when I realized, after the install, that I could log in as root with my user password.
    At the very least, IMHO, it should still have you set the root password, even if you choose to make your user admin.
    For one, that lets you know that OpenSUSE is not disabling root, and for two, it gives you a chance to give it a different password.
    But other than that subjective issue I found OpenSUSE Leap a very solid distro.

    [Jan 01, 2021] What about the big academic labs? (Fermilab, CERN, DESY, etc)

    Jan 01, 2021 | www.reddit.com

    The big academic labs (Fermilab, CERN and DESY, to name only three of many) used to run something called Scientific Linux, which was also maintained by Red Hat. See: https://scientificlinux.org/ and https://en.wikipedia.org/wiki/Scientific_Linux Shortly after Red Hat acquired CentOS in 2014, Red Hat convinced the big academic labs to begin migrating over to CentOS (no one at that time thought that Red Hat would become Blue Hat).

    phil_g 14 points · 2 days ago

    To clarify, as a user of Scientific Linux:

    Scientific Linux is not and was not maintained by Red Hat. Like CentOS, when it was truly a community distribution, Scientific Linux was an independent rebuild of the RHEL source code published by Red Hat. It is maintained primarily by people at Fermilab. (It's slightly different from CentOS in that CentOS aimed for binary compatibility with RHEL, while that is not a goal of Scientific Linux. In practice, SL often achieves binary compatibility, but if you have issues with that, it's more up to you to fix them than the SL maintainers.)

    I don't know anything about Red Hat convincing institutions to stop using Scientific Linux; the first I heard about the topic was in April 2019 when Fermilab announced there would be no Scientific Linux 8. (They may reverse that decision. At the moment, they're "investigating the best path forward", with a decision to be announced in the first few months of 2021.)

    neilrieck 4 points · 2 days ago

    I fear you are correct. I just stumbled onto this article: https://www.linux.com/training-tutorials/scientific-linux-great-distro-wrong-name/ Even the Wikipedia article states "This product is derived from the free and open-source software made available by Red Hat, but is not produced, maintained or supported by them." But it does seem that Scientific Linux was created as a replacement for Fermilab Linux. I've also seen references to CC7 meaning "CERN CentOS 7". CERN is keeping their Linux page up to date, because what I am seeing there ( https://linux.web.cern.ch/ ) today is not what I saw two weeks ago.

    There are

    Niarbeht 16 points · 2 days ago

    There are

    Uh oh, guys, they got him!

    deja_geek 9 points · 2 days ago

    RedHat didn't convince them to stop using Scientific Linux, Fermilab no longer needed to have their own rebuild of RHEL sources. They switched to CentOS and modified CentOS if they needed to (though I don't really think they needed to)

    meat_bunny 10 points · 2 days ago

    Maintaining your own distro is a pain in the ass.

    My crystal ball says they'll just use whatever RHEL rebuild floats to the top in a few months like the rest of us.

    carlwgeorge 2 points · 2 days ago

    SL has always been an independent rebuild. It has never been maintained, sponsored, or owned by Red Hat. They decided on their own to not build 8 and instead collaborate on CentOS. They even gained representation on the CentOS board (one from Fermi, one from CERN).

    I'm not affiliated with any of those organizations, but my guess is they will switch to some combination of CentOS Stream and RHEL (under the upcoming no/low cost program).

    VestoMSlipher 1 point · 11 hours ago

    https://linux.web.cern.ch/#information-on-change-of-end-of-life-for-centos-8

    [Jan 01, 2021] CentOS HAS BEEN CANCELLED !!!

    Jan 01, 2021 | forums.centos.org

    Re: CentOS HAS BEEN CANCELLED !!!

    Post by whoop » 2020/12/08 20:00:36

    Is anybody considering switching to RHEL's free non-production developer subscription? As I understand it, it is free and receives updates.
    The only downside as I understand it is that you have to renew your license every year (and that you can't use it in commercial production).
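For anyone weighing that option, the switch itself is done with standard RHEL tooling. A minimal sketch (the credentials are placeholders, and the exact developer-program terms should be checked on Red Hat's site):

```shell
# Sketch: registering a box under a RHEL subscription with subscription-manager.
# Credentials are placeholders; run as root. Guarded so it is a no-op where
# the tool is absent.
if command -v subscription-manager >/dev/null 2>&1; then
    subscription-manager register --username you@example.com --password 'changeme' \
        || echo "registration failed (placeholder credentials)"
    subscription-manager attach --auto || echo "attach failed"
else
    echo "subscription-manager not installed; skipping"
fi
```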

    [Jan 01, 2021] package management - yum distro-sync

    Jan 01, 2021 | askubuntu.com

    In Red Hat-based distros, the yum tool has a distro-sync command which will synchronize packages to the current repositories. This command is useful for returning to a base state if base packages have been modified from an outside source. The docs for the command read:

    distribution-synchronization or distro-sync Synchronizes the installed package set with the latest packages available, this is done by either obsoleting, upgrading or downgrading as appropriate. This will "normally" do the same thing as the upgrade command however if you have the package FOO installed at version 4, and the latest available is only version 3, then this command will downgrade FOO to version 3.
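A minimal usage sketch (Red Hat family only; `--assumeno` makes yum report what it would change without applying anything):

```shell
# Preview, then apply, a distro-sync. Guarded so it is a no-op on non-yum systems.
if command -v yum >/dev/null 2>&1; then
    yum --assumeno distro-sync || true   # dry run: answers "no" to the transaction
    # yum -y distro-sync                 # uncomment to actually sync (may downgrade packages)
else
    echo "yum not available; nothing to do"
fi
```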

    [Dec 30, 2020] Switching from CentOS to Oracle Linux: a hands-on example

    In view of such effective and free promotion of Oracle Linux by IBM/Red Hat brass as the top replacement for CentOS, the script can probably be slightly enhanced.
    The script works well for simple systems, but it still has some sharp edges. Checks for common bottlenecks should be added. For example, free space in /boot should be checked if it is a separate filesystem; this is not done. Also, if the script is invoked a second time after a failure of the step "Installing base packages for Oracle Linux...", it can remove hundreds of system RPMs (including sshd, cron, and several other vital packages ;-).
    And failures on this step are probably the most common type of failure in conversion. Inexperienced sysadmins, or even experienced sysadmins in a hurry, often make this blunder by running the script a second time.
    It probably happens due to the presence of the line 'yum remove -y "${new_releases[@]}"' in the function remove_repos (line 65 in the current version of the script): in their excessive zeal to restore the system after an error, the programmers did not take into account that in certain situations the packages they want to delete via yum have dependencies, and a lot of them. Yum blindly deletes over 300 packages, including such vital ones as sshd and cron. For this reason, execution of the script probably should be blocked if Oracle repositories are already present. This check is absent.
    After this "mass extinction of RPM packages" event, you need to be pretty well versed in yum to recover. The names of the deleted packages are in the yum log, so you can reinstall them, and sometimes this helps. In other cases the system remains unbootable, and restoring from backup is the only option.
    Due to the sudden surge in popularity of Oracle Linux caused by the Red Hat CentOS 8 fiasco, the script definitely can benefit from better diagnostics. The current diagnostics are very rudimentary. It might also make sense to make the steps modular in the classic /etc/init.d fashion and make the initial steps skippable, so that the script can be resumed after an error. Most of the steps have few dependencies, which can be resolved by saving variables during the first run and sourcing them if the first step is not step 1.
    Also, it makes sense to check the amount of free space in the /boot filesystem if /boot is a separate filesystem. The script requires approximately 100MB of free space there. Failure to write a new kernel due to lack of space leads to a "half-baked" installation, which is difficult to recover from without senior sysadmin skills.
    See additional considerations about how to enhance the script at http://www.softpanorama.org/Commercial_linuxes/Oracle_linux/conversion_of_centos_to_oracle_linux.shtml
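The free-space caveat above is easy to check before invoking the script. A sketch (the ~100 MB threshold comes from the note above; pass /boot as the argument on a system where it is a separate filesystem -- it defaults to / so the sketch runs anywhere):

```shell
# Pre-flight free-space check before running centos2ol.sh.
# Usage: sh check_boot_space.sh /boot   (defaults to / for portability)
MOUNT="${1:-/}"
REQUIRED_KB=102400   # ~100 MB, per the note above
AVAIL_KB=$(df -Pk "$MOUNT" | awk 'NR==2 {print $4}')
if [ "$AVAIL_KB" -lt "$REQUIRED_KB" ]; then
    echo "ERROR: only ${AVAIL_KB} KB free on ${MOUNT}; need ${REQUIRED_KB} KB" >&2
else
    echo "OK: ${AVAIL_KB} KB free on ${MOUNT}"
fi
```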
    Dec 15, 2020 Simon Coter Blog

    ... ... ...

    We published a blog post earlier this week that explains why, but here is the TL;DR version:

    For these reasons, we created a simple script to allow users to switch from CentOS to Oracle Linux about five years ago. This week, we moved the script to GitHub to allow members of the CentOS community to help us improve and extend the script to cover more CentOS respins and use cases.

    The script can switch CentOS Linux 6, 7 or 8 to the equivalent version of Oracle Linux. Let's take a look at just how simple the process is.

    Download the centos2ol.sh script from GitHub

    The simplest way to get the script is to use curl :

    $ curl -O https://raw.githubusercontent.com/oracle/centos2ol/main/centos2ol.sh
    % Total % Received % Xferd Average Speed Time Time Time Current
    Dload Upload Total Spent Left Speed
    100 10747 100 10747 0 0 31241 0 --:--:-- --:--:-- --:--:-- 31241
    

    If you have git installed, you could clone the git repository from GitHub instead.

    Run the centos2ol.sh script to switch to Oracle Linux

    To switch to Oracle Linux, just run the script as root using sudo :

    $ sudo bash centos2ol.sh
    

    Sample output of a script run.

    As part of the process, the default kernel is switched to the latest release of Oracle's Unbreakable Enterprise Kernel (UEK) to enable extensive performance and scalability improvements to the process scheduler, memory management, file systems, and the networking stack. We also replace the existing CentOS kernel with the equivalent Red Hat Compatible Kernel (RHCK), which may be required by any specific hardware or application that imposes strict kernel version restrictions.

    Switching the default kernel (optional)

    Once the switch is complete, but before rebooting, the default kernel can be changed back to the RHCK. First, use grubby to list all installed kernels:

    [demo@c8switch ~]$ sudo grubby --info=ALL | grep ^kernel
    [sudo] password for demo:
    kernel="/boot/vmlinuz-5.4.17-2036.101.2.el8uek.x86_64"
    kernel="/boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64"
    kernel="/boot/vmlinuz-4.18.0-193.el8.x86_64"
    kernel="/boot/vmlinuz-0-rescue-0dbb9b2f3c2744779c72a28071755366"
    

    In the output above, the first entry (index 0) is UEK R6, based on the mainline kernel version 5.4. The second kernel is the updated RHCK (Red Hat Compatible Kernel) installed by the switch process, the third is the kernel that was installed by CentOS, and the final entry is the rescue kernel.

    Next, use grubby to verify that UEK is currently the default boot option:

    [demo@c8switch ~]$ sudo grubby --default-kernel
    /boot/vmlinuz-5.4.17-2036.101.2.el8uek.x86_64
    

    To replace the default kernel, you need to specify either the path to its vmlinuz file or its index. Use grubby to get that information for the replacement:

    [demo@c8switch ~]$ sudo grubby --info /boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64
    index=1
    kernel="/boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64"
    args="ro crashkernel=auto resume=/dev/mapper/cl-swap rd.lvm.lv=cl/root rd.lvm.lv=cl/swap rhgb quiet $tuned_params"
    root="/dev/mapper/cl-root"
    initrd="/boot/initramfs-4.18.0-240.1.1.el8_3.x86_64.img $tuned_initrd"
    title="Oracle Linux Server (4.18.0-240.1.1.el8_3.x86_64) 8.3"
    id="0dbb9b2f3c2744779c72a28071755366-4.18.0-240.1.1.el8_3.x86_64"
    

    Finally, use grubby to change the default kernel, either by providing the vmlinuz path:

    [demo@c8switch ~]$ sudo grubby --set-default /boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64
    The default is /boot/loader/entries/0dbb9b2f3c2744779c72a28071755366-4.18.0-240.1.1.el8_3.x86_64.conf with index 1 and kernel /boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64
    

    Or its index:

    [demo@c8switch ~]$ sudo grubby --set-default-index 1
    The default is /boot/loader/entries/0dbb9b2f3c2744779c72a28071755366-4.18.0-240.1.1.el8_3.x86_64.conf with index 1 and kernel /boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64
    

    Changing the default kernel can be done at any time, so we encourage you to take UEK for a spin before switching back.

    It's easy to access, so try it out.

    For more information visit oracle.com/linux .

    [Dec 30, 2020] Lazy Linux: 10 essential tricks for admins by Vallard Benincosa

    The original link to this article by Vallard Benincosa, published on 20 Jul 2008 in IBM developerWorks, disappeared due to yet another reorganization of the IBM website that killed old content. Money-greedy incompetents are what the current upper IBM managers really are...
    Jul 20, 2008 | benincosa.com

    How to be a more productive Linux systems administrator

    Learn these 10 tricks and you'll be the most powerful Linux® systems administrator in the universe...well, maybe not the universe, but you will need these tips to play in the big leagues. Learn about SSH tunnels, VNC, password recovery, console spying, and more. Examples accompany each trick, so you can duplicate them on your own systems.

    The best systems administrators are set apart by their efficiency. And if an efficient systems administrator can do a task in 10 minutes that would take another mortal two hours to complete, then the efficient systems administrator should be rewarded (paid more) because the company is saving time, and time is money, right?

    The trick is to prove your efficiency to management. While I won't attempt to cover that trick in this article, I will give you 10 essential gems from the lazy admin's bag of tricks. These tips will save you time -- and even if you don't get paid more money to be more efficient, you'll at least have more time to play Halo.

    Trick 1: Unmounting the unresponsive DVD drive

    The newbie states that when he pushes the Eject button on the DVD drive of a server running a certain Redmond-based operating system, it will eject immediately. He then complains that, in most enterprise Linux servers, if a process is running in that directory, then the ejection won't happen. For too long as a Linux administrator, I would reboot the machine and get my disk on the bounce if I couldn't figure out what was running and why it wouldn't release the DVD drive. But this is ineffective.

    Here's how you find the process that holds your DVD drive and eject it to your heart's content: First, simulate it. Stick a disk in your DVD drive, open up a terminal, and mount the DVD drive:

    # mount /media/cdrom
    # cd /media/cdrom
    # while [ 1 ]; do echo "All your drives are belong to us!"; sleep 30; done

    Now open up a second terminal and try to eject the DVD drive:

    # eject

    You'll get a message like:

    umount: /media/cdrom: device is busy

    Before you free it, let's find out who is using it.

    # fuser /media/cdrom

    You see the process that is running and, indeed, it is our fault we cannot eject the disk.

    Now, if you are root, you can exercise your godlike powers and kill processes:

    # fuser -k /media/cdrom

    Boom! Just like that, freedom. Now solemnly unmount the drive:

    # eject

    fuser is good.

    Trick 2: Getting your screen back when it's hosed

    Try this:

    # cat /bin/cat

    Behold! Your terminal looks like garbage. Everything you type looks like you're looking into the Matrix. What do you do?

    You type reset . But wait you say, typing reset is too close to typing reboot or shutdown . Your palms start to sweat -- especially if you are doing this on a production machine.

    Rest assured: You can do it with the confidence that no machine will be rebooted. Go ahead, do it:

    # reset

    Now your screen is back to normal. This is much better than closing the window and then logging in again, especially if you just went through five machines to SSH to this machine.

    Trick 3: Collaboration with screen

    David, the high-maintenance user from product engineering, calls: "I need you to help me understand why I can't compile supercode.c on these new machines you deployed."

    "Fine," you say. "What machine are you on?"

    David responds: " Posh." (Yes, this fictional company has named its five production servers in honor of the Spice Girls.) OK, you say. You exercise your godlike root powers and on another machine become David:

    # su - david

    Then you go over to posh:

    # ssh posh

    Once you are there, you run:

    # screen -S foo

    Then you holler at David:

    "Hey David, run the following command on your terminal: # screen -x foo ."

    This will cause your and David's sessions to be joined together in the holy Linux shell. You can type or he can type, but you'll both see what the other is doing. This saves you from walking to the other floor and lets you both have equal control. The benefit is that David can watch your troubleshooting skills and see exactly how you solve problems.

    At last you both see what the problem is: David's compile script hard-coded an old directory that does not exist on this new server. You mount it, recompile, solve the problem, and David goes back to work. You then go back to whatever lazy activity you were doing before.

    The one caveat to this trick is that you both need to be logged in as the same user. Other cool things you can do with the screen command include having multiple windows and split screens. Read the man pages for more on that.

    But I'll give you one last tip while you're in your screen session. To detach from it and leave it open, type: Ctrl-A D . (I mean, hold down the Ctrl key and strike the A key. Then push the D key.)

    You can then reattach by running the screen -x foo command again.

    Trick 4: Getting back the root password

    You forgot your root password. Nice work. Now you'll just have to reinstall the entire machine. Sadly enough, I've seen more than a few people do this. But it's surprisingly easy to get on the machine and change the password. This doesn't work in all cases (like if you made a GRUB password and forgot that too), but here's how you do it in a normal case with a CentOS Linux example.

    First reboot the system. When it reboots you'll come to the GRUB screen as shown in Figure 1. Move the arrow key so that you stay on this screen instead of proceeding all the way to a normal boot.


    Figure 1. GRUB screen after reboot

    Next, select the kernel that will boot with the arrow keys, and type E to edit the kernel line. You'll then see something like Figure 2:


    Figure 2. Ready to edit the kernel line

    Use the arrow key again to highlight the line that begins with kernel , and press E to edit the kernel parameters. When you get to the screen shown in Figure 3, simply append the number 1 to the arguments as shown in Figure 3:


    Figure 3. Append the argument with the number 1

    Then press Enter , B , and the kernel will boot up to single-user mode. Once here you can run the passwd command, changing password for user root:

    sh-3.00# passwd
    New UNIX password:
    Retype new UNIX password:
    passwd: all authentication tokens updated successfully

    Now you can reboot, and the machine will boot up with your new password.

    Trick 5: SSH back door

    Many times I'll be at a site where I need remote support from someone who is blocked on the outside by a company firewall. Few people realize that if you can get out to the world through a firewall, then it is relatively easy to open a hole so that the world can come into you.

    In its crudest form, this is called "poking a hole in the firewall." I'll call it an SSH back door . To use it, you'll need a machine on the Internet that you can use as an intermediary.

    In our example, we'll call our machine blackbox.example.com. The machine behind the company firewall is called ginger. Finally, the machine that technical support is on will be called tech. Figure 4 explains how this is set up.


    Figure 4. Poking a hole in the firewall

    Here's how to proceed:

    1. Check that what you're doing is allowed, but make sure you ask the right people. Most people will cringe that you're opening the firewall, but what they don't understand is that it is completely encrypted. Furthermore, someone would need to hack your outside machine before getting into your company. Instead, you may belong to the school of "ask-for-forgiveness-instead-of-permission." Either way, use your judgment and don't blame me if this doesn't go your way.
    2. SSH from ginger to blackbox.example.com with the -R flag. I'll assume that you're the root user on ginger and that tech will need the root user ID to help you with the system. With the -R flag, you'll forward instructions of port 2222 on blackbox to port 22 on ginger. This is how you set up an SSH tunnel. Note that only SSH traffic can come into ginger: You're not putting ginger out on the Internet naked.

      You can do this with the following syntax:

      ~# ssh -R 2222:localhost:22 thedude@blackbox.example.com

      Once you are into blackbox, you just need to stay logged in. I usually enter a command like:

      thedude@blackbox:~$ while [ 1 ]; do date; sleep 300; done

      to keep the machine busy. And minimize the window.

    3. Now instruct your friends at tech to SSH as thedude into blackbox without using any special SSH flags. You'll have to give them your password:

      root@tech:~# ssh thedude@blackbox.example.com

    4. Once tech is on the blackbox, they can SSH to ginger using the following command:

      thedude@blackbox:~$ ssh -p 2222 root@localhost

    5. Tech will then be prompted for a password. They should enter the root password of ginger.
    6. Now you and support from tech can work together and solve the problem. You may even want to use screen together! (See Trick 3.)
    Trick 6: Remote VNC session through an SSH tunnel

    VNC or virtual network computing has been around a long time. I typically find myself needing to use it when the remote server has some type of graphical program that is only available on that server.

    For example, suppose in Trick 5 , ginger is a storage server. Many storage devices come with a GUI program to manage the storage controllers. Often these GUI management tools need a direct connection to the storage through a network that is at times kept in a private subnet. Therefore, the only way to access this GUI is to do it from ginger.

    You can try SSH'ing to ginger with the -X option and launch it that way, but many times the bandwidth required is too much and you'll get frustrated waiting. VNC is a much more network-friendly tool and is readily available for nearly all operating systems.

    Let's assume that the setup is the same as in Trick 5, but you want tech to be able to get VNC access instead of SSH. In this case, you'll do something similar but forward VNC ports instead. Here's what you do:

    1. Start a VNC server session on ginger. This is done by running something like:

      root@ginger:~# vncserver -geometry 1024x768 -depth 24 :99

      The options tell the VNC server to start up with a resolution of 1024x768 and a pixel depth of 24 bits per pixel. If you are using a really slow connection, setting 8 may be a better option. Using :99 specifies the port the VNC server will be accessible from. The VNC protocol starts at 5900, so specifying :99 means the server is accessible on port 5999.

      When you start the session, you'll be asked to specify a password. The user ID will be the same user that you launched the VNC server from. (In our case, this is root.)

    2. SSH from ginger to blackbox.example.com forwarding the port 5999 on blackbox to ginger. This is done from ginger by running the command:

      root@ginger:~# ssh -R 5999:localhost:5999 thedude@blackbox.example.com

      Once you run this command, you'll need to keep this SSH session open in order to keep the port forwarded to ginger. At this point if you were on blackbox, you could now access the VNC session on ginger by just running:

      thedude@blackbox:~$ vncviewer localhost:99

      That would forward the port through SSH to ginger. But we're interested in letting tech get VNC access to ginger. To accomplish this, you'll need another tunnel.

    3. From tech, you open a tunnel via SSH to forward your port 5999 to port 5999 on blackbox. This would be done by running:

      root@tech:~# ssh -L 5999:localhost:5999 thedude@blackbox.example.com

      This time the SSH flag we used was -L , which instead of pushing 5999 to blackbox, pulled from it. Once you are in on blackbox, you'll need to leave this session open. Now you're ready to VNC from tech!

    4. From tech, VNC to ginger by running the command:

      root@tech:~# vncviewer localhost:99

      Tech will now have a VNC session directly to ginger.

    While the effort might seem like a bit much to set up, it beats flying across the country to fix the storage arrays. Also, if you practice this a few times, it becomes quite easy.

    Let me add a trick to this trick: If tech was running the Windows® operating system and didn't have a command-line SSH client, then tech can run Putty. Putty can be set to forward SSH ports by looking in the options in the sidebar. If the port were 5902 instead of our example of 5999, then you would enter something like in Figure 5.


    Figure 5. Putty can forward SSH ports for tunneling

    If this were set up, then tech could VNC to localhost:2 just as if tech were running the Linux operating system.

    Trick 7: Checking your bandwidth

    Imagine this: Company A has a storage server named ginger and it is being NFS-mounted by a client node named beckham. Company A has decided they really want to get more bandwidth out of ginger because they have lots of nodes they want to have NFS mount ginger's shared filesystem.

    The most common and cheapest way to do this is to bond two Gigabit ethernet NICs together. This is cheapest because usually you have an extra on-board NIC and an extra port on your switch somewhere.

    So they do this. But now the question is: How much bandwidth do they really have?

    Gigabit Ethernet has a theoretical limit of 128MBps. Where does that number come from? Well,

    1Gb = 1024Mb ; 1024Mb/8 = 128MB ; "b" = "bits," "B" = "bytes"
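Spelled out as shell arithmetic (using the article's 1 Gb = 1024 Mb convention):

```shell
# Theoretical Gigabit Ethernet throughput in megabytes per second.
gigabit_in_megabits=1024
echo "$(( gigabit_in_megabits / 8 )) MBps"   # prints "128 MBps"
```

In practice, as noted below, TCP and physical overhead shave this down to roughly 112 MBps per link.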

    But what is it that we actually see, and what is a good way to measure it? One tool I suggest is iperf. You can grab iperf like this:

    # wget http://dast.nlanr.net/Projects/Iperf2.0/iperf-2.0.2.tar.gz

    You'll need to install it on a shared filesystem that both ginger and beckham can see, or compile and install it on both nodes. I'll compile it in the home directory of the bob user, which is visible on both nodes:

    tar zxvf iperf*gz
    cd iperf-2.0.2
    ./configure -prefix=/home/bob/perf
    make
    make install

    On ginger, run:

    # /home/bob/perf/bin/iperf -s -f M

    This machine will act as the server and print out performance speeds in MBps.

    On the beckham node, run:

    # /home/bob/perf/bin/iperf -c ginger -P 4 -f M -w 256k -t 60

    You'll see output in both screens telling you what the speed is. On a normal server with a Gigabit Ethernet adapter, you will probably see about 112MBps. This is normal as bandwidth is lost in the TCP stack and physical cables. By connecting two servers back-to-back, each with two bonded Ethernet cards, I got about 220MBps.

    In reality, what you see with NFS on bonded networks is around 150-160MBps. Still, this gives you a good indication that your bandwidth is going to be about what you'd expect. If you see something much less, then you should check for a problem.

    I recently ran into a case in which the bonding driver was used to bond two NICs that used different drivers. The performance was extremely poor, leading to about 20MBps in bandwidth, less than they would have gotten had they not bonded the Ethernet cards together!

    Trick 8: Command-line scripting and utilities

    A Linux systems administrator becomes more efficient by using command-line scripting with authority. This includes crafting loops and knowing how to parse data using utilities like awk , grep , and sed . There are many cases where doing so takes fewer keystrokes and lessens the likelihood of user errors.

    For example, suppose you need to generate a new /etc/hosts file for a Linux cluster that you are about to install. The long way would be to add IP addresses in vi or your favorite text editor. However, it can be done by taking the already existing /etc/hosts file and appending the following to it by running this on the command line:

    # P=1; for i in $(seq -w 200); do echo "192.168.99.$P n$i"; P=$(expr $P + 1);
    done >>/etc/hosts

    Two hundred host names, n001 through n200, will then be created with IP addresses 192.168.99.1 through 192.168.99.200. Populating a file like this by hand runs the risk of inadvertently creating duplicate IP addresses or host names, so this is a good example of using the built-in command line to eliminate user errors. Please note that this is done in the bash shell, the default in most Linux distributions.
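A safe way to try the loop above is to write to a scratch file first and eyeball it before appending to the real /etc/hosts (the preview file name is illustrative):

```shell
# Same generator as above, but targeting a preview file instead of /etc/hosts.
P=1
for i in $(seq -w 200); do
    echo "192.168.99.$P n$i"
    P=$((P + 1))
done > /tmp/hosts.preview
head -3 /tmp/hosts.preview
# prints:
# 192.168.99.1 n001
# 192.168.99.2 n002
# 192.168.99.3 n003
```

Once the preview looks right, `cat /tmp/hosts.preview >> /etc/hosts` finishes the job.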

    As another example, let's suppose you want to check that the memory size is the same in each of the compute nodes in the Linux cluster. In most cases of this sort, having a distributed or parallel shell would be the best practice, but for the sake of illustration, here's a way to do this using SSH.

    Assume SSH is set up to authenticate without a password. Then run:

    # for num in $(seq -w 200); do ssh n$num free -tm | grep Mem | awk '{print $2}';
    done | sort | uniq

    A command line like this looks pretty terse. (It can be worse if you put regular expressions in it.) Let's pick it apart and uncover the mystery.

    First you're doing a loop through 001-200. This padding with 0s in the front is done with the -w option to the seq command. Then you substitute the num variable to create the host you're going to SSH to. Once you have the target host, give the command to it. In this case, it's:

    free -tm | grep Mem | awk '{print $2}'

    That command says to:

    - run free -tm to display memory usage in megabytes, with a totals line;
    - grep Mem to keep only the line reporting physical memory; and
    - awk '{print $2}' to print the second column of that line, the total memory size.

    This operation is performed on every node.

    Once you have performed the command on every node, the entire output of all 200 nodes is piped (|'d) to the sort command so that all the memory values are sorted.

    Finally, you eliminate duplicates with the uniq command. The pipeline will end in one of the following cases:

    - a single value, meaning every node reports the same amount of memory; or
    - two or more values, meaning at least one node's memory differs from the rest.

    This command isn't perfect. If you find that a value of memory is different than what you expect, you won't know on which node it was or how many nodes there were. Another command may need to be issued for that.
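    One way to close that gap is to keep each hostname next to its value and summarize with awk. The snippet below is a sketch: the printf sample data stands in for the output of the ssh loop (hostnames and sizes are invented), and in real use you would feed the loop's output into the same awk command:

```shell
# Sample "hostname total-MB" lines standing in for the ssh loop output.
printf 'n001 64334\nn002 64334\nn003 32166\n' > mem.txt

# For each distinct memory size, count the nodes and list their names.
awk '{count[$2]++; nodes[$2] = nodes[$2] " " $1}
     END {for (v in count) printf "%s x%d ->%s\n", v, count[v], nodes[v]}' \
    mem.txt | sort -n > mem-report.txt
cat mem-report.txt   # the odd node out is visible at a glance
```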

    What this trick does give you, though, is a fast way to check for something and quickly learn if something is wrong. That is its real value: speed for a quick-and-dirty check.

    Trick 9: Spying on the console

    Some software prints error messages to the console that may not necessarily show up on your SSH session. Using the vcs devices can let you examine these. From within an SSH session, run the following command on a remote server: # cat /dev/vcs1 . This will show you what is on the first console. You can also look at the other virtual terminals using 2, 3, etc. If a user is typing on the remote system, you'll be able to see what he typed.

    In most data centers, using a remote terminal server, KVM, or even Serial Over LAN is the best way to view this information; it also provides the additional benefit of out-of-band viewing capabilities. Using the vcs device provides a fast in-band method that may save you a trip to the machine room to look at the console.

    Trick 10: Random system information collection

    In Trick 8 , you saw an example of using the command line to get information about the total memory in the system. In this trick, I'll offer up a few other methods to collect important information from the system you may need to verify, troubleshoot, or give to remote support.

    First, let's gather information about the processor. This is easily done as follows:

    # cat /proc/cpuinfo

    This command gives you information on the processor speed, quantity, and model. Using grep in many cases can give you the desired value.
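    For instance, the processor model appears once per logical CPU, so a grep plus sort -u reduces it to a single line:

```shell
# The 'model name' line repeats once per logical CPU; deduplicate it.
grep 'model name' /proc/cpuinfo | sort -u
```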

    A check that I do quite often is to ascertain the quantity of processors on the system. So, if I have purchased a dual processor quad-core server, I can run:

    # cat /proc/cpuinfo | grep processor | wc -l

    I would then expect to see 8 as the value. If I don't, I call up the vendor and tell them to send me another processor.
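    Two shorter equivalents, for what it's worth: grep can count matches on its own, and nproc from GNU coreutils reports the processors available to the process:

```shell
# grep -c counts matching lines, avoiding the extra cat and wc processes.
grep -c '^processor' /proc/cpuinfo
# nproc (GNU coreutils) is the modern one-word answer.
nproc
```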

    Another piece of information I may require is disk information. This can be obtained with the df command. I usually add the -h flag so that I can see the output in gigabytes or megabytes. # df -h also shows how the disk was partitioned.
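    A small refinement, in case it helps: -T adds the filesystem type, and naming a path limits the report to the filesystem that holds it:

```shell
# Show only the filesystem containing /, with its type and human sizes.
df -hT /
```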

    And to end the list, here's a way to look at the firmware of your system -- a method to get the BIOS level and the firmware on the NIC.

    To check the BIOS version, you can run the dmidecode command. Unfortunately, you can't easily grep for a single value, so pipe the output through less and search within it. On my Lenovo T61 laptop, the output looks like this:

    # dmidecode | less
    ...
    BIOS Information
    Vendor: LENOVO
    Version: 7LET52WW (1.22 )
    Release Date: 08/27/2007
    ...

    This is much more efficient than rebooting your machine and looking at the POST output.

    To examine the driver and firmware versions of your Ethernet adapter, run ethtool :

    # ethtool -i eth0
    driver: e1000
    version: 7.3.20-k2-NAPI
    firmware-version: 0.3-0
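    On a machine with several NICs, the interface names under /sys/class/net make it easy to sweep ethtool across all of them. This is a sketch; some fields may require root, and the loop deliberately keeps going if ethtool is missing or unprivileged:

```shell
# Iterate over every network interface known to the kernel.
for nic in /sys/class/net/*; do
  nic=$(basename "$nic")
  echo "== $nic =="
  # ethtool may be absent or unprivileged; don't abort the loop if so.
  ethtool -i "$nic" 2>/dev/null || echo "(no driver info available)"
done
```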

    Conclusion

    There are thousands of tricks you can learn from someone who's an expert at the command line.

    I hope at least one of these tricks helped you learn something you didn't know. Essential tricks like these make you more efficient and add to your experience, but most importantly, tricks give you more free time to do more interesting things, like playing video games. And the best administrators are lazy because they don't like to work. They find the fastest way to do a task and finish it quickly so they can continue in their lazy pursuits.

    About the author

    Vallard Benincosa is a lazy Linux Certified IT professional working for the IBM Linux Clusters team. He lives in Portland, OR, with his wife and two kids.

    [Dec 30, 2020] HPE ClearOS

    Dec 30, 2020 | arstechnica.com

    The last of the RHEL downstreams up for discussion today is Hewlett-Packard Enterprise's in-house distro, ClearOS . Hewlett-Packard makes ClearOS available as a pre-installed option on its ProLiant server line, and the company offers a free Community version to all comers.

    ClearOS is an open source software platform that leverages the open source model to deliver a simplified, low cost hybrid IT experience for SMBs. The value of ClearOS is the integration of free open source technologies making it easier to use. By not charging for open source, ClearOS focuses on the value SMBs gain from the integration so SMBs only pay for the products and services they need and value.

    ClearOS is mostly notable here for its association with industry giant HPE and its availability as an OEM distro on ProLiant servers. It seems to be a bit behind the times -- the most recent version is ClearOS 7.x, which is in turn based on RHEL 7. In addition to being a bit outdated compared with other options, it also appears to be a rolling release -- more comparable to CentOS Stream than to the CentOS Linux that came before it.

    ClearOS is probably most interesting to small-business types who might consider buying ProLiant servers with a RHEL-compatible OEM Linux pre-installed.

    [Dec 30, 2020] Where do I go now that CentOS Linux is gone- Check our list - Ars Technica

    Dec 30, 2020 | arstechnica.com

    Springdale Linux

    I've seen a lot of folks mistakenly recommending the deceased Scientific Linux distro as a CentOS replacement -- that won't work, because Scientific Linux itself was deprecated in favor of CentOS. However, Springdale Linux is very similar -- like Scientific Linux, it's a RHEL rebuild distro made by and for the academic scientific community. Unlike Scientific Linux, it's still actively maintained!

    Springdale Linux is maintained and made available by Princeton and Rutgers universities, who use it for their HPC projects. It has been around for quite a long time. One Springdale Linux user from Carnegie Mellon describes their own experience with Springdale (formerly PUIAS -- Princeton University Institute for Advanced Study) as a 10-year ride.

    Theresa Arzadon-Labajo, one of Springdale Linux's maintainers, gave a pretty good seat-of-the-pants overview in a recent mailing list discussion :

    The School of Mathematics at the Institute for Advanced Study has been using Springdale (formerly PUIAS, then PU_IAS) since its inception. All of our *nix servers and workstations (yes, workstations) are running Springdale. On the server side, everything "just works", as is expected from a RHEL clone. On the workstation side, most of the issues we run into have to do with NVIDIA drivers, and glibc compatibility issues (e.g Chrome, Dropbox, Skype, etc), but most issues have been resolved or have a workaround in place.

    ... Springdale is a community project, and [it] mostly comes down to the hours (mostly Josko) that we can volunteer to the project. The way people utilize Springdale varies. Some are like us and use the whole thing. Others use a different OS and use Springdale just for its computational repositories.

    Springdale Linux should be a natural fit for universities and scientists looking for a CentOS replacement. It will likely work for most anyone who needs it -- but its relatively small community and firm roots in academia will probably make it the most comfortable for those with similar needs and environments.

    [Dec 30, 2020] GhostBSD and a few others are spearheading a charge into the face of The Enemy, making BSD palatable for those of us steeped in Linux as the only alternative to we know who.

    Dec 30, 2020 | distrowatch.com

    64"best idea" ... (by Otis on 2020-12-25 19:38:01 GMT from United States)
    @62 dang it BSD takes care of all that anxiety about systemd and the other bloaty-with-time worries as far as I can tell. GhostBSD and a few others are spearheading a charge into the face of The Enemy, making BSD palatable for those of us steeped in Linux as the only alternative to we know who.

    [Dec 30, 2020] Scientific Linux website states that they are going to reconsider (in 1st quarter of 2021) whether they will produce a clone of rhel version 8. Previously, they stated that they would not.

    Dec 30, 2020 | distrowatch.com

    Centos (by David on 2020-12-22 04:29:46 GMT from United States)
    I was using Centos 8.2 on an older, desktop home computer. When Centos dropped long term support on version 8, I was a little peeved, but not a whole lot, since it is free, anyway. Out of curiosity I installed Scientific Linux 7.9 on the same computer, and it works better than Centos 8. Then I tried installing SL 7.9 on my old laptop -- it even worked on that!

    Previously, when I had tried to install Centos 8 on the laptop, an old Dell Inspiron 1501, the graphics were garbage -- the screen displayed kind of a color mosaic -- and the keyboard/everything else was locked up. I also tried Centos 7.9 on it and installation from the minimal DVD produced a bunch of errors and then froze part way through.

    I will stick with Scientific Linux 7 for now. In 2024 I will worry about which distro to migrate to. Note: The Scientific Linux website states that they are going to reconsider (in the 1st quarter of 2021) whether they will produce a clone of RHEL version 8. Previously, they stated that they would not.

    [Dec 30, 2020] Springdale vs. CentOS

    Dec 30, 2020 | distrowatch.com

    52Springdale vs. CentOS (by whoKnows on 2020-12-23 05:39:01 GMT from Switzerland)

    @51 • Personal opinion only. (by R. Cain)

    "Personal opinion only. [...] After all the years of using Linux, and experiencing first-hand the hobby mentality that has taken over [...], I prefer to use a distribution which has all the earmarks of [...] being developed AND MAINTAINED by a professional organization."

    Yeah, your answer is exactly what I expected it to be.

    The thing with Springdale is as follows: it's maintained by the very professional team of IT specialists at the Institute for Advanced Study (Princeton) for their own needs. That's why there's no fancy website, RHEL wiki, live ISOs and such.

    They also maintain several other repositories for add-on packages (computing, unsupported [with audio/video codecs] ...).

    In other words, if you're a professional who needs an RHEL clone, you'll be fine with it; if you're a hobbyist who needs a how-to on everything and anything, you can still use the knowledge base of RHEL/CentOS/Oracle ...

    If you're a small business that needs professional support, you'd get RHEL -- unlike CentOS, Springdale is not a commercial distribution selling you support and schooling. Springdale is made by professionals, for professionals.

    https://www.ias.edu/math/computing/Springdale-Linux
    https://researchcomputing.princeton.edu/faq/what-is-a-cluster

    [Dec 29, 2020] Migrer de CentOS à Oracle Linux : petit retour d'expérience - Le blog technique de Microlinux

    Highly recommended!
    Google translation
    Notable quotes:
    "... Free to use, free to download, free to update. Always ..."
    "... Unbreakable Enterprise Kernel ..."
    "... (What You Get Is What You Get ..."
    Dec 30, 2020 | blog.microlinux.fr

    In 2010 I had the opportunity to get my hands dirty with Oracle Linux during an installation and training mission carried out on behalf of ASF (Autoroutes du Sud de la France), which is now called Vinci Autoroutes. I had just published Linux aux petits oignons with Eyrolles, and since the CentOS 5.3 distribution on which it was based looked 99% like Oracle Linux 5.3 under the hood, ASF chose me to train their future Linux administrators.

    All these years, I knew that Oracle Linux existed, as did another series of Red Hat clones like CentOS, Scientific Linux, White Box Enterprise Linux, Princeton University's PUIAS project, etc. I didn't care any more, since CentOS perfectly met all my server needs.

    Following the disastrous announcement of the CentOS project, I had a discussion with my compatriot Michael Kofler, a Linux guru who has published a series of excellent books on our favorite operating system, and who has migrated from CentOS to Oracle Linux for the Linux administration courses he teaches at the University of Graz. This was not our first discussion on the subject, as the CentOS project had already accumulated a series of rather worrying delays in version 8 updates. In comparison, Oracle Linux does not suffer from these structural problems, so I kept this option in a corner of my head.

    A problematic reputation

    Oracle suffers from a problematic reputation within the free-software community, for a variety of reasons. It is the company that ruined OpenOffice and Java, got its claws into MySQL, and let Solaris sink. Oracle CEO Larry Ellison has drawn attention to himself with his unhinged support for Donald Trump. As for the company's commercial policy, it has been marked by notorious aggressiveness in the hunt for patents.

    On the other hand, we have applications like VirtualBox, free in both senses, which run perfectly on millions of developer workstations all over the world. And then there is the very discreet Oracle Linux, which has worked perfectly and without making any noise since 2006, and which is also an operating system both free of charge and free as in freedom.

    Install Oracle Linux

    For a first test, I installed Oracle Linux 7.9 and 8.3 in two virtual machines on my workstation. Since it is a Red Hat Enterprise Linux-compatible clone, the installation procedure is identical to that of RHEL and CentOS, apart from a few small details.

    Oracle Linux Installation

    Info: Normally, I never pay attention to the banners that scroll by in graphical installers. This time, the slogan "Free to use, free to download, free to update. Always" did catch my attention.

    An indestructible kernel?

    Oracle Linux provides its own Linux kernel, newer than the one shipped by Red Hat, named the Unbreakable Enterprise Kernel (UEK). This kernel is installed by default and replaces the older upstream kernels for versions 7 and 8. Here is what it looks like on Oracle Linux 7.9:

    $ uname -a
    Linux oracle-el7 5.4.17-2036.100.6.1.el7uek.x86_64 #2 SMP Thu Oct 29 17:04:48 
    PDT 2020 x86_64 x86_64 x86_64 GNU/Linux
    
    Well-organized package repositories

    At first glance, the organization of the official and semi-official package repositories seems much clearer and better organized than under CentOS. For details, I refer you to the respective explanatory pages for the 7.x and 8.x versions.

    Well-structured documentation

    Like the repository organization, Oracle Linux's documentation is worth mentioning here, because it is simply exemplary. The main index refers to the different versions of Oracle Linux, and from there you can access a whole series of documents in HTML and PDF formats that explain in detail the peculiarities of the system and its day-to-day management. As I work through this documentation, I discover a multitude of pleasant little details, such as the fact that Oracle packages display metadata for security updates, which is not the case for CentOS packages.

    Migrating from CentOS to Oracle Linux

    The "Switch your CentOS systems to Oracle Linux" web page identifies a number of reasons why Oracle Linux is a better choice than CentOS when you want a company-grade, free-as-in-beer operating system that provides low-risk updates for each version over a decade. This page also features a script, centos2ol.sh, that transforms an existing CentOS system into an Oracle Linux system on the fly with two commands.

    So I tested this script on a CentOS 7 server from Online/Scaleway.

    # curl -O https://linux.oracle.com/switch/centos2ol.sh
    # chmod +x centos2ol.sh
    # ./centos2ol.sh
    

    The script churns away for about twenty minutes; we then restart the machine and end up with a clean Oracle Linux system. To tidy up, just remove the deactivated repository files.

    # rm -f /etc/yum.repos.d/*.repo.deactivated
    
    Migrating a CentOS 8.x server?

    At first, the centos2ol.sh script only supported the migration from CentOS 7.9 to Oracle Linux 7.9. On a whim, I sent an email to the address at the bottom of the page, asking whether support for CentOS 8.x was expected in the near future.

    A very nice exchange of emails ensued with a guy from Oracle, who patiently answered all the questions I asked him. And just twenty-four hours later, he sent me a link to an Oracle Github repository with an updated version of the script that supports the on-the-fly migration of CentOS 8.x to Oracle Linux 8.x.

    So I tested it with a fresh installation of a CentOS 8 server at Online/Scaleway.

    # yum install git
    # git clone https://github.com/oracle/centos2ol.git
    # cd centos2ol/
    # chmod +x centos2ol.sh
    # ./centos2ol.sh
    

    Again, it churns for a good twenty minutes, and after the reboot we end up with a machine running Oracle Linux 8.
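    A couple of quick checks after the reboot can confirm the switch took. This is a sketch: /etc/oracle-release is the release file Oracle Linux installs, and a kernel version containing "uek" indicates the Unbreakable Enterprise Kernel is running:

```shell
# The release file exists only on a (successfully converted) Oracle Linux box.
cat /etc/oracle-release 2>/dev/null || echo "not an Oracle Linux system"
# Which kernel actually booted?
uname -r
```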

    Conclusion

    I will probably have a lot more to say about all this. For my part, I find this first experience with Oracle Linux rather convincing, and I decided to share it here because it will probably solve a problem common to many admins of production servers who cannot accept their system becoming a moving target overnight.

    Post Scriptum for the chilly purists

    Finally, for all of you who want a clone of Red Hat Enterprise Linux that is free in both senses without selling your soul to the devil, know that Springdale Linux is a solid alternative. It is maintained by Princeton University in the United States according to the WYGIWYG principle (What You Get Is What You Get): it is delivered rough and ready, without any documentation, but it works just as well.


    Writing this documentation takes time and significant amounts of espresso coffee. Do you like this blog? Give the editor a coffee by clicking on the cup.

    [Dec 29, 2020] Oracle Linux is "CentOS done right"

    Notable quotes:
    "... If you want a free-as-in-beer RHEL clone, you have two options: Oracle Linux or Springdale/PUIAS. My company's currently moving its servers to OL, which is "CentOS done right". Here's a blog article about the subject: ..."
    "... Each version of OL is supported for a 10-year cycle. Ubuntu has five years of support. And Debian's support cycle (one year after subsequent release) is unusable for production servers. ..."
    "... [Red Hat looks like ]... of a cartoon character sawing off the tree branch they are sitting on." ..."
    Dec 21, 2020 | distrowatch.com

    Microlinux

    And what about Oracle Linux? (by Microlinux on 2020-12-21 08:11:33 GMT from France)

    If you want a free-as-in-beer RHEL clone, you have two options: Oracle Linux or Springdale/PUIAS. My company's currently moving its servers to OL, which is "CentOS done right". Here's a blog article about the subject:

    https://blog.microlinux.fr/migration-centos-oracle-linux/

    Currently Rocky Linux is not much more than a README file on Github and a handful of Slack (ew!) discussion channels.

    Each version of OL is supported for a 10-year cycle. Ubuntu has five years of support. And Debian's support cycle (one year after subsequent release) is unusable for production servers.

    dragonmouth

    9@Jesse on CentOS: (by dragonmouth on 2020-12-21 13:11:04 GMT from United States)

    "There is no rush and I recommend waiting a bit for the dust to settle on the situation before leaping to an alternative. "

    For private users there may be plenty of time to find an alternative. However, corporate IT departments are not like jet skis able to turn on a dime. They are more like supertankers or aircraft carriers that take miles to make a turn. By the time all the committees meet and come to some decision, by the time all the upper managers who don't know what the heck they are talking about expound their opinions and by the time the CentOS replacement is deployed, a year will be gone. For corporations, maybe it is not a time to PANIC, yet, but it is high time to start looking for the O/S that will replace CentOS.

    Ricardo

    "This looks like the vendor equivalent..." (by Ricardo on 2020-12-21 18:06:49 GMT from Argentina)

    [Red Hat looks like ]... of a cartoon character sawing off the tree branch they are sitting on."

    Jesse, I couldn't have articulated it better. I'm stealing that phrase :)

    Cheers and happy holidays to everyone!

    [Dec 28, 2020] Time to move to Oracle Linux

    Dec 28, 2020 | www.cyberciti.biz
    Kyle Dec 9, 2020 @ 2:13

    It's an IBM money grab. It's a shame; I use CentOS to develop and host web applications on my Linode. Obviously, small-time like that, I can't afford Red Hat, but I use it at work. CentOS allowed me to come home, build skills in my free time, and apply them at work.

    I also use Ubuntu, but it looks like the shift will be greater to Ubuntu.

    Noname Dec 9, 2020 @ 4:20

    As others said here, this is a money grab. Me thinks IBM was the worst thing that happened to Linux since systemd...

    Yui Dec 9, 2020 @ 4:49

    Hello CentOS users,

    I also work for a non-profit (cancer and other research) and use CentOS for HPC. We chose CentOS over Debian due to the 10-year support cycle, and CentOS goes well with an HPC cluster. We also wanted every single penny to go to research purposes and not waste our donations and grants on software costs. What are my CentOS alternatives for HPC? Thanks in advance for any help you are able to provide.

    Holmes Dec 9, 2020 @ 5:06

    Folks who rely on CentOS saw this coming when Red Hat bought them 6 years ago. Last year IBM bought Red Hat. Now, IBM+Red Hat have found a way to kill the stable releases in order to get people signing up for RHEL subscriptions. Doesn't that sound exactly like the "EEE" (embrace, extend, extinguish) model?

    Petr Dec 9, 2020 @ 5:08

    For me it's simple.
    I will keep my openSUSE Leap and expand its footprint.
    Until another RHEL-compatible distro is out; if I need one for testing until then, I will use Oracle Linux with the RHEL kernel.
    OpenSUSE is the closest to RHEL in terms of stability (if not better) and I am very used to it. Time to get some SLES certifications as well.

    Someone Dec 9, 2020 @ 5:23

    While I like Debian, and better still Devuan (systemd ), some RHEL/CentOS features like kickstart and delta RPMs don't seem to be there (or as good). Debian preseeding is much more convoluted than kickstart, for example.

    Vonskippy Dec 10, 2020 @ 1:24

    That's ok. For us, we left RHEL (and the CentOS testing cluster) when the satan spawn known as SystemD became the standard. We're now a happy and successful FreeBSD shop.

    [Dec 28, 2020] This quick and dirty hack worked fine to convert centos 8 to oracle linux 8

    Notable quotes:
    "... this quick n'dirty hack worked fine to convert centos 8 to oracle linux 8, ymmv: ..."
    Dec 28, 2020 | blog.centos.org

    Phil says: December 9, 2020 at 2:10 pm

    this quick n'dirty hack worked fine to convert centos 8 to oracle linux 8, ymmv:

    repobase=http://yum.oracle.com/repo/OracleLinux/OL8/baseos/latest/x86_64/getPackage
    wget \
    ${repobase}/redhat-release-8.3-1.0.0.1.el8.x86_64.rpm \
    ${repobase}/oracle-release-el8-1.0-1.el8.x86_64.rpm \
    ${repobase}/oraclelinux-release-8.3-1.0.4.el8.x86_64.rpm \
    ${repobase}/oraclelinux-release-el8-1.0-9.el8.x86_64.rpm
    
    rpm -e centos-linux-release --nodeps
    dnf --disablerepo='*' localinstall ./*rpm 
    :> /etc/dnf/vars/ociregion
    dnf remove centos-linux-repos
    dnf --refresh distro-sync
    # since I wanted to try out the unbreakable enterprise kernel:
    dnf install kernel-uek
    reboot
    dnf remove kernel
    

    [Dec 28, 2020] Red Hat's interpretation of the CentOS 8 fiasco

    Highly recommended!
    " People are complaining because you are suddenly killing CentOS 8 which has been released last year with the promise of binary compatibility to RHEL 8 and security updates until 2029."
    One of the inherent features of the GPL is that it allows clones to exist. That means Oracle Linux, Rocky Linux, or some "Lenin Linux" will simply take CentOS's place, and Red Hat will be at a disadvantage, now unable to control the clone to the extent that it managed to co-opt and control CentOS. The "embrace and extinguish" charge will now hang on Red Hat and will probably continue to hang on it for years. That may not be what Red Hat's brass wanted: reputational damage with no corresponding gain in the revenue stream. I suppose the majority of the CentOS community will eventually migrate to the emerging RHEL clones. If that was the Red Hat / IBM goal -- well, they will reach it.
    Notable quotes:
    "... availability gap ..."
    "... Another long-winded post that doesn't address the single, core issue that no one will speak to directly: why can't CentOS Stream and CentOS _both_ exist? Because in absence of any official response from Red Hat, the assumption is obvious: to drive RHEL sales. If that's the reason, then say it. Stop being cowards about it. ..."
    "... We might be better off if Red Hat hadn't gotten involved in CentOS in the first place and left it an independent project. THEY choose to pursue this path and THEY chose to renege on assurances made around the non-stream distro. Now they're going to choose to deal with whatever consequences come from the loss of goodwill in the community. ..."
    "... If the problem was in money, all RH needed to do was to ask the community. You would have been amazed at the output. ..."
    "... You've alienated a few hunderd thousand sysadmins that started upgrading to 8 this year and you've thrown the scientific Linux community under a bus. You do realize Scientific Linux was discontinued because CERN and FermiLab decided to standardize on CentOS 8? This trickled down to a load of labs and research institutions. ..."
    "... Nobody forced you to buy out CentOS or offer a gratis distribution. But everybody expected you to stick to the EOL dates you committed to. You boast about being the "Enterprise" Linux distributor. Then, don't act like a freaking start-up that announces stuff today and vanishes a year later. ..."
    "... They should have announced this at the START of CentOS 8.0. Instead they started CentOS 8 with the belief it was going to be like CentOS7 have a long supported life cycle. ..."
    "... IBM/RH/CentOS keeps replaying the same talking points over and over and ignoring the actual issues people have ..."
    "... What a piece of stinking BS. What is this "gap" you're talking about? Nobody in the CentOS community cares about this pre-RHEL gap. You're trying to fix something that isn't broken. And doing that the most horrible and bizzarre way imaginable. ..."
    "... As I understand it, Fedora - RHEL - CENTOS just becomes Fedora - Centos Stream - RHEL. Why just call them RH-Alpha, RH-Beta, RH? ..."
    Dec 28, 2020 | blog.centos.org

    Let's go back to 2003 where Red Hat saw the opportunity to make a fundamental change to become an enterprise software company with an open source development methodology.

    To do so Red Hat made a hard decision and in 2003 split Red Hat Linux into Red Hat Enterprise Linux (RHEL) and Fedora Linux. RHEL was the occasional snapshot of Fedora Linux that was a product -- slowed, stabilized, and paced for production. Fedora Linux and the Project around it were the open source community for innovating -- speedier, prone to change, and paced for exploration. This solved the problem of trying to hold to two, incompatible core values (fast/slow) in a single project. After that, each distribution flourished within its intended audiences.

    But that split left two important gaps. On the project/community side, people still wanted an OS that strived to be slower-moving, stable-enough, and free of cost -- an availability gap . On the product/customer side, there was an openness gap -- RHEL users (and consequently all rebuild users) couldn't contribute easily to RHEL. The rebuilds arose and addressed the availability gap, but they were closed to contributions to the core Linux distro itself.

    In 2012, Red Hat's move toward offering products beyond the operating system resulted in a need for an easy-to-access platform for open source development of the upstream projects -- such as Gluster, oVirt, and RDO -- that these products are derived from. At that time, the pace of innovation in Fedora made it not an easy platform to work with; for example, the pace of kernel updates in Fedora led to breakage in these layered projects.

    We formed a team I led at Red Hat to go about solving this problem, and, after approaching and discussing it with the CentOS Project core team, Red Hat and the CentOS Project agreed to " join forces ." We said joining forces because there was no company to acquire, so we hired members of the core team and began expanding CentOS beyond being just a rebuild project. That included investing in the infrastructure and protecting the brand. The goal was to evolve into a project that also enabled things to be built on top of it, and a project that would be exponentially more open to contribution than ever before -- a partial solution to the openness gap.

    Bringing home the CentOS Linux users, folks who were stuck in that availability gap, closer into the Red Hat family was a wonderful side effect of this plan. My experience going from participant to active open source contributor began in 2003, after the birth of the Fedora Project. At that time, as a highly empathetic person I found it challenging to handle the ongoing emotional waves from the Red Hat Linux split. Many of my long time community friends themselves were affected. As a company, we didn't know if RHEL or Fedora Linux were going to work out. We had made a hard decision and were navigating the waters from the aftershock. Since then we've all learned a lot, including the more difficult dynamics of an open source development methodology. So to me, bringing the CentOS and other rebuild communities into an actual relationship with Red Hat again was wonderful to see, experience, and help bring about.

    Over the past six years since finally joining forces, we made good progress on those goals. We started Special Interest Groups (SIGs) to manage the layered project experience, such as the Storage SIG, Virt Sig, and Cloud SIG. We created a governance structure where there hadn't been one before. We brought RHEL source code to be housed at git.centos.org . We designed and built out a significant public build infrastructure and CI/CD system in a project that had previously been sealed-boxes all the way down.


    cmdrlinux says: December 19, 2020 at 2:36 pm

    "This brings us to today and the current chapter we are living in right now. The move to shift focus of the project to CentOS Stream is about filling that openness gap in some key ways. Essentially, Red Hat is filling the development and contribution gap that exists between Fedora and RHEL by shifting the place of CentOS from just downstream of RHEL to just upstream of RHEL."

    Another long-winded post that doesn't address the single, core issue that no one will speak to directly: why can't CentOS Stream and CentOS _both_ exist? Because in absence of any official response from Red Hat, the assumption is obvious: to drive RHEL sales. If that's the reason, then say it. Stop being cowards about it.

    Mark Danon says: December 19, 2020 at 4:14 pm

    Red Hat has no obligation to maintain both CentOS 8 and CentOS Stream. Heck, they have no obligation to maintain CentOS either. Maintaining both will only increase the workload of the CentOS maintainers. I don't suppose you are volunteering to help them do the work? Be thankful for the distribution you have been using so far, and move on.

    Dave says: December 20, 2020 at 7:16 am

    We might be better off if Red Hat hadn't gotten involved in CentOS in the first place and had left it an independent project. THEY chose to pursue this path and THEY chose to renege on assurances made around the non-Stream distro. Now they're going to choose to deal with whatever consequences come from the loss of goodwill in the community.

    If they were going to pull this stunt, they shouldn't have gone ahead with CentOS 8 at all, and instead fulfilled the lifecycle expectations for CentOS 7.

    Konstantin says: December 21, 2020 at 12:24 am

    Sorry, but that's BS. CentOS Stream and CentOS Linux are not mutually replaceable. You cannot sell that BS to anyone who actually knows the internals of how CentOS Linux was being developed.

    If the problem was in money, all RH needed to do was to ask the community. You would have been amazed at the output.

    No, it is just a primitive, direct and lame way to force "free users" to either pay, or become your free-to-use beta testers (CentOS Stream *is* beta, whatever you say).

    I predict you will be somewhat amazed at the actual results.

    Not to mention the breach of trust. Now, how much are all your (RH's) further promises and assurances worth?

    Chris Mair says: December 20, 2020 at 3:21 pm

    To: [email protected]
    To: [email protected]

    Hi,

    Re: https://blog.centos.org/2020/12/balancing-the-needs-around-the-centos-platform/

    you can spin this to the moon and back. The fact remains you just killed CentOS Linux and your users' trust by moving the EOL of CentOS Linux 8 from 2029 to 2021.

    You've alienated a few hundred thousand sysadmins who started upgrading to 8 this year, and you've thrown the Scientific Linux community under a bus. You do realize Scientific Linux was discontinued because CERN and FermiLab decided to standardize on CentOS 8? This trickled down to a load of labs and research institutions.

    Nobody forced you to buy out CentOS or offer a gratis distribution. But everybody expected you to stick to the EOL dates you committed to. You boast about being the "Enterprise" Linux distributor. Then, don't act like a freaking start-up that announces stuff today and vanishes a year later.

    The correct way to handle this would have been to kill the future CentOS 9, giving everybody the time to cope with the changes.

    I earned my RHCE in 2003 (yes, that's seventeen years ago). Since then, many times, I've recommended RHEL or CentOS to the clients I do freelance work for. Just a few weeks ago I was asked to give an opinion on six CentOS 7 boxes about to be deployed into a research system, to be upgraded to 8. I gave my go. Well, that didn't last long.

    What do you expect me to recommend now? Buying RHEL licenses? That may or may not have a certain cost per year, and may or may not be supported until a given date? Once you grant yourself the freedom to retract whatever published information, how can I trust you? What added value do I get over any of the community-supported distributions (given that I can support myself)?

    And no, CentOS Stream cannot "cover 95% (or so) of current user workloads". Stream was introduced as "a rolling preview of what's next in RHEL".

    I'm not interested at all in a "a rolling preview of what's next in RHEL". I'm interested in a stable distribution I can trust to get updates until the given EOL date.

    You've made me look elsewhere for that.

    -- Chris

    Chip says: December 20, 2020 at 6:16 pm

    I guess my biggest issue is they should have announced this at the START of CentOS 8.0. Instead they started CentOS 8 with the belief it was going to be like CentOS 7 and have a long supported life cycle. What they did was basically bait and switch. Not cool. Especially not cool for those running multiple nodes on high performance computing clusters.

    Alex says: December 21, 2020 at 12:51 am

    I have over 300,000 CentOS nodes that require long term support, as it's impossible to turn them over rapidly. I also have 154,000 RHEL nodes. I now have to migrate 454,000 nodes over to Ubuntu because Red Hat just made the dumbest decision I've seen, short of letting IBM acquire them. Whitehurst, how could you let this happen? Nothing like millions in lost revenue from a single customer.

    Nika jous says: December 21, 2020 at 1:43 pm

    Just migrated to OpenSUSE. Rather than crying over a dead OS, it's better to act. Red Hat is a sinking ship; it probably won't last the next decade. A legendary failure like IBM never has the upper hand in the Linux world. It's too competitive now; customers have more options to choose from. I think the person who made this decision is probably ignorant of the current market, or a top-grade fool.

    Ang says: December 22, 2020 at 2:36 am

    IBM/RH/CentOS keeps replaying the same talking points over and over and ignoring the actual issues people have. You say you are reading them, but you choose to ignore them, and that is even worse!

    People still don't understand why CentOS Stream and CentOS can't co-exist. If your goal was not to support CentOS 8, why did you publish the 2029 date, and why did you even release CentOS 8 in the first place?

    Hell, you could have at least had the goodwill with the community to make CentOS 8 last until the end of CentOS 7! But no, you discontinued CentOS 8 giving people only 1 year to respond, and timed it right after the EOL of CentOS 6.

    Why didn't you even bother asking the community first and come to a compromise or something?

    Again, not a single person had a problem with CentOS Stream; the problem was having the rug pulled out from under their feet! So stop pretending and address it properly!

    Even worse, you knew this was an issue, it's like literally #1 on your issue list "Shift Board to be more transparent in support of becoming a contributor-focused open source project"

    And you FAILED! Where was the transparency?!

    Ang says: December 22, 2020 at 2:36 am

    A link to the issue: https://git.centos.org/centos/board/issue/1

    AP says: December 22, 2020 at 6:55 am

    What a piece of stinking BS. What is this "gap" you're talking about? Nobody in the CentOS community cares about this pre-RHEL gap. You're trying to fix something that isn't broken. And you're doing it in the most horrible and bizarre way imaginable.

    Len Inkster says: December 22, 2020 at 4:13 pm

    As I understand it, Fedora - RHEL - CentOS just becomes Fedora - CentOS Stream - RHEL. Why not just call them RH-Alpha, RH-Beta, RH?

    Anyone who wants to continue with CentOS? Fork the project and maintain it yourselves. That's how we got to CentOS from Linus Torvalds' original Linux.

    Peter says: December 22, 2020 at 5:36 pm

    I can only describe this as a disappointment, if not a betrayal, of the whole CentOS user base. This decision was clearly made without considering its impact on the majority of CentOS community use cases.

    If you need an upstream contribution channel for RHEL, create it; do not destroy the stable downstream. Clear and simple. All other 'explanations' are cover-ups for the real purpose of this action.

    This stinks of politics within IBM/RH meddling with CentOS. I hope Rocky will bring the stability that the community was relying on with CentOS.

    Goodbye CentOS, it was a nice 15 years.

    Ken Sanderson says: December 23, 2020 at 1:57 pm

    We've just agreed to cancel our RHEL subscriptions and will be moving them, and our CentOS boxes, away as well. It was a nice run, and while it will be painful, it is a chance to move far, far away from the terrible decisions made here.

    [Dec 28, 2020] Red Hat Goes Full IBM and Says Farewell to CentOS - ServeTheHome

    Dec 28, 2020 | www.servethehome.com

    The intellectually easy answer to what is happening is that IBM is putting pressure on Red Hat to hit bigger numbers in the future. Red Hat sees a captive audience in its CentOS userbase and is looking to convert a percentage of it to paying customers. Everyone else can go to Ubuntu or elsewhere if they do not want to pay...

    [Dec 28, 2020] Call our sales people and open your wallet if you use CentOS in prod

    Dec 28, 2020 | freedomben.medium.com

    It seemed obvious (via Occam's Razor) that CentOS had cannibalized RHEL sales for the last time and was being put out to die. Statements like:

    If you are using CentOS Linux 8 in a production environment, and are
    concerned that CentOS Stream will not meet your needs, we encourage you
    to contact Red Hat about options.

    That line sure seemed like horrific marketing speak for "call our sales people and open your wallet if you use CentOS in prod." ( cue evil mustache-stroking capitalist villain ).

    ... CentOS will no longer be downstream of RHEL as it was previously. CentOS will now be upstream of the next RHEL minor release .

    ... ... ...

    I'm watching Rocky Linux closely myself. While I plan to use CentOS for the vast majority of my needs, Rocky Linux may have a place in my life as well, for example powering my home router. Generally speaking, I want my router to be as boring as absolutely possible. That said, even that may not stay true forever, if for example CentOS gets good WireGuard support.

    Lastly, but certainly not least, Red Hat has talked about upcoming low/no-cost RHEL options. Keep an eye out for those! I have no idea the details, but if you currently use CentOS for personal use, I am optimistic that there may be a way to get RHEL for free coming soon. Again, this is just my speculation (I have zero knowledge of this beyond what has been shared publicly), but I'm personally excited.


    [Dec 27, 2020] Why Red Hat dumped CentOS for CentOS Stream by Steven J. Vaughan-Nichols

    Red Hat always had an uneasy relationship with CentOS. Red Hat brass always viewed it as something that steals Red Hat licenses. So this "stop the steal" move might not be IBM-inspired, but it is firmly in the IBM tradition. And like many similar IBM moves, it will backfire.
    The hiring of CentOS developers in 2014 gave Red Hat unprecedented control over the project. Why on Earth would they now want independent projects like Rocky Linux to re-emerge and fill the vacuum? They can't avoid this side effect of using the GPL: it allows clones. Why a project that is hostile to Red Hat is better than an "in-house", domesticated project is unclear to me. As many large enterprises deploy a mix of Red Hat and CentOS, the initial reaction might be in the opposite direction from what the Red Hat brass expected: they will get fewer licenses, not more, by adopting the "One IBM way".
    Dec 21, 2020 | www.zdnet.com

    On Hacker News , the leading comment was: "Imagine if you were running a business, and deployed CentOS 8 based on the 10-year lifespan promise . You're totally screwed now, and Red Hat knows it. Why on earth didn't they make this switch starting with CentOS 9???? Let's not sugar coat this. They've betrayed us."

    Over at Reddit/Linux , another person snarled, "We based our Open Source project on the latest CentOS releases since CentOS 4. Our flagship product is running on CentOS 8 and we *sure* did bet the farm on the promised EOL of 31st May 2029."

    A popular tweet from The Best Linux Blog In the Unixverse, nixcraft , an account with over 200,000 followers, went: "Oracle buys Sun: Solaris Unix, Sun servers/workstations, and MySQL went to /dev/null. IBM buys Red Hat: CentOS is going to >/dev/null . Note to self: If a big vendor such as Oracle, IBM, MS, and others buys your fav software, start the migration procedure ASAP."

    Many others joined this chorus of annoyed CentOS users claiming it was IBM's fault that their favorite Linux was being taken away from them. Still others screamed that Red Hat was betraying open source itself.

    ... ... ...

    Still another ex-Red Hat official said: "If it wasn't for CentOS, Red Hat would have been a 10-billion-dollar company before Red Hat became a billion-dollar business."

    ... ... ...

    [Dec 27, 2020] There are now countless Internet servers out there that run CentOS. This is why the Debian project is so important.

    Dec 27, 2020 | freedomben.medium.com

    There are companies that sell appliances based on CentOS. Websense/Forcepoint is one of them. The Websense appliance runs the base OS of CentOS, on top of which runs their Web-filtering application. Same with RSA. Their NetWitness SIEM runs on top of CentOS.

    Likewise, there are now countless Internet servers out there that run CentOS. There's now a huge user base of CentOS out there.

    This is why the Debian project is so important. I will be converting everything that is currently CentOS to Debian. For those who want to use Ubuntu, the Debian derivative, that is probably also a good idea.

    [Dec 23, 2020] Red Hat and GPL: the uneasy romance ended long ago, but Red Hat still depends on the GPL, as it does not develop many components itself and gets them for free from the community and other vendors

    It is all about money and executive bonuses: shortsighted executives want more and more money, as if the current huge revenue is not enough...
    Dec 23, 2020 | www.zdnet.com

    A former Red Hat executive confided, "CentOS was gutting sales. The customer perception was 'it's from Red Hat and it's a clone of RHEL, so it's good to go!' It's not. It's a second-rate copy." From where this person sits, "This is 100% defensive to stave off more losses to CentOS."

    Still another ex-Red Hat official said: "If it wasn't for CentOS, Red Hat would have been a 10-billion-dollar company before Red Hat became a billion-dollar business."

    Yet another Red Hat staffer snapped, "Look at the CentOS FAQ . It says right there:

    CentOS Linux is NOT supported in any way by Red Hat, Inc.

    CentOS Linux is NOT Red Hat Linux, it is NOT Fedora Linux. It is NOT Red Hat Enterprise Linux. It is NOT RHEL. CentOS Linux does NOT contain Red Hat® Linux, Fedora, or Red Hat® Enterprise Linux.

    CentOS Linux is NOT a clone of Red Hat® Enterprise Linux.

    CentOS Linux is built from publicly available source code provided by Red Hat, Inc for Red Hat Enterprise Linux in a completely different (CentOS Project maintained) build system.

    We don't owe you anything."

    [Dec 23, 2020] Patch Command Tutorial With Examples For Linux by İsmail Baydan

    Sep 03, 2017 | www.poftut.com

    patch is a command used to apply patch files to files such as source code and configuration files. A patch file holds the difference between the original file and the new file. To produce the difference, or patch, we use the diff tool.

    Software consists of a large body of source code. The source code is developed by developers and changes over time. Shipping a whole new copy of every file for each change is neither practical nor fast, so distributing only the changes is the better way. The changes are applied to the old file, and the resulting patched file is compiled into the new version of the software.

    Syntax
    patch [options] [originalfile [patchfile]] 
     
     
    patch -pnum < patchfile
    
    Help
    $ patch --help
    
    Create Patch File

    Now we will create a patch file. For this step we need some simple source code in two different versions. We will call the source code file myapp.c .

    myapp_old.c
    #include <stdio.h>

    int main(void){
        printf("Hi poftut\n");
        return 0;
    }
    
    myapp.c
    #include <stdio.h>

    int main(void){
        printf("Hi poftut\n");
        printf("This is new line as a patch\n");
        return 0;
    }
    

    Now we will create a patch file named myapp.patch .

    $ diff -u myapp_old.c myapp.c > myapp.patch
    

    We can print the myapp.patch file with the following command:

    $ cat myapp.patch
    
    Apply Patch File

    Now we have a patch file, and we assume it has been transferred to the system that holds the old source code. We simply apply the patch file:

    $ patch < myapp.patch
    
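The create-and-apply cycle above can be exercised end to end in a scratch directory. This is a minimal sketch; all file names here are invented for the demo:

```shell
# End-to-end demo of the diff/patch round trip.
set -e
dir=/tmp/patch-demo
rm -rf "$dir" && mkdir -p "$dir" && cd "$dir"

printf 'line one\n'           > app.conf.orig   # old version
printf 'line one\nline two\n' > app.conf.new    # new version

# diff exits with status 1 when the files differ, so tolerate that
diff -u app.conf.orig app.conf.new > app.patch || true

cp app.conf.orig app.conf    # simulate the machine that only has the old file
patch app.conf < app.patch   # bring it up to date
cat app.conf                 # now matches app.conf.new
```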
    Take Backup Before Applying Patch

    One of the useful features is taking a backup before applying a patch. We will use the -b option to take a backup. In our example we will patch our source code file with myapp.patch .

    $ patch -b < myapp.patch
    

    The backup file name will be the same as the source file, with the .orig extension added. So the backup file will be named myapp.c.orig
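A small self-contained sketch of the -b behavior (file names invented for the demo):

```shell
# Demonstrates -b: patch keeps the pre-patch file as <name>.orig.
set -e
dir=/tmp/patch-backup-demo
rm -rf "$dir" && mkdir -p "$dir" && cd "$dir"

printf 'v1\n' > file.txt
printf 'v2\n' > new.txt
diff -u file.txt new.txt > f.patch || true

patch -b file.txt < f.patch   # -b saves file.txt as file.txt.orig first
cat file.txt                  # patched content: v2
cat file.txt.orig             # untouched backup: v1
```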

    Set Backup File Version

    While taking a backup, there may already be a backup file, so we need to save multiple backups without overwriting. The -V option sets the versioning mechanism used for the backup names. In this example we will use numbered versioning.

    $ patch -b -V numbered < myapp.patch
    

    The new backup file gets a numbered name, like myapp.c.~1~

    Validate Patch File Without Applying or Dry run

    We may want to only validate or preview the result of patching without changing any files. The --dry-run option emulates the patching process but does not actually modify anything.

    $ patch --dry-run < myapp.patch
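A quick sketch showing that --dry-run leaves the target untouched (demo file names):

```shell
# Demonstrates --dry-run: patch reports what it would do,
# but the target file is left unmodified.
set -e
dir=/tmp/patch-dryrun-demo
rm -rf "$dir" && mkdir -p "$dir" && cd "$dir"

printf 'a\n'    > t.txt
printf 'a\nb\n' > t-new.txt
diff -u t.txt t-new.txt > t.patch || true

patch --dry-run t.txt < t.patch   # reports the file it would patch
cat t.txt                         # still the original single line
```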
    
    Reverse Patch

    Sometimes we may need to apply a patch in reverse, so that its changes are undone. We can use the -R parameter for this operation. In this example we will reverse the patch against myapp_old.c rather than myapp.c

    $ patch -R myapp_old.c < myapp.patch
    

    As we can see, the new changes are reverted.
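The forward-then-reverse cycle can be sketched in a scratch directory (demo file names):

```shell
# Demonstrates -R: applying the same patch in reverse restores
# the original content.
set -e
dir=/tmp/patch-reverse-demo
rm -rf "$dir" && mkdir -p "$dir" && cd "$dir"

printf 'old\n' > r.txt
printf 'new\n' > r-new.txt
diff -u r.txt r-new.txt > r.patch || true

patch r.txt < r.patch      # forward: r.txt now reads "new"
patch -R r.txt < r.patch   # reverse: back to "old"
cat r.txt
```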

    2 thoughts on "Patch Command Tutorial With Examples For Linux"
    1. David K Hill 07/11/2019 at 4:15 am

      Thanks for the write-up to help me demystify the patching process. The hands-on tutorial definitely helped me.

      The ability to reverse the patch was most helpful!

      Reply
    2. Javed 28/12/2019 at 7:16 pm

      A very good and detailed explanation of the patch utility. I was able to simulate and practice it for better understanding; thanks for your efforts!

    [Dec 23, 2020] HowTo Apply a Patch File To My Linux

    Dec 23, 2020 | www.cyberciti.biz

    A note about working on an entire source tree

    First, make a copy of the source tree:
    ## Original source code is in lighttpd-1.4.35/ directory ##
    $ cp -R lighttpd-1.4.35/ lighttpd-1.4.35-new/

    cd into the lighttpd-1.4.35-new/ directory and make changes as per your requirements:
    $ cd lighttpd-1.4.35-new/
    $ vi geoip-mod.c
    $ vi Makefile

    Finally, create a patch with the following command:
    $ cd ..
    $ diff -rupN lighttpd-1.4.35/ lighttpd-1.4.35-new/ > my.patch

    You can use my.patch file to patch lighttpd-1.4.35 source code on a different computer/server using patch command as discussed above:
    patch -p1 < my.patch
    See the man pages of patch(1) and diff(1) for more information and usage.
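The whole-tree workflow above can be sketched end to end. Directory and file names are invented for the demo:

```shell
# Copy the tree, edit the copy, diff the two trees recursively,
# then apply the patch with -p1 from inside the old tree.
set -e
base=/tmp/tree-patch-demo
rm -rf "$base" && mkdir -p "$base" && cd "$base"

mkdir -p proj/src
printf 'int x = 1;\n' > proj/src/a.c
cp -R proj proj-new
printf 'int x = 2;\n' > proj-new/src/a.c   # the "change"

diff -rupN proj proj-new > my.patch || true

cd proj
patch -p1 < ../my.patch   # -p1 strips the leading "proj/" path component
cat src/a.c               # now matches the edited copy
```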



    [Dec 10, 2020] Here's a hot tip for the IBM geniuses that came up with this. Rebrand CentOS as New Coke, and you've got yourself a real winner.

    Dec 10, 2020 | blog.centos.org

    Ward Mundy says: December 9, 2020 at 3:12 am

    Happy to report that we've invested exactly one day in CentOS 7 to CentOS 8 migration. Thanks, IBM. Now we can turn our full attention to Debian and never look back.

    Here's a hot tip for the IBM geniuses that came up with this. Rebrand CentOS as New Coke, and you've got yourself a real winner.

    [Dec 10, 2020] Does Oracle Linux have staying power against Red Hat

    Notable quotes:
    "... If you need official support, Oracle support is generally cheaper than RedHat. ..."
    "... You can legally run OL free and have access to patches/repositories. ..."
    "... Full binary compatibility with RedHat so if anything is certified to run on RedHat, it automatically certified for Oracle Linux as well. ..."
    "... Premium OL subscription includes a few nice bonuses like DTrace and Ksplice. ..."
    "... Forgot to mention that converting RedHat Linux to Oracle is very straightforward - just matter of updating yum/dnf config to point it to Oracle repositories. Not sure if you can do it with CentOS (maybe possible, just never needed to convert CentOS to Oracle). ..."
    Dec 10, 2020 | blog.centos.org

    Matthew Stier says: December 8, 2020 at 8:11 pm

    My office switched the bulk of our RHEL to OL years ago, and find it a great product, and great support, and only needing to get support for systems we actually want support on.

    Oracle provided scripts to convert EL5, EL6, and EL7 systems, and I was able to convert some EL4 systems I still have running. (It's a matter of going through the list of installed packages, using 'rpm -e --justdb' to remove each package from the rpmdb, and re-installing the package (without dependencies) from the OL ISO.)

    art_ok 1 point· 5 minutes ago

    We have been using Oracle Linux exclusively last 5-6 years for everything - thousands of servers both for internal use and hundred or so customers.

    Not a single time have we regretted it, had any issues, or been tempted to move to RedHat, let alone CentOS.

    I found Oracle Linux has several advantages over RedHat/CentOS:

    - If you need official support, Oracle support is generally cheaper than RedHat.
    - You can legally run OL free and have access to patches/repositories.
    - Full binary compatibility with RedHat, so if anything is certified to run on RedHat, it is automatically certified for Oracle Linux as well.
    - It is very easy to switch between supported and free setups (say, you have a proof-of-concept setup running free OL, but then it is promoted to production status: it is just a matter of registering the box with Oracle, no need to reinstall/reconfigure anything).
    - You can easily move a license/support contract from one box to another, so you always run the same OS and do not have to decide between RedHat for production and CentOS for dev/test.
    - You have a choice to run the good old RedHat kernel or use the newer Oracle kernel (which is pretty much a vanilla kernel with minimal modifications, just newer). We generally run Oracle kernels on all boxes unless we have to support a particularly pedantic customer who insists on the old RedHat kernel.
    - Premium OL subscription includes a few nice bonuses like DTrace and Ksplice.

    Overall, it is pleasure to work and support OL.

    Negatives:

    - I found the RedHat knowledge base / documentation is much better than Oracle's.
    - Oracle does not offer extensive support for "advanced" products like JBoss, Directory Server, etc. Obviously Oracle has its own equivalent commercial offerings (WebLogic, etc.) and prefers customers to use them.
    - Some complain about the quality of Oracle's support. Can't really comment on that; I had little exposure to RedHat support, maybe used it a couple of times, and it was good. Oracle support can be slower, but in most cases it is good/sufficient. Actually, over the last few years support quality for Linux has improved noticeably; I guess Oracle pushes their cloud very aggressively and as a result invests in Linux support (as Oracle cloud, aka OCI, runs on Oracle Linux).
    art_ok 1 point· just now

    Forgot to mention that converting RedHat Linux to Oracle is very straightforward: it is just a matter of updating the yum/dnf config to point to the Oracle repositories. Not sure if you can do it with CentOS (maybe possible, just never needed to convert CentOS to Oracle).
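A hypothetical sketch of the repo swap the commenter describes: pointing dnf at Oracle's public yum server instead of the Red Hat/CentOS repos. The repo id and baseurl below follow Oracle's published layout, but verify them against yum.oracle.com before relying on them; the file is written to /tmp here purely for illustration, while a real repo file belongs in /etc/yum.repos.d/. Oracle also publishes a centos2ol.sh switch script that automates the conversion.

```shell
# Write a demo Oracle Linux repo definition (illustrative only --
# repo id and URLs are assumptions to be checked against yum.oracle.com).
set -e
repo=/tmp/oracle-linux-demo.repo
cat > "$repo" <<'EOF'
[ol8_baseos_latest]
name=Oracle Linux 8 BaseOS Latest (x86_64)
baseurl=https://yum.oracle.com/repo/OracleLinux/OL8/baseos/latest/x86_64/
gpgcheck=1
enabled=1
EOF
grep '^baseurl' "$repo"   # show the configured mirror
```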

    [Dec 10, 2020] Backlash against Red Hat management started

    In the end IBM/Red Hat might even lose money, as powerful organizations such as universities might abandon Red Hat as the platform. Or maybe not. Red Hat managed to push systemd down users' throats without any major hit to revenue. Why not repeat the trick with CentOS? In any case, IBM owns enterprise Linux, and the bitter complaints and threats of retribution in this forum are just a symptom that development is now completely driven by corporate brass, and all key decisions belong to them.
    Community-wise, this is plain bad news for Open Source and all Open Source communities. IBM explained to them very clearly: you do not matter. And an organized minority always beats a disorganized majority. Actually, most large organizations will probably stick with a Red Hat compatible OS, probably moving to Oracle Linux or to Rocky Linux, if it materializes, rather than to Debian.
    What is interesting is that most people here believe that when security patches stop, that is the end of life for that particular Linux version. It is an interesting superstition, and it shows how conditioned by corporations Linux folk are, and how far from the BSD folk they actually are. Security is an architectural thing first and foremost. Patches are just a band-aid; they cannot change the general security situation in Linux no matter how hard anyone tries. But they now serve as a powerful tool of corporate mind control over the user population. Fear is a powerful instrument of mind control.
    In reality, the security of most systems on an internal network does not change one bit with patches. And on an external network, only applications with open ports matter (that is why ssh should be restricted to the subnets that use it, not opened to the whole world).
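    The ssh restriction recommended above can be expressed, for example, as a firewalld rich rule. This is a configuration sketch assuming a firewalld-based host; the 10.10.0.0/16 subnet is a placeholder for your actual management network.

    ```shell
    # Allow ssh only from an internal management subnet instead of the whole world.
    # 10.10.0.0/16 is a placeholder; firewalld must be the active host firewall,
    # and these commands must run as root.
    firewall-cmd --permanent --remove-service=ssh
    firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.10.0.0/16" service name="ssh" accept'
    firewall-cmd --reload
    ```

    The same effect can be achieved with plain iptables/nftables rules or, at the application layer, with a Match Address block in sshd_config; the point is that the port is simply unreachable from untrusted networks, patched or not.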
    Notable quotes:
    "... Bad idea. The whole point of using CentOS is it's an exact binary-compatible rebuild of RHEL. With this decision RH is killing CentOS and inviting to create a new *fork* or use another distribution ..."
    "... We all knew from the moment IBM bought Redhat that we were on borrowed time. IBM will do everything they can to push people to RHEL even if that includes destroying a great community project like CentOS. ..."
    "... First CoreOS, now CentOS. It's about time to switch to one of the *BSDs. ..."
    "... I guess that means the tens of thousands of cores of research compute I manage at a large University will be migrating to Debian. ..."
    "... IBM is declining, hence they need more profit from "useless" product line. So disgusting ..."
    "... An entire team worked for months on a centos8 transition at the uni I work at. I assume a small portion can be salvaged but reading this it seems most of it will simply go out the window ..."
    "... Unless the community can center on a new single proper fork of RHEL, it makes the most sense (to me) to seek refuge in Debian as it is quite close to CentOS in stability terms. ..."
    "... Another one bites the dust due to corporate greed, which IBM exemplifies ..."
    "... More likely to drive people entirely out of the RHEL ecosystem. ..."
    "... Don't trust Red Hat. 1 year ago Red Hat's CTO Chris Wright agreed in an interview: 'Old school CentOS isn't going anywhere. Stream is available in parallel with the existing CentOS builds. In other words, "nothing changes for current users of CentOS."' https://www.zdnet.com/article/red-hat-introduces-rolling-release-centos-stream/ ..."
    "... 'To be exact, CentOS Stream is an upstream development platform for ecosystem developers. It will be updated several times a day. This is not a production operating system. It's purely a developer's distro.' ..."
    "... Read again: CentOS Stream is not a production operating system. 'Nuff said. ..."
    "... This makes my decision to go with Ansible and CentOS 8 in our enterprise simple. Nope, time to got with Puppet or Chef. ..."
    "... Ironic, and it puts those of us who have recently migrated many of our development serves to CentOS8 in a really bad spot. Luckily we haven't licensed RHEL8 production servers yet -- and now that's never going to happen. ..."
    "... What IBM fails to understand is that many of us who use CentOS for personal projects also work for corporations that spend millions of dollars annually on products from companies like IBM and have great influence over what vendors are chosen. This is a pure betrayal of the community. Expect nothing less from IBM. ..."
    "... IBM is cashing in on its Red Hat acquisition by attempting to squeeze extra licenses from its customers.. ..."
    "... Hoping that stabbing Open Source community in the back, will make it switch to commercial licenses is absolutely preposterous. This shows how disconnected they're from reality and consumed by greed and it will simply backfire on them, when we switch to Debian or any other LTS alternative. ..."
    "... Centos was handy for education and training purposes and production when you couldn't afford the fees for "support", now it will just be a shadow of Fedora. ..."
    "... There was always a conflict of interest associated with Redhat managing the Centos project and this is the end result of this conflict of interest. ..."
    "... The reality is that someone will repackage Redhat and make it just like Centos. The only difference is that Redhat now live in the same camp as Oracle. ..."
    "... Everyone predicted this when redhat bought centos. And when IBM bought RedHat it cemented everyone's notion. ..."
    "... I am senior system admin in my organization which spends millions dollar a year on RH&IBM products. From tomorrow, I will do my best to convince management to minimize our spending on RH & IBM ..."
    "... IBM are seeing every CentOS install as a missed RHEL subscription... ..."
    "... Some years ago IBM bought Informix. We switched to PostgreSQL, when Informix was IBMized. One year ago IBM bought Red Hat and CentOS. CentOS is now IBMized. Guess what will happen with our CentOS installations. What's wrong with IBM? ..."
    "... Remember when RedHat, around RH-7.x, wanted to charge for the distro, the community revolted so much that RedHat saw their mistake and released Fedora. You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time. ..."
    "... As I predicted, RHEL is destroying CentOS, and IBM is running Red Hat into the ground in the name of profit$. Why is anyone surprised? I give Red Hat 12-18 months of life, before they become another ordinary dept of IBM, producing IBM Linux. ..."
    "... Happy to donate and be part of the revolution away the Corporate vampire Squid that is IBM ..."
    "... Red Hat's word now means nothing to me. Disagreements over future plans and technical direction are one thing, but you *lied* to us about CentOS 8's support cycle, to the detriment of *everybody*. You cost us real money relying on a promise you made, we thought, in good faith. ..."
    Dec 10, 2020 | blog.centos.org

    Internet User says: December 8, 2020 at 5:13 pm

    This is a pretty clear indication that you people are completely out of touch with your users.

    Joel B. D. says: December 8, 2020 at 5:17 pm

    Bad idea. The whole point of using CentOS is it's an exact binary-compatible rebuild of RHEL. With this decision RH is killing CentOS and inviting to create a new *fork* or use another distribution. Do you realize how much market share you will be losing and how much chaos you will be creating with this?

    "If you are using CentOS Linux 8 in a production environment, and are concerned that CentOS Stream will not meet your needs, we encourage you to contact Red Hat about options". So this is the way RH is telling us they don't want anyone to use CentOS anymore and switch to RHEL?

    Michael says: December 8, 2020 at 8:31 pm

    That's exactly what they're saying. We all knew from the moment IBM bought Redhat that we were on borrowed time. IBM will do everything they can to push people to RHEL even if that includes destroying a great community project like CentOS.

    OS says: December 8, 2020 at 6:20 pm

    First CoreOS, now CentOS. It's about time to switch to one of the *BSDs.

    JD says: December 8, 2020 at 6:35 pm

    Wow. Well, I guess that means the tens of thousands of cores of research compute I manage at a large University will be migrating to Debian. I've just started preparing to shift from Scientific Linux 7 to CentOS due to SL being discontinued by 2024. Glad I've only just started - not much work to throw away.

    ShameOnIBM says: December 8, 2020 at 7:07 pm

    IBM is declining, hence they need more profit from "useless" product line. So disgusting

    MLF says: December 8, 2020 at 7:15 pm

    An entire team worked for months on a centos8 transition at the uni I work at. I assume a small portion can be salvaged but reading this it seems most of it will simply go out the window. Does anyone know if this decision of dumping centos8 is final?

    MM says: December 8, 2020 at 7:28 pm

    Unless the community can center on a new single proper fork of RHEL, it makes the most sense (to me) to seek refuge in Debian as it is quite close to CentOS in stability terms.

    Already existing functioning distribution ecosystem, can probably do good with influx of resources to enhance the missing bits, such as further improving SELinux support and expanding Debian security team.

    I say this without any official or unofficial involvement with the Debian project, other than being a user.

    And we have just launched hundred of Centos 8 servers.

    Faisal Sehbai says: December 8, 2020 at 7:32 pm

    Another one bites the dust due to corporate greed, which IBM exemplifies. This is why I shuddered when they bought RH. There is nothing that IBM touches that gets better, other than the bottom line of their suits!

    Disgusting!

    William Smith says: December 8, 2020 at 7:39 pm

    This is a big mistake. RedHat did this with RedHat Linux 9 the market leading Linux and created Fedora, now an also-ran to Ubuntu. I spent a lot of time during Covid to convert from earlier versions to 8, and now will have to review that work with my customer.

    Daniele Brunengo says: December 8, 2020 at 7:48 pm

    I just finished building a CentOS 8 web server, worked out all the nooks and crannies and was very satisfied with the result. Now I have to do everything from scratch? The reason why I chose this release was that every website and its brother were giving a 2029 EOL. Changing that is the worst betrayal of trust possible for the CentOS community. It's unbelievable.

    David Potterveld says: December 8, 2020 at 8:08 pm

    What a colossal blunder: a pivot from the long-standing mission of an OS providing stability, to an unstable development platform, in a manner that betrays its current users. They should remove the "C" from CentOS because it no longer has any connection to a community effort. I wonder if this is a move calculated to drive people from a free near clone of RHEL to a paid RHEL subscription? More likely to drive people entirely out of the RHEL ecosystem.

    a says: December 8, 2020 at 9:08 pm

    From a RHEL perspective I understand why they'd want it this way. CentOS was probably cutting deep into potential RedHat license sales. Though why or how RedHat would have a say in how CentOS is being run in the first place is.. troubling.

    From a CentOS perspective you may as well just take the project out back and close it now. If people wanted to run beta-test tier RHEL they'd run Fedora. "LATER SECURITY FIXES AND UNTESTED 'FEATURES'?! SIGN ME UP!" -nobody

    I'll probably run CentOS 7 until the end and then swap over to Debian when support starts hurting me. What a pain.

    Ralf says: December 8, 2020 at 9:08 pm

    Don't trust Red Hat. 1 year ago Red Hat's CTO Chris Wright agreed in an interview: 'Old school CentOS isn't going anywhere. Stream is available in parallel with the existing CentOS builds. In other words, "nothing changes for current users of CentOS."' https://www.zdnet.com/article/red-hat-introduces-rolling-release-centos-stream/

    I'm a current user of old school CentOS, so keep your promise, Mr CTO.

    Tamas says: December 8, 2020 at 10:01 pm

    That was quick: "Old school CentOS isn't going anywhere. Stream is available in parallel with the existing CentOS builds. In other words, "nothing changes for current users of CentOS."

    https://www.zdnet.com/article/red-hat-introduces-rolling-release-centos-stream/

    Konstantin says: December 9, 2020 at 3:36 pm

    From the same article: 'To be exact, CentOS Stream is an upstream development platform for ecosystem developers. It will be updated several times a day. This is not a production operating system. It's purely a developer's distro.'

    Read again: CentOS Stream is not a production operating system. 'Nuff said.

    Samuel C. says: December 8, 2020 at 10:53 pm

    This makes my decision to go with Ansible and CentOS 8 in our enterprise simple. Nope, time to got with Puppet or Chef. IBM did what I thought they would screw up Red Hat. My company is dumping IBM software everywhere - this means we need to dump CentOS now too.

    Brendan says: December 9, 2020 at 12:15 am

    Ironic, and it puts those of us who have recently migrated many of our development serves to CentOS8 in a really bad spot. Luckily we haven't licensed RHEL8 production servers yet -- and now that's never going to happen.

    vinci says: December 8, 2020 at 11:45 pm

    I can't believe what IBM is actually doing. This is a direct move against all that open source means. They want to do exactly the same thing they're doing with awx (vs. ansible tower). You're going against everything that stands for open source. And on top of that you choose to stop offering support for Centos 8, all of a sudden! What a horrid move on your part. This only reliable choice that remains is probably going to be Debian/Ubuntu. What a waste...

    Peter Vonway says: December 8, 2020 at 11:56 pm

    What IBM fails to understand is that many of us who use CentOS for personal projects also work for corporations that spend millions of dollars annually on products from companies like IBM and have great influence over what vendors are chosen. This is a pure betrayal of the community. Expect nothing less from IBM.

    Scott says: December 9, 2020 at 8:38 am

    This is exactly it. IBM is cashing in on its Red Hat acquisition by attempting to squeeze extra licenses from its customers.. while not taking into account the fact that Red Hat's strong adoption into the enterprise is a direct consequence of engineers using the nonproprietary version to develop things at home in their spare time.

    Having an open source, non support contract version of your OS is exactly what drives adoption towards the supported version once the business decides to put something into production.

    They are choosing to kill the golden goose in order to get the next few eggs faster. IBM doesn't care about anything but its large enterprise customers. Very stereotypically IBM.

    OSLover says: December 9, 2020 at 12:09 am

    So sad. Not only breaking the support promise but so quickly (2021!)

    Business wise, a lot of business software is providing CentOS packages and support. Like hosting panels, backup software, virtualization, Management. I mean A LOT of money worldwide is in dark waters now with this announcement. It took years for CentOS to appear in their supported and tested distros. It will disappear now much faster.

    Community wise, this is plain bad news for Open Source and all Open Source communities. This is sad. I wonder, are open source developers nowadays happy to spend so many hours for something that will in the end benefit IBM "subscribers" only in the end? I don't think they are.

    What a sad way to end 2020.

    technick says: December 9, 2020 at 12:09 am

    I don't want to give up on CentOS but this is a strong life changing decision. My background is linux engineering with over 15+ years of hardcore experience. CentOS has always been my go to when an organization didn't have the appetite for RHEL and the $75 a year license fee per instance. I fought off Ubuntu take overs at 2 of the last 3 organizations I've been with successfully. I can't, won't fight off any more and start advocating for Ubuntu or pure Debian moving forward.

    RIP CentOS. Red Hat killed a great project. I wonder if Anisble will be next?

    ConcernedAdmin says: December 9, 2020 at 12:47 am

    Hoping that stabbing Open Source community in the back, will make it switch to commercial licenses is absolutely preposterous. This shows how disconnected they're from reality and consumed by greed and it will simply backfire on them, when we switch to Debian or any other LTS alternative. I can't think moving everything I so caressed and loved to a mess like Ubuntu.

    John says: December 9, 2020 at 1:32 am

    Assinine. This is completely ridiculous. I have migrated several servers from CentOS 7 to 8 recently with more to go. We also have a RHEL subscription for outward facing servers, CentOS internal. This type of change should absolutely have been announced for CentOS 9. This is garbage saying 1 year from now when it was supposed to be till 2029. A complete betrayal. One year to move everything??? Stupid.

    Now I'm going to be looking at a couple of other options but it won't be RHEL after this type of move. This has destroyed my trust in RHEL as I'm sure IBM pushed for this. You will be losing my RHEL money once I chose and migrate. I get companies exist to make money and that's fine. This though is purely a naked money grab that betrays an established timeline and is about to force massive work on lots of people in a tiny timeframe saying "f you customers.". You will no longer get my money for doing that to me

    Concerned Fren says: December 9, 2020 at 1:52 am

    In hind sight it's clear to see that the only reason RHEL took over CentOS was to kill the competition.

    This is also highly frustrating as I just completed new CentOS8 and RHEL8 builds for Non-production and Production Servers and had already begun deployments. Now I'm left in situation of finding a new Linux distribution for our enterprise while I sweat out the last few years of RHEL7/CentOS7. Ubuntu is probably a no go there enterprise tooling is somewhat lacking, and I am of the opinion that they will likely be gobbled up buy Microsoft in the next few years.

    Unfortunately, the short-sighted RH/IBMer that made this decision failed to realize that a lot of Admins that used Centos at home and in the enterprise also advocated and drove sales towards RedHat as well. Now with this announcement I'm afraid the damage is done and even if you were to take back your announcement, trust has been broken and the blowback will ultimately mean the death of CentOS and reduced sales of RHEL. There is however an opportunity for another Corporations such as SUSE which is own buy Microfocus to capitalize on this epic blunder simply by announcing an LTS version of OpenSues Leap. This would in turn move people/corporations to the Suse platform which in turn would drive sale for SLES.

    William Ashford says: December 9, 2020 at 2:02 am

    So the inevitable has come to pass, what was once a useful Distro will disappear like others have. Centos was handy for education and training purposes and production when you couldn't afford the fees for "support", now it will just be a shadow of Fedora.

    Christian Reiss says: December 9, 2020 at 6:28 am

    This is disgusting. Bah. As a CTO I will now - today - assemble my teams and develop a plan to migrate all DataCenters back to Debian for good. I will also instantly instruct the termination of all mirroring of your software.

    For the software (CentOS) I hope for a quick death that will not drag on for years.

    Ian says: December 9, 2020 at 2:10 am

    This is a bit sad. There was always a conflict of interest associated with Redhat managing the Centos project and this is the end result of this conflict of interest.

    There is a genuine benefit associated with the existence of Centos for Redhat however it would appear that that benefit isn't great enough and some arse clown thought that by forcing users to migrate it will increase Redhat's revenue.

    The reality is that someone will repackage Redhat and make it just like Centos. The only difference is that Redhat now live in the same camp as Oracle.

    cody says: December 9, 2020 at 4:53 am

    Everyone predicted this when redhat bought centos. And when IBM bought RedHat it cemented everyone's notion.

    Ganesan Rajagopal says: December 9, 2020 at 5:09 am

    Thankfully we just started our migration from CentOS 7 to 8 and this surely puts a stop to that. Even if CentOS backtracks on this decision because of community backlash, the reality is the trust is lost. You've just given a huge leg for Ubuntu/Debian in the enterprise. Congratulations!

    Bomel says: December 9, 2020 at 6:22 am

    I am senior system admin in my organization which spends millions dollar a year on RH&IBM products. From tomorrow, I will do my best to convince management to minimize our spending on RH & IBM, and start looking for alternatives to replace existing RH & IBM products under my watch.

    Steve says: December 9, 2020 at 8:57 am

    IBM are seeing every CentOS install as a missed RHEL subscription...

    Ralf says: December 9, 2020 at 10:29 am

    Some years ago IBM bought Informix. We switched to PostgreSQL, when Informix was IBMized. One year ago IBM bought Red Hat and CentOS. CentOS is now IBMized. Guess what will happen with our CentOS installations. What's wrong with IBM?

    Michel-André says: December 9, 2020 at 5:18 pm

    Hi all,

    Remember when RedHat, around RH-7.x, wanted to charge for the distro, the community revolted so much that RedHat saw their mistake and released Fedora. You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time.

    Even though RedHat/CentOS has a very large share of the Linux server market, it will suffer the same fate as Novell (had 85% of the matket), disappearing into darkness !

    Mihel-André

    PeteVM says: December 9, 2020 at 5:27 pm

    As I predicted, RHEL is destroying CentOS, and IBM is running Red Hat into the ground in the name of profit$. Why is anyone surprised? I give Red Hat 12-18 months of life, before they become another ordinary dept of IBM, producing IBM Linux.

    CentOS is dead. Time to either go back to Debian and its derivatives, or just pay for RHEL, or IBMEL, and suck it up.

    JadeK says: December 9, 2020 at 6:36 pm

    I am mid-migration from Rhel/Cent6 to 8. I now have to stop a major project for several hundred systems. My group will have to go back to rebuild every CentOS 8 system we've spent the last 6 months deploying.

    Congrats fellas, you did it. You perfected the transition to Debian from CentOS.

    Godimir Kroczweck says: December 9, 2020 at 8:21 pm

    I find it kind of funny, I find it kind of sad. The dreams in which I moving 1.5K+ machines to whatever distro I yet have to find fitting for replacement to are the..

    Wait. How could one with all the seriousness consider cutting down already published EOL a good idea?

    I literally had to convince people to move from Ubuntu and Debian installations to CentOS for sake of stability and longer support, just for become looking like a clown now, because with single move distro deprived from both of this.

    Paul R says: December 9, 2020 at 9:14 pm

    Happy to donate and be part of the revolution away the Corporate vampire Squid that is IBM

    Nicholas Knight says: December 9, 2020 at 9:34 pm

    Red Hat's word now means nothing to me. Disagreements over future plans and technical direction are one thing, but you *lied* to us about CentOS 8's support cycle, to the detriment of *everybody*. You cost us real money relying on a promise you made, we thought, in good faith. It is now clear Red Hat no longer knows what "good faith" means, and acts only as a Trumpian vacuum of wealth.

    [Dec 10, 2020] GPL bites Red Hat in the butt: they might face the emergence of a CentOS alternative due to the wave of support for such a distro

    Dec 10, 2020 | blog.centos.org

    Sam Callis says: December 8, 2020 at 3:58 pm

    I have been using CentOS for over 10 years and one of the things I loved about it was how stable it has been. Now, instead of being a stable release, it is changing to the beta testing ground for RHEL 8.

    And instead of 10 years of a support you need to update to the latest dot release. This has me, very concerned.

    Sieciowski says: December 9, 2020 at 11:19 am

    well, 10 years - have you ever contributed with anything for the CentOS community, or paid them a wage or at least donated some decent hardware for development or maybe just being parasite all the time and now are you surprised that someone has to buy it's your own lunches for a change?

    If you think you might have done it even better why not take RH sources and make your own FreeRHos whatever distro, then support, maintain and patch all the subsequent versions for free?

    Joe says: December 9, 2020 at 11:47 am

    That's ridiculous. RHEL has benefitted from the free testing and corner case usage of CentOS users and made money hand-over-fist on RHEL. Shed no tears for using CentOS for free. That is the benefit of opening the core of your product.

    Ljubomir Ljubojevic says: December 9, 2020 at 12:31 pm

    You are missing a very important point. Goal of CentOS project was to rebuild RHEL, nothing else. If money was the problem, they could have asked for donations and it would be clear is there can be financial support for rebuild or not.

    Putting entire community in front of done deal is disheartening and no one will trust Red Hat that they are pro-community, not to mention Red Hat employees that sit in CentOS board, who can trust their integrity after this fiasco?

    Matt Phelps says: December 8, 2020 at 4:12 pm

    This is a breach of trust from the already published timeline of CentOS 8 where the EOL was May 2029. One year's notice for such a massive change is unacceptable.

    Move this approach to CentOS 9

    fahrradflucht says: December 8, 2020 at 5:37 pm

    This! People already started deploying CentOS 8 with the expectation of 10 years of updates. - Even a migration to RHEL 8 would imply completely reprovisioning the systems which is a big ask for systems deployed in the field.

    Gregory Kurtzer says: December 8, 2020 at 4:27 pm

    I am considering creating another rebuild of RHEL and may even be able to hire some people for this effort. If you are interested in helping, please join the HPCng slack (link on the website hpcng.org).

    Greg (original founder of CentOS)

    A says: December 8, 2020 at 7:11 pm

    Not a programmer, but I'd certainly use it. I hope you get it off the ground.

    Michael says: December 8, 2020 at 8:26 pm

    This sounds like a great idea and getting control away from corporate entities like IBM would be helpful. Have you considered reviving the Scientific Linux project?

    Bond Masuda says: December 8, 2020 at 11:53 pm

    Feel free to contact me. I'm a long time RH user (since pre-RHEL when it was RHL) in both server and desktop environments. I've built and maintained some RPMs for some private projects that used CentOS as foundation. I can contribute compute and storage resources. I can program in a few different languages.

    Rex says: December 9, 2020 at 3:46 am

    Dear Greg,

    Thank you for considering starting another RHEL rebuild. If and when you do, please consider making your new website a Brave Verified Content Creator. I earn a little bit of money every month using the Brave browser, and I end up donating it to Wikipedia every month because there are so few Brave Verified websites.

    The verification process is free, and takes about 15 to 30 minutes. I believe that the Brave browser now has more than 8 million users.

    dovla091 says: December 9, 2020 at 10:47 am

    Wikipedia. The so called organization that get tons of money from tech oligarchs and yet the whine about we need money and support? (If you don't believe me just check their biggest donors) also they keen to be insanely biased and allow to write on their web whoever pays the most... Seriously, find other organisation to donate your money

    dan says: December 9, 2020 at 4:00 am

    Please keep us updated. I can't donate much, but I'm sure many would love to donate to this cause.

    Chad Gregory says: December 9, 2020 at 7:21 pm

    Not sure what I could do but I will keep an eye out things I could help with. This change to CentOS really pisses me off as I have stood up 2 CentOS servers for my works production environment in the last year.

    Vasile M says: December 8, 2020 at 8:43 pm

    LOL... CentOS is RH from 2014 to date. What you expected? As long as CentOS is so good and stable, that cuts some of RHEL sales... RH and now IBM just think of profit. It was expected, search the net for comments back in 2014.

    [Dec 10, 2020] Amazon Linux 2

    Dec 10, 2020 | aws.amazon.com

    Amazon Linux 2 is the next generation of Amazon Linux, a Linux server operating system from Amazon Web Services (AWS). It provides a secure, stable, and high performance execution environment to develop and run cloud and enterprise applications. With Amazon Linux 2, you get an application environment that offers long term support with access to the latest innovations in the Linux ecosystem. Amazon Linux 2 is provided at no additional charge.

    Amazon Linux 2 is available as an Amazon Machine Image (AMI) for use on Amazon Elastic Compute Cloud (Amazon EC2). It is also available as a Docker container image and as a virtual machine image for use on Kernel-based Virtual Machine (KVM), Oracle VM VirtualBox, Microsoft Hyper-V, and VMware ESXi. The virtual machine images can be used for on-premises development and testing. Amazon Linux 2 supports the latest Amazon EC2 features and includes packages that enable easy integration with AWS. AWS provides ongoing security and maintenance updates for Amazon Linux 2.
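    The container variant mentioned above is the quickest way to take a look at Amazon Linux 2 without an AWS account. A minimal sketch, assuming Docker is installed locally; `amazonlinux:2` is the tag AWS publishes on Docker Hub:

    ```shell
    # Pull the Amazon Linux 2 image and check how it identifies itself.
    # Requires a running Docker daemon; the first run downloads the image.
    docker run --rm amazonlinux:2 cat /etc/os-release
    ```

    Inside the container, packages are managed with yum from Amazon's own repositories, which is what makes it feel familiar to CentOS/RHEL administrators.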

    [Dec 10, 2020] A letter to IBM brass

    Notable quotes:
    "... Redhat endorsed that moral contract when you brought official support to CentOS back in 2014. ..."
    "... Now that you decided to turn your back on the community, even if another RHEL fork comes out, there will be an exodus of the community. ..."
    "... Also, a lot of smaller developers won't support RHEL anymore because their target weren't big companies, making less and less products available without the need of self supporting RPM builds. ..."
    "... Gregory Kurtzer's fork will take time to grow, but in the meantime, people will need a clear vision of the future. ..."
    "... This means that we'll now have to turn to other linux flavors, like Debian, or OpenSUSE, of which at least some have hardware vendor support too, but with a lesser lifecycle. ..."
    "... I think you destroyed a large part of the RHEL / CentOS community with this move today. ..."
    "... Maybe you'll get more RHEL subscriptions in the next months yielding instant profits, but the long run growth is now far more uncertain. ..."
    Dec 10, 2020 | blog.centos.org

    Orsiris de Jong says: December 9, 2020 at 9:41 am

    Dear IBM,

    As a lot of us here, I've been in the CentOS / RHEL community for more than 10 years.
    Reasons of that choice were stability, long term support and good hardware vendor support.

    Like many others, I've built much of my skills upon this linux flavor for years, and have been implicated into the community for numerous bug reports, bug fixes, and howto writeups.

    Using CentOS was the good alternative to RHEL on a lot of non critical systems, and for smaller companies like the one I work for.

    The moral contract has always been a rock solid "Community Enterprise OS" in exchange of community support, bug reports & fixes, and growing interest from developers.

    Redhat endorsed that moral contract when you brought official support to CentOS back in 2014.

    Now that you decided to turn your back on the community, even if another RHEL fork comes out, there will be an exodus of the community.

    Also, a lot of smaller developers won't support RHEL anymore because their target weren't big companies, making less and less products available without the need of self supporting RPM builds.

    This will make RHEL less and less widely used by startups, enthusiasts and others.

    CentOS Stream being the upstream of RHEL, I highly doubt system architects and developers are willing to be beta testers for RHEL.

    Providing a free RHEL subscription for Open Source projects just sounds like your next step to keep a bit of the exodus from happening, but I'd bet that "free" subscription will get more and more restrictions later on, pushing to a full RHEL support contract.

    As a lot of people here, I won't go the Oracle way, they already did a very good job destroying other company's legacy.

    Gregory Kurtzer's fork will take time to grow, but in the meantime, people will need a clear vision of the future.

    This means that we'll now have to turn to other linux flavors, like Debian, or OpenSUSE, of which at least some have hardware vendor support too, but with a lesser lifecycle.

    I think you destroyed a large part of the RHEL / CentOS community with this move today.

    Maybe you'll get more RHEL subscriptions in the next months yielding instant profits, but the long run growth is now far more uncertain.

    ... ... ...

    [Dec 10, 2020] CentOS will be RHEL's beta, but CentOS denies this

    IBM has a history of taking over companies and turning them into junk, so I am not that surprised. I am surprised that it took IBM brass so long to kill CentOS after the Red Hat acquisition.
    Notable quotes:
    "... By W3Tech 's count, while Ubuntu is the most popular Linux server operating system with 47.5%, CentOS is number two with 18.8% and Debian is third, 17.5%. RHEL? It's a distant fourth with 1.8%. ..."
    "... Red Hat will continue to support CentOS 7 and produce it through the remainder of the RHEL 7 life cycle . That means if you're using CentOS 7, you'll see support through June 30, 2024 ..."
    Dec 10, 2020 | www.zdnet.com

    I'm far from alone. By W3Tech 's count, while Ubuntu is the most popular Linux server operating system with 47.5%, CentOS is number two with 18.8% and Debian is third, 17.5%. RHEL? It's a distant fourth with 1.8%.

    If you think you just realized why Red Hat might want to remove CentOS from the server playing field, you're far from the first to think that.

    Red Hat will continue to support CentOS 7 and produce it through the remainder of the RHEL 7 life cycle . That means if you're using CentOS 7, you'll see support through June 30, 2024

    [Dec 10, 2020] Time to bring back Scientific Linux

    Notable quotes:
    "... I bet Fermilab are thrilled back in 2019 they announced that they wouldn't develop Scientific Linux 8, and focus on CentOS 8 instead. ..."
    Dec 10, 2020 | www.reddit.com

    I bet Fermilab are thrilled; back in 2019 they announced that they wouldn't develop Scientific Linux 8 and would focus on CentOS 8 instead. https://listserv.fnal.gov/scripts/wa.exe?A2=SCIENTIFIC-LINUX-ANNOUNCE;11d6001.1904

    clickwir 19 points· 1 day ago

    Time to bring back Scientific Linux.

    [Dec 10, 2020] CentOS Project: Embraced, extended, extinguished.

    Notable quotes:
    "... My gut feeling is that something like Scientific Linux will make a return and current CentOS users will just use that. ..."
    Dec 10, 2020 | www.reddit.com

    KugelKurt 18 points 1 day ago

    I wonder what Red Hat's plan is WRT companies like Blackmagic Design that ship CentOS as part of their studio equipment.

    The cost of a RHEL license isn't the issue when the overall cost of the equipment is in the tens of thousands, but unless I missed a change in Red Hat's trademark policy, Blackmagic cannot distribute a modified version of RHEL without removing all trademarks first.

    I don't think a rolling release distribution is what BMD wants.

    My gut feeling is that something like Scientific Linux will make a return and current CentOS users will just use that.

    [Dec 10, 2020] Oracle Linux -- A better alternative to CentOS

    Currently limited to CentOS 6 and CentOS 7.
    Dec 10, 2020 | linux.oracle.com
    Oracle Linux: A better alternative to CentOS

    We firmly believe that Oracle Linux is the best Linux distribution on the market today. It's reliable, it's affordable, it's 100% compatible with your existing applications, and it gives you access to some of the most cutting-edge innovations in Linux like Ksplice and DTrace.

    But if you're here, you're a CentOS user. Which means that you don't pay for a distribution at all, for at least some of your systems. So even if we made the best paid distribution in the world (and we think we do), we can't actually get it to you... or can we?

    We're putting Oracle Linux in your hands by doing two things:

    We think you'll like what you find, and we'd love for you to give it a try.

    FAQ

    Wait, doesn't Oracle Linux cost money?
    Oracle Linux support costs money. If you just want the software, it's 100% free. And it's all in our yum repo at yum.oracle.com . Major releases, errata, the whole shebang. Free source code, free binaries, free updates, freely redistributable, free for production use. Yes, we know that this is Oracle, but it's actually free. Seriously.
    Is this just another CentOS?
    Inasmuch as they're both 100% binary-compatible with Red Hat Enterprise Linux, yes, this is just like CentOS. Your applications will continue to work without any modification whatsoever. However, there are several important differences that make Oracle Linux far superior to CentOS.
    How is this better than CentOS?
    Well, for one, you're getting the exact same bits our paying enterprise customers are getting. So that means a few things. Importantly, it means virtually no delay between when Red Hat releases a kernel and when Oracle Linux does:


    Delay in kernel security advisories since January 2018: CentOS vs Oracle Linux; CentOS has large fluctuations in delays

    So if you don't want to risk another CentOS delay, Oracle Linux is a better alternative for you. It turns out that our enterprise customers don't like to wait for updates -- and neither should you.

    What about the code quality?
    Again, you're running the exact same code that our enterprise customers are, so it has to be rock-solid. Unlike CentOS, we have a large paid team of developers, QA, and support engineers that work to make sure this is reliable.
    What if I want support?
    If you're running Oracle Linux and want support, you can purchase a support contract from us (and it's significantly cheaper than support from Red Hat). No reinstallation, no nothing -- remember, you're running the same code as our customers.

    Contrast that with the CentOS/RHEL story. If you find yourself needing to buy support, have fun reinstalling your system with RHEL before anyone will talk to you.

    Why are you doing this?
    This is not some gimmick to get you running Oracle Linux so that you buy support from us. If you're perfectly happy running without a support contract, so are we. We're delighted that you're running Oracle Linux instead of something else.

    At the end of the day, we're proud of the work we put into Oracle Linux. We think we have the most compelling Linux offering out there, and we want more people to experience it.

    How do I make the switch?
    Run the following as root:

    curl -O https://linux.oracle.com/switch/centos2ol.sh
    sh centos2ol.sh

    What versions of CentOS can I switch?
    centos2ol.sh can convert your CentOS 6 and 7 systems to Oracle Linux.
    What does the script do?
    The script has two main functions: it switches your yum configuration to use the Oracle Linux yum server to update some core packages and installs the latest Oracle Unbreakable Enterprise Kernel. That's it! You won't even need to restart after switching, but we recommend you do to take advantage of UEK.
    Is it safe?
    The centos2ol.sh script takes precautions to back up and restore any repository files it changes, so if it does not work on your system it will leave it in working order. If you encounter any issues, please get in touch with us by emailing [email protected] .
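    Even so, before running any conversion script it is reasonable to keep your own copy of the repository configuration. A minimal sketch (the backup location is arbitrary and my own choice, not part of Oracle's procedure):

    ```shell
    # back up the yum/dnf repo definitions before a conversion attempt
    src=/etc/yum.repos.d
    backup_dir=$(mktemp -d)/repo-backup
    mkdir -p "$backup_dir"
    # copy the repo files only if the directory exists on this system
    [ -d "$src" ] && cp -a "$src"/. "$backup_dir"/
    echo "repos backed up to $backup_dir"
    ```

    Restoring is then just a matter of copying the files back should the switch go wrong.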

    [Dec 10, 2020] The demise of CentOs and independent training providers

    Dec 10, 2020 | blog.centos.org

    Anthony Mwai

    says: December 8, 2020 at 8:44 pm

    IBM is messing up RedHat after the take over last year. This is the most unfortunate news to the Free Open-Source community. Companies have been using CentOS as a testing bed before committing to purchase RHEL subscription licenses.

    We need to rethink before rolling out RedHat/CentOS 8 training in our Centre.

    Joe says: December 9, 2020 at 1:03 pm

    You can use Oracle Linux in exactly the same way as you did CentOS except that you have the option of buying support without reinstalling a "commercial" variant.

    Everything's in the public repos except a few addons like ksplice. You don't even have to go through the e-delivery to download the ISOs any more, they're all linked from yum.oracle.com

    TechSmurf says: December 9, 2020 at 12:38 am

    Not likely. Oracle Linux has extensive use by paying Oracle customers as a host OS for their database software and in general purposes for Oracle Cloud Infrastructure.

    Oracle customers would be even less thrilled about Streams than CentOS users. I hate to admit it, but Oracle has the opportunity to take a significant chunk of the CentOS user base if they don't do anything Oracle-ish, myself included.

    I'll be pretty surprised if they don't completely destroy their own windfall opportunity, though.

    David Anderson says: December 8, 2020 at 7:16 pm

    "OEL is literally a rebranded RH."

    So, what's not to like? I also was under the impression that OEL was a paid offering, but apparently this is wrong - https://www.oracle.com/ar/a/ocom/docs/linux/oracle-linux-ds-1985973.pdf - "Oracle Linux is easy to download and completely free to use, distribute, and update."

    Bill Murmor says: December 9, 2020 at 5:04 pm

    So, what's the problem?

    IBM has discontinued CentOS. Oracle is producing a working replacement for CentOS. If, at some point, Oracle attacks their product's users in the way IBM has here, then one can move to Debian, but for now, it's a working solution, as CentOS no longer is.

    k1 says: December 9, 2020 at 7:58 pm

    Because it's a trust issue. RedHat has lost trust. Oracle never had it in the first place.

    [Dec 10, 2020] Oracle has a converter script for CentOS 7. And here is a quick hack to convert CentOS 8 to Oracle Linux

    You can use Oracle Linux exactly like CentOS, only better
    Ang says: December 9, 2020 at 5:04 pm "I never thought we'd see the day Oracle is more trustworthy than RedHat/IBM. But I guess such things do happen with time..."
    Notable quotes:
    "... The link says that you don't have to pay for Oracle Linux . So switching to it from CentOS 8 could be a very easy option. ..."
    "... this quick n'dirty hack worked fine to convert centos 8 to oracle linux 8, ymmv: ..."
    Dec 10, 2020 | blog.centos.org

    Charlie F. says: December 8, 2020 at 6:37 pm

    Oracle has a converter script for CentOS 7, and they will sell you OS support after you run it:

    https://linux.oracle.com/switch/centos/

    It would be nice if Oracle would update that for CentOS 8.

    David Anderson says: December 8, 2020 at 7:15 pm

    The link says that you don't have to pay for Oracle Linux . So switching to it from CentOS 8 could be a very easy option.

    Max Grü says: December 9, 2020 at 2:05 pm

    Oracle Linux is free. The only thing that costs money is support for it. I quote "Yes, we know that this is Oracle, but it's actually free. Seriously."

    Reply
    Phil says: December 9, 2020 at 2:10 pm

    this quick n'dirty hack worked fine to convert centos 8 to oracle linux 8, ymmv:

    repobase=http://yum.oracle.com/repo/OracleLinux/OL8/baseos/latest/x86_64/getPackage
    wget \
    ${repobase}/redhat-release-8.3-1.0.0.1.el8.x86_64.rpm \
    ${repobase}/oracle-release-el8-1.0-1.el8.x86_64.rpm \
    ${repobase}/oraclelinux-release-8.3-1.0.4.el8.x86_64.rpm \
    ${repobase}/oraclelinux-release-el8-1.0-9.el8.x86_64.rpm
    rpm -e centos-linux-release --nodeps
    dnf --disablerepo='*' localinstall ./*rpm 
    :> /etc/dnf/vars/ociregion
    dnf remove centos-linux-repos
    dnf --refresh distro-sync
    # since I wanted to try out the unbreakable enterprise kernel:
    dnf install kernel-uek
    reboot
    dnf remove kernel

    [Dec 10, 2020] Linux Subshells for Beginners With Examples - LinuxConfig.org

    Dec 10, 2020 | linuxconfig.org

    Bash allows two different subshell syntaxes, namely $() and back-tick surrounded statements. Let's look at some easy examples to start:

    $ echo '$(echo 'a')'
    $(echo a)
    $ echo "$(echo 'a')"
    a
    $ echo "a$(echo 'b')c"
    abc
    $ echo "a`echo 'b'`c"
    abc
    

    In the first command, as an example, we used ' single quotes. This resulted in our subshell command, inside the single quotes, being interpreted as literal text instead of a command. This is standard Bash: ' indicates literal, " indicates that the string will be parsed for subshells and variables.

    In the second command we swap the ' to " and thus the string is parsed for actual commands and variables. The result is that a subshell is being started, thanks to our subshell syntax ( $() ), and the command inside the subshell ( echo 'a' ) is being executed literally, and thus an a is produced, which is then inserted in the overarching / top level echo . The command at that stage can be read as echo "a" and thus the output is a .

    In the third command, we further expand this to make it clearer how subshells work in-context. We echo the letter b inside the subshell, and this is joined on the left and the right by the letters a and c yielding the overall output to be abc in a similar fashion to the second command.

    In the fourth and last command, we exemplify the alternative Bash subshell syntax of using back-ticks instead of $() . It is important to know that $() is the preferred syntax, and that in some remote cases the back-tick based syntax may yield some parsing errors where the $() does not. I would thus strongly encourage you to always use the $() syntax for subshells, and this is also what we will be using in the following examples.
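    The difference shows up as soon as you nest: back-ticks must be escaped at each inner level, while $() nests cleanly. A quick illustration:

    ```shell
    # $() nests without any escaping
    echo "$(echo "$(date +%Y)")"
    # back-ticks need a backslash for each inner level
    echo "`echo \`date +%Y\``"
    # both print the current year, but the second form gets unreadable fast
    ```

    With one more level of nesting, the back-tick form needs doubled backslashes, which is exactly the kind of parsing trouble mentioned above.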

    Example 2: A little more complex
    $ touch a
    $ echo "-$(ls [a-z])"
    -a
    $ echo "-=-||$(ls [a-z] | xargs ls -l)||-=-"
    -=-||-rw-rw-r-- 1 roel roel 0 Sep  5 09:26 a||-=-
    

    Here, we first create an empty file by using the touch a command. Subsequently, we use echo to output something which our subshell $(ls [a-z]) will generate. Sure, we can execute the ls directly and yield more or less the same result, but note how we are adding - to the output as a prefix.

    In the final command, we insert some characters at the front and end of the echo command which makes the output look a bit nicer. We use a subshell to first find the a file we created earlier ( ls [a-z] ) and then - still inside the subshell - pass the results of this command (which would be only a literally - i.e. the file we created in the first command) to the ls -l using the pipe ( | ) and the xargs command. For more information on xargs, please see our articles xargs for beginners with examples and multi threaded xargs with examples .

    Example 3: Double quotes inside subshells and sub-subshells!
    echo "$(echo "$(echo "it works")" | sed 's|it|it surely|')"
    it surely works
    

    Cool, no? Here we see that double quotes can be used inside the subshell without generating any parsing errors. We also see how a subshell can be nested inside another subshell. Are you able to parse the syntax? The easiest way is to start "in the middle or core of all subshells", which in this case would be the simple echo "it works" .

    This command will output it works as a result of the subshell call $(echo "it works") . Picture it works in place of the subshell, i.e.

    echo "$(echo "it works" | sed 's|it|it surely|')"
    it surely works
    

    This looks simpler already. Next, it is helpful to know that the sed command will perform a substitution (thanks to the s command just before the | separator) of the text it with it surely . You can read the sed command as replace __it__ with __it surely__. The output of the subshell will thus be it surely works , i.e.

    echo "it surely works"
    it surely works
    
    Conclusion

    In this article, we have seen that subshells surely work (pun intended), and that they can be used in a wide variety of circumstances, due to their ability to be inserted inline and within the context of the overarching command. Subshells are very powerful and once you start using them, well, there will likely be no stopping. Very soon you will be writing something like:

    $ VAR="goodbye"; echo "thank $(echo "${VAR}" | sed 's|^| and |')" | sed 's|k |k you|'
    

    This one is for you to try and play around with! Thank you and goodbye
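    If you get stuck, here is one way to trace that command step by step:

    ```shell
    VAR="goodbye"
    # step 1: the inner subshell prepends " and " to the variable
    echo "${VAR}" | sed 's|^| and |'
    # step 2: the outer echo therefore sees: thank  and goodbye
    # step 3: the final sed rewrites the first "k " into "k you"
    echo "thank $(echo "${VAR}" | sed 's|^| and |')" | sed 's|k |k you|'
    # -> thank you and goodbye
    ```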

    [Dec 10, 2020] Top 10 Awesome Linux Screen Tricks by Isaias Irizarry

    May 13, 2011 | blog.urfix.com

    Screen or as I like to refer to it "Admin's little helper"
    Screen is a window manager that multiplexes a physical terminal between several processes

    Here are a couple of quick reasons you might use screen.

    Let's say you have an unreliable internet connection: with screen, if you get knocked out of your current session, you can always reconnect to it.

    Or let's say you need more terminals. Instead of opening a new terminal or a new tab, just create a new terminal inside screen.

    Here are the screen shortcuts to help you on your way: Screen shortcuts

    and here are some of the Top 10 Awesome Linux Screen tips urfix.com uses all the time if not daily.

    1) Attach screen over ssh
    ssh -t remote_host screen -r
    

    Directly attach a remote screen session (saves a useless parent bash process)

    2) Share a terminal screen with others
    % screen -r someuser/
    
    3) Triple monitoring in screen
    tmpfile=$(mktemp) && echo -e 'startup_message off\nscreen -t top htop\nsplit\nfocus\nscreen -t nethogs nethogs wlan0\nsplit\nfocus\nscreen -t iotop iotop' > $tmpfile && sudo screen -c $tmpfile
    
    

    This command starts screen with 'htop', 'nethogs' and 'iotop' in split-screen. You have to have these three commands (of course) and specify the interface for nethogs – mine is wlan0, I could have acquired the interface from the default route extending the command but this way is simpler.

    htop is a wonderful top replacement with many interactive commands and configuration options. nethogs is a program which tells which processes are using the most bandwidth. iotop tells which processes are using the most I/O.

    The command creates a temporary "screenrc" file which it uses for doing the triple-monitoring. You can see several examples of screenrc files here: http://www.softpanorama.org/Utilities/Screen/screenrc_examples.shtml

    4) Share a 'screen'-session
    screen -x
    

    After person A starts his screen session with `screen`, person B can attach to person A's screen with `screen -x`. Good to know if you need to give or receive support from/to others.

    5) Start screen in detached mode
    screen -d -m [<command>]
    

    Start screen in detached mode, i.e., already running in the background. The command is optional, but what would be the point of starting a blank screen process that way?
    It's useful when invoking from a script (I manage to run many wget downloads in parallel, for example).
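    A sketch of that pattern (the URLs are hypothetical placeholders, and the actual screen call is commented out so the loop can be read on its own):

    ```shell
    # launch each download in its own named, detached screen session
    urls="http://example.com/a.iso http://example.com/b.iso"
    for url in $urls; do
        name="dl-$(basename "$url")"
        echo "starting session $name"
        # screen -d -m -S "$name" wget "$url"   # uncomment on a real host
    done
    # afterwards: screen -ls lists the sessions, screen -r dl-a.iso attaches
    ```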

    6) Resume a detached screen session, resizing to fit the current terminal
    screen -raAd
    

    By default, screen tries to restore its old window sizes when attaching to resizable terminals. This command is the command-line equivalent of typing ^A F to fit an open screen session to the window.

    7) use screen as a terminal emulator to connect to serial consoles
    screen /dev/tty<device> 9600
    

    Use GNU/screen as a terminal emulator for anything serial console related.

    screen /dev/tty

    eg.

    screen /dev/ttyS0 9600

    8) ssh and attach to a screen in one line.
    ssh -t user@host screen -x <screen name>
    

    If you know the benefits of screen, then this might come in handy for you. Instead of ssh'ing into a machine and then running a screen command, this can all be done on one line. Just have the person on the machine you're ssh'ing into run something like
    screen -S debug
    Then you would run
    ssh -t user@host screen -x debug
    and be attached to the same screen session.

    9) connect to all screen instances running
    screen -ls | grep pts | gawk '{ split($1, x, "."); print x[1] }' | while read i; do gnome-terminal -e screen\ -dx\ $i; done
    

    connects to all the screen instances running.

    10) Quick enter into a single screen session
    alias screenr='screen -r $(screen -ls | egrep -o -e '[0-9]+' | head -n 1)'
    

    There you have 'em, folks: the top 10 screen commands. Enjoy!

    [Dec 10, 2020] Possibility to change only year or only month in date

    Jan 01, 2017 | unix.stackexchange.com

    asked Aug 22 '14 at 9:40 by SHW

    Christian Severin , 2017-09-29 09:47:52

    You can use e.g. date --set='-2 years' to set the clock back two years, leaving all other elements identical. You can change month and day of month the same way. I haven't checked what happens if that calculation results in a datetime that doesn't actually exist, e.g. during a DST switchover, but the behaviour ought to be identical to the usual "set both date and time to concrete values" behaviour. – Christian Severin Sep 29 '17 at 9:47

    Michael Homer , 2014-08-22 09:44:23

    Use date -s :
    date -s '2014-12-25 12:34:56'
    

    Run that as root or under sudo . Changing only one of the year/month/day is more of a challenge and will involve repeating bits of the current date. There are also GUI date tools built in to the major desktop environments, usually accessed through the clock.

    To change only part of the time, you can use command substitution in the date string:

    date -s "2014-12-25 $(date +%H:%M:%S)"
    

    will change the date, but keep the time. See man date for formatting details to construct other combinations: the individual components are %Y , %m , %d , %H , %M , and %S .
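    Combining those specifiers, you can build a string that changes only one component. A sketch that swaps in just the year (2022 is an arbitrary example; the final date -s still needs root):

    ```shell
    new_year=2022
    # keep today's month, day and time; substitute only the year
    target="${new_year}-$(date +%m-%d) $(date +%H:%M:%S)"
    echo "$target"
    # then, as root: date -s "$target"
    ```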


    I don't want to change the time – SHW Aug 22 '14 at 9:51

    Michael Homer , 2014-08-22 09:55:00

    There's no option to do that. You can use date -s "2014-12-25 $(date +%H:%M:%S)" to change the date and reuse the current time, though. – Michael Homer Aug 22 '14 at 9:55

    chaos , 2014-08-22 09:59:58

    System time

    You can use date to set the system date. The GNU implementation of date (as found on most non-embedded Linux-based systems) accepts many different formats to set the time, here a few examples:

    set only the year:

    date -s 'next year'
    date -s 'last year'
    

    set only the month:

    date -s 'last month'
    date -s 'next month'
    

    set only the day:

    date -s 'next day'
    date -s 'tomorrow'
    date -s 'last day'
    date -s 'yesterday'
    date -s 'friday'
    

    set all together:

    date -s '2009-02-13 11:31:30' #that's a magical timestamp
    

    Hardware time

    Now the system time is set, but you may want to sync it with the hardware clock:

    Use --show to print the hardware time:

    hwclock --show
    

    You can set the hardware clock to the current system time:

    hwclock --systohc
    

    Or the system time to the hardware clock

    hwclock --hctosys
    


    garethTheRed , 2014-08-22 09:57:11

    You change the date with the date command. However, the command expects a full date as the argument:
    # date -s "20141022 09:45"
    Wed Oct 22 09:45:00 BST 2014
    

    To change part of the date, output the current date with the date part that you want to change as a string and all others as date formatting variables. Then pass that to the date -s command to set it:

    # date -s "$(date +'%Y12%d %H:%M')"
    Mon Dec 22 10:55:03 GMT 2014
    

    changes the month to the 12th month - December.

    The date formats are:

    Balmipour , 2016-03-23 09:10:21

    For those like me running ESXi 5.1, here's what the system answered:
    ~ # date -s "2016-03-23 09:56:00"
    date: invalid date '2016-03-23 09:56:00'
    

    I had to use a specific ESXi command instead:

    esxcli system time set  -y 2016 -M 03 -d 23  -H 10 -m 05 -s 00
    

    Hope it helps !


    Brook Oldre , 2017-09-26 20:03:34

    I used the date command and time format listed below to successfully set the date from the terminal shell on Android Things, which uses the Linux kernel.

    date 092615002017.00

    MMDDHHMMYYYY.SS

    MM - Month - 09

    DD - Day - 26

    HH - Hour - 15

    MM - Min - 00

    YYYY - Year - 2017

    .SS - second - 00
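    The same MMDDhhmmYYYY.ss string can be generated from the current clock, which is a handy way to check the field order (a sketch; actually applying it needs root):

    ```shell
    # produce the MMDDhhmmYYYY.ss string the embedded date command expects
    stamp=$(date +%m%d%H%M%Y.%S)
    echo "$stamp"
    # then, as root: date "$stamp"
    ```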


    [Dec 09, 2020] Is Oracle A Real Alternative To CentOS

    Notable quotes:
    "... massive amount of extra packages and full rebuild of EPEL (same link): https://yum.oracle.com/oracle-linux-8.html ..."
    Dec 09, 2020 | centosfaq.org

    Is Oracle A Real Alternative To CentOS? December 8, 2020 | Frank Cox | CentOS | 33 Comments

    Is Oracle a real alternative to CentOS ? I'm asking because genuinely don't know; I've never paid any attention to Oracle's Linux offering before now.

    But today I've seen a couple of the folks here mention Oracle Linux and I see that Oracle even offers a script to convert CentOS 7 to Oracle. Nothing about CentOS 8 in that script, though.

    https://linux.oracle.com/switch/centos/

    That page seems to say that Oracle Linux is everything that CentOS was prior to today's announcement.

    But someone else here just said that the first thing Oracle Linux does is to sign you up for an Oracle account.

    So, for people who know a lot more about these things than I do, what's the downside of using Oracle Linux versus CentOS? I assume that things like epel/rpmfusion/etc will work just as they do under CentOS since it's supposed to be bit-for-bit compatible like CentOS was. What does the "sign up with Oracle" stuff actually do, and can you cancel, avoid, or strip it out if you don't want it?

    Based on my extremely limited knowledge around Oracle Linux, it sounds like that might be a go-to solution for CentOS refugees.

    But is it, really?

    Karl Vogel says: December 9, 2020 at 3:05 am

    ... ... ..

    Go to https://linux.oracle.com/switch/CentOS/ , poke around a bit, and you end up here:
    https://yum.oracle.com/oracle-linux-downloads.html

    I just went to the ISO page and I can grab whatever I like without signing up for anything, so nothing's changed since I first used it.

    ... ... ...

    Gianluca Cecchi says: December 9, 2020 at 3:30 am

    [snip]

    Only to point out that while in CentOS (8.3, but the same in 7.x) the situation is like this:

    [g.cecchi@skull8 ~]$ ll /etc/redhat-release /etc/centos-release
    -rw-r--r-- 1 root root 30 Nov 10 16:49 /etc/centos-release
    lrwxrwxrwx 1 root root 14 Nov 10 16:49 /etc/redhat-release -> centos-release
    [g.cecchi@skull8 ~]$ cat /etc/centos-release
    CentOS Linux release 8.3.2011

    in Oracle Linux (eg 7.7) you get two different files:

    $ ll /etc/redhat-release /etc/oracle-release
    -rw-r--r-- 1 root root 32 Aug 8 2019 /etc/oracle-release
    -rw-r--r-- 1 root root 52 Aug 8 2019 /etc/redhat-release
    $ cat /etc/redhat-release
    Red Hat Enterprise Linux Server release 7.7 (Maipo)
    $ cat /etc/oracle-release
    Oracle Linux Server release 7.7

    This is generally done so that software officially certified only for the upstream enterprise vendor, which tests the contents of the redhat-release file, is still satisfied. Using the lsb_release command on an Oracle Linux 7.6 machine:

    # lsb_release -a
    LSB Version:    :core-4.1-amd64:core-4.1-noarch
    Distributor ID: OracleServer
    Description:    Oracle Linux Server release 7.6
    Release:        7.6
    Codename:       n/a
    #
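    Those release files give scripts a simple way to tell which EL flavor they are running on. A hedged sketch based on the files described above (the fallback message is my own):

    ```shell
    # report the distro using the release files discussed above
    if [ -f /etc/oracle-release ]; then
        cat /etc/oracle-release
    elif [ -f /etc/centos-release ]; then
        cat /etc/centos-release
    elif [ -f /etc/redhat-release ]; then
        cat /etc/redhat-release
    else
        echo "not an EL-family system"
    fi
    ```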

    Gianluca

    Rainer Traut says: December 9, 2020 at 4:18 am

    On 08.12.20 at 18:54, Frank Cox wrote:

    Yes, it is better than CentOS and in some aspects better than RHEL:

    faster security updates than CentOS, directly behind RHEL
    better kernels than RHEL and CentOS (UEKs) with more features
    free to download (no subscription needed):
    https://yum.oracle.com/oracle-linux-isos.html
    free to use:
    https://yum.oracle.com/oracle-linux-8.html
    massive amount of extra packages and full rebuild of EPEL (same link): https://yum.oracle.com/oracle-linux-8.html

    Rainer Traut says: December 9, 2020 at 4:26 am

    Hi,

On 08.12.20 at 19:03, Jon Pruente wrote:

    KVM is a subscription feature. They want you to run Oracle VM Server for x86 (which is based on Xen) so they can try to upsell you to use the Oracle Cloud. There's other things, but that stood out immediately.

    Oracle Linux FAQ (PDF): https://www.oracle.com/a/ocom/docs/027617.pdf

    There is no subscription needed. All needed repositories for the oVirt based virtualization are freely available.

    https://docs.oracle.com/en/virtualization/oracle-linux-virtualization-manager/getstart/manager-install.html#manager-install-prepare

    Rainer Traut says: December 10, 2020 at 4:40 am

On 09.12.20 at 17:52, Frank Cox wrote:

    I'll try to answer best to my knowledge.

    I have an oracle account but never used it for/with Oracle linux. There are oracle communities where you need an oracle account: https://community.oracle.com/tech/apps-infra/categories/oracle_linux

    Niki Kovacs says: December 10, 2020 at 10:22 am

On 10/12/2020 at 17:18, Frank Cox wrote:

    That's it. I know Oracle's history, but I think for Oracle Linux, they may be much better than their reputation. I'm currently fiddling around with it, and I like it very much. Plus there's a nice script to turn an existing CentOS installation into an Oracle Linux system.

    Cheers,

    Niki

    --
    Microlinux Solutions informatiques durables
7, place de l'église 30730 Montpezat Site : https://www.microlinux.fr Blog : https://blog.microlinux.fr Mail : [email protected] Tél. : 04 66 63 10 32
    Mob. : 06 51 80 12 12

    Ljubomir Ljubojevic says: December 10, 2020 at 12:53 pm

    There is always Springdale Linux made by Princeton University: https://puias.math.ias.edu/

    Johnny Hughes says: December 10, 2020 at 4:10 pm

On 10.12.20 at 19:53, Ljubomir Ljubojevic wrote:

    I did a conversion of a test webserver from C8 to Springdale. It went smoothly.

    Niki Kovacs says: December 12, 2020 at 11:29 am

On 08/12/2020 at 18:54, Frank Cox wrote:

    I spent the last three days experimenting with it. Here's my take on it: https://blog.microlinux.fr/migration-CentOS-oracle-linux/

    tl;dr: Very nice if you don't have any qualms about the company.

    Cheers,

    Niki

    --
Microlinux Solutions informatiques durables 7, place de l'église 30730 Montpezat Site : https://www.microlinux.fr Blog : https://blog.microlinux.fr Mail : [email protected] Tél. : 04 66 63 10 32
    Mob. : 06 51 80 12 12

    Frank Cox says: December 12, 2020 at 11:52 am

    That's a really excellent article, Nicholas. Thanks ever so much for posting about your experience.

    Peter Huebner says: December 15, 2020 at 5:07 am

On Tuesday, 15.12.2020 at 10:14 +0100, Ruslanas Gibovskis wrote:

    According to the Oracle license terms and official statements, it is "free to download, use and share. There is no license cost, no need for a contract, and no usage audits."

    Recommendation only: "For business-critical infrastructure, consider Oracle Linux Support." Only optional, not a mandatory requirement. see: https://www.oracle.com/linux

No need for such a construct. Oracle Linux can be used on any production system without the legal requirement to obtain an extra commercial license, same as with CentOS.

So Oracle Linux can currently be used free as in "free beer" on any system, even for commercial purposes. Nevertheless, Oracle can change the license terms in the future, but this applies equally to all other company-backed Linux distributions.
    --
    Peter Huebner

    [Nov 29, 2020] Provisioning a system

    Nov 29, 2020 | opensource.com

    We've gone over several things you can do with Ansible on your system, but we haven't yet discussed how to provision a system. Here's an example of provisioning a virtual machine (VM) with the OpenStack cloud solution.

- name: create a VM in openstack
  os_server:
    name: cloudera-namenode
    state: present
    cloud: openstack
    region_name: andromeda
    image: 923569a-c777-4g52-t3y9-cxvhl86zx345
    flavor_ram: 20146
    flavor: big
    auto_ip: yes
    volumes: cloudera-namenode

All OpenStack modules start with os_, which makes them easier to find. The above configuration uses the os_server module, which lets you add or remove an instance. It includes the name of the VM, its state, its cloud options, and how it authenticates to the API. More information about clouds.yaml is available in the OpenStack docs, but if you don't want to use clouds.yaml, you can use a dictionary that lists your credentials using the auth option. If you want to delete the VM, just change state: present to state: absent.
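If you would rather pass credentials inline than maintain a clouds file, the auth option accepts a dictionary. A minimal sketch, where the endpoint, project, and password variable are hypothetical values:

```yaml
- name: create a VM in openstack using inline credentials
  os_server:
    name: cloudera-namenode
    state: present
    auth:
      auth_url: https://openstack.example.com:5000/v3   # assumed Keystone endpoint
      username: admin
      password: "{{ openstack_password }}"              # hypothetical vaulted variable
      project_name: demo
    image: 923569a-c777-4g52-t3y9-cxvhl86zx345
    flavor: big
```

Keeping the password in an Ansible Vault variable, as hinted above, avoids hardcoding credentials in the playbook.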

    Say you have a list of servers you shut down because you couldn't figure out how to get the applications working, and you want to start them again. You can use os_server_action to restart them (or rebuild them if you want to start from scratch).

    Here is an example that starts the server and tells the modules the name of the instance:

- name: restart some servers
  os_server_action:
    action: start
    cloud: openstack
    region_name: andromeda
    server: cloudera-namenode

    Most OpenStack modules use similar options. Therefore, to rebuild the server, we can use the same options but change the action to rebuild and add the image we want it to use:

os_server_action:
  action: rebuild
  image: 923569a-c777-4g52-t3y9-cxvhl86zx345

    [Nov 29, 2020] bootstrap.yml

    Nov 29, 2020 | opensource.com

    For this laptop experiment, I decided to use Debian 32-bit as my starting point, as it seemed to work best on my older hardware. The bootstrap YAML script is intended to take a bare-minimal OS install and bring it up to some standard. It relies on a non-root account to be available over SSH and little else. Since a minimal OS install usually contains very little that is useful to Ansible, I use the following to hit one host and prompt me to log in with privilege escalation:

    $ ansible-playbook bootstrap.yml -i '192.168.0.100,' -u jfarrell -Kk
    

    The script makes use of Ansible's raw module to set some base requirements. It ensures Python is available, upgrades the OS, sets up an Ansible control account, transfers SSH keys, and configures sudo privilege escalation. When bootstrap completes, everything should be in place to have this node fully participate in my larger Ansible inventory. I've found that bootstrapping bare-minimum OS installs is nuanced (if there is interest, I'll write another article on this topic).
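A minimal sketch of what such a bootstrap play could look like, assuming a Debian target and a hypothetical control account named "ansible" (the raw module runs commands without requiring Python on the target, which is why it comes first):

```yaml
- name: bootstrap a bare-minimal Debian install
  hosts: all
  gather_facts: false    # fact gathering needs Python, which may not exist yet
  become: yes
  tasks:
    - name: ensure Python is available for regular Ansible modules
      raw: test -e /usr/bin/python3 || (apt-get update && apt-get install -y python3)
      changed_when: false

    - name: create the Ansible control account (hypothetical name)
      user:
        name: ansible
        shell: /bin/bash

    - name: grant passwordless sudo to the control account
      copy:
        content: "ansible ALL=(ALL) NOPASSWD: ALL\n"
        dest: /etc/sudoers.d/ansible
        mode: "0440"
        validate: /usr/sbin/visudo -cf %s
```

SSH key transfer (for example via authorized_key) would follow the same pattern; the validate step guards against writing a broken sudoers fragment.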

    The account YAML setup script is used to set up (or reset) user accounts for each family member. This keeps user IDs (UIDs) and group IDs (GIDs) consistent across the small number of machines we have, and it can be used to fix locked accounts when needed. Yes, I know I could have set up Network Information Service or LDAP authentication, but the number of accounts I have is very small, and I prefer to keep these systems very simple. Here is an excerpt I found especially useful for this:

---
- name: Set user accounts
  hosts: all
  gather_facts: false
  become: yes
  vars_prompt:
    - name: passwd
      prompt: "Enter the desired ansible password:"
      private: yes

  tasks:
    - name: Add child 1 account
      user:
        state: present
        name: child1
        password: "{{ passwd | password_hash('sha512') }}"
        comment: Child One
        uid: 888
        group: users
        shell: /bin/bash
        generate_ssh_key: yes
        ssh_key_bits: 2048
        update_password: always
        create_home: yes
    The vars_prompt section prompts me for a password, which is put to a Jinja2 transformation to produce the desired password hash. This means I don't need to hardcode passwords into the YAML file and can run it to change passwords as needed.

    The software installation YAML file is still evolving. It includes a base set of utilities for the sysadmin and then the stuff my users need. This mostly consists of ensuring that the same graphical user interface (GUI) interface and all the same programs, games, and media files are installed on each machine. Here is a small excerpt of the software for my young children:

- name: Install kids software
  apt:
    name: "{{ packages }}"
    state: present
  vars:
    packages:
      - lxde
      - childsplay
      - tuxpaint
      - tuxtype
      - pysycache
      - pysiogame
      - lmemory
      - bouncy

    I created these three Ansible scripts using a virtual machine. When they were perfect, I tested them on the D620. Then converting the Mini 9 was a snap; I simply loaded the same minimal Debian install then ran the bootstrap, accounts, and software configurations. Both systems then functioned identically.

    For a while, both sisters enjoyed their respective computers, comparing usage and exploring software features.

    The moment of truth

    A few weeks later came the inevitable. My older daughter finally came to the conclusion that her pink Dell Mini 9 was underpowered. Her sister's D620 had superior power and screen real estate. YouTube was the new rage, and the Mini 9 could not keep up. As you can guess, the poor Mini 9 fell into disuse; she wanted a new machine, and sharing her younger sister's would not do.

    I had another D620 in my pile. I replaced the BIOS battery, gave it a new SSD, and upgraded the RAM. Another perfect example of breathing new life into old hardware.

    I pulled my Ansible scripts from source control, and everything I needed was right there: bootstrap, account setup, and software. By this time, I had forgotten a lot of the specific software installation information. But details like account UIDs and all the packages to install were all clearly documented and ready for use. While I surely could have figured it out by looking at my other machines, there was no need to spend the time! Ansible had it all clearly laid out in YAML.

    Not only was the YAML documentation valuable, but Ansible's automation made short work of the new install. The minimal Debian OS install from USB stick took about 15 minutes. The subsequent shape up of the system using Ansible for end-user deployment only took another nine minutes. End-user acceptance testing was successful, and a new era of computing calmness was brought to my family (other parents will understand!).

    Conclusion

    Taking the time to learn and practice Ansible with this exercise showed me the true value of its automation and documentation abilities. Spending a few hours figuring out the specifics for the first example saves time whenever I need to provision or fix a machine. The YAML is clear, easy to read, and -- thanks to Ansible's idempotency -- easy to test and refine over time. When I have new ideas or my children have new requests, using Ansible to control a local virtual machine for testing is a valuable time-saving tool.

Doing sysadmin tasks in your free time can be fun. Spending the time to automate and document your work pays rewards in the future; instead of needing to investigate and relearn a bunch of things you've already solved, Ansible keeps your work documented and ready to apply so you can move on to other, newer fun things!

    [Nov 25, 2020] What you need to know about Ansible modules by Jairo da Silva Junior

    Mar 04, 2019 | opensource.com

    Ansible works by connecting to nodes and sending small programs called modules to be executed remotely. This makes it a push architecture, where configuration is pushed from Ansible to servers without agents, as opposed to the pull model, common in agent-based configuration management systems, where configuration is pulled.

    These modules are mapped to resources and their respective states , which are represented in YAML files. They enable you to manage virtually everything that has an API, CLI, or configuration file you can interact with, including network devices like load balancers, switches, firewalls, container orchestrators, containers themselves, and even virtual machine instances in a hypervisor or in a public (e.g., AWS, GCE, Azure) and/or private (e.g., OpenStack, CloudStack) cloud, as well as storage and security appliances and system configuration.

    With Ansible's batteries-included model, hundreds of modules are included and any task in a playbook has a module behind it.

The contract for building modules is simple: JSON on stdout. The configurations declared in YAML files are delivered over the network via SSH/WinRM -- or any other connection plugin -- as small scripts to be executed on the target server(s). Modules can be written in any language capable of returning JSON, although most Ansible modules (except for Windows PowerShell ones) are written in Python using the Ansible API (this eases the development of new modules).

    Modules are one way of expanding Ansible capabilities. Other alternatives, like dynamic inventories and plugins, can also increase Ansible's power. It's important to know about them so you know when to use one instead of the other.

    Plugins are divided into several categories with distinct goals, like Action, Cache, Callback, Connection, Filters, Lookup, and Vars. The most popular plugins are:

    Ansible's official docs are a good resource on developing plugins .

    When should you develop a module?

    Although many modules are delivered with Ansible, there is a chance that your problem is not yet covered or it's something too specific -- for example, a solution that might make sense only in your organization. Fortunately, the official docs provide excellent guidelines on developing modules .

    IMPORTANT: Before you start working on something new, always check for open pull requests, ask developers at #ansible-devel (IRC/Freenode), or search the development list and/or existing working groups to see if a module exists or is in development.

    Signs that you need a new module instead of using an existing one include:

    In the ideal scenario, the tool or service already has an API or CLI for management, and it returns some sort of structured data (JSON, XML, YAML).

    Identifying good and bad playbooks
    "Make love, but don't make a shell script in YAML."

    So, what makes a bad playbook?

- name: Read a remote resource
  command: "curl -v http://xpto/resource/abc"
  register: resource
  changed_when: False

- name: Create a resource in case it does not exist
  command: "curl -X POST http://xpto/resource/abc -d '{ config:{ client: xyz, url: http://beta, pattern: *.* } }'"
  when: "resource.stdout | 404"

# Leave it here in case I need to remove it hehehe
#- name: Remove resource
#  command: "curl -X DELETE http://xpto/resource/abc"
#  when: resource.stdout == 1

    Aside from being very fragile -- what if the resource state includes a 404 somewhere? -- and demanding extra code to be idempotent, this playbook can't update the resource when its state changes.

    Playbooks written this way disrespect many infrastructure-as-code principles. They're not readable by human beings, are hard to reuse and parameterize, and don't follow the declarative model encouraged by most configuration management tools. They also fail to be idempotent and to converge to the declared state.

    Bad playbooks can jeopardize your automation adoption. Instead of harnessing configuration management tools to increase your speed, they have the same problems as an imperative automation approach based on scripts and command execution. This creates a scenario where you're using Ansible just as a means to deliver your old scripts, copying what you already have into YAML files.

    Here's how to rewrite this example to follow infrastructure-as-code principles.

- name: XPTO
  xpto:
    name: abc
    state: present
    config:
      client: xyz
      url: http://beta
      pattern: "*.*"

    The benefits of this approach, based on custom modules , include:

    Implementing a custom module

    Let's use WildFly , an open source Java application server, as an example to introduce a custom module for our not-so-good playbook:

- name: Read datasource
  command: "jboss-cli.sh -c '/subsystem=datasources/data-source=DemoDS:read-resource()'"
  register: datasource

- name: Create datasource
  command: "jboss-cli.sh -c '/subsystem=datasources/data-source=DemoDS:add(driver-name=h2, user-name=sa, password=sa, min-pool-size=20, max-pool-size=40, connection-url=.jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE..)'"
  when: 'datasource.stdout | outcome => failed'

    Problems:

    A custom module for this would look like:

- name: Configure datasource
  jboss_resource:
    name: "/subsystem=datasources/data-source=DemoDS"
    state: present
    attributes:
      driver-name: h2
      connection-url: "jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE"
      jndi-name: "java:jboss/datasources/DemoDS"
      user-name: sa
      password: sa
      min-pool-size: 20
      max-pool-size: 40

    This playbook is declarative, idempotent, more readable, and converges to the desired state regardless of the current state.

    Why learn to build custom modules?

    Good reasons to learn how to build custom modules include:

    " abstractions save us time working, but they don't save us time learning." -- Joel Spolsky, The Law of Leaky Abstractions
    Custom Ansible modules 101 The Ansible way An alternative: drop it in the library directory library/ # if any custom modules, put them here (optional)
    module_utils/ # if any custom module_utils to support modules, put them here (optional)
    filter_plugins/ # if any custom filter plugins, put them here (optional)

    site.yml # master playbook
    webservers.yml # playbook for webserver tier
    dbservers.yml # playbook for dbserver tier

    roles/
    common/ # this hierarchy represents a "role"
    library/ # roles can also include custom modules
    module_utils/ # roles can also include custom module_utils
    lookup_plugins/ # or other types of plugins, like lookup in this case

    TIP: You can use this directory layout to overwrite existing modules if, for example, you need to patch a module.

    First steps

You could do it on your own -- including using another language -- or you could use the AnsibleModule class, as it makes it easier to emit JSON on stdout ( exit_json(), fail_json() ) in the way Ansible expects ( msg, meta, has_changed, result ), and it's also easier to process the input ( params[] ) and log its execution ( log(), debug() ).

def main():
    arguments = dict(name=dict(required=True, type='str'),
                     state=dict(choices=['present', 'absent'], default='present'),
                     config=dict(required=False, type='dict'))

    module = AnsibleModule(argument_spec=arguments, supports_check_mode=True)
    try:
        if module.check_mode:
            # Do not do anything; only verify the current state and report it
            module.exit_json(changed=has_changed, meta=result, msg='Did something or not...')

        if module.params['state'] == 'present':
            # Verify the presence of the resource:
            # is the desired state `module.params['param_name']` equal to the current state?
            module.exit_json(changed=has_changed, meta=result)

        if module.params['state'] == 'absent':
            # Remove the resource in case it exists
            module.exit_json(changed=has_changed, meta=result)

    except Error as err:
        module.fail_json(msg=str(err))

NOTES: check_mode ("dry run") allows a playbook to be executed without performing any changes; it only verifies whether changes would be required and reports them. Also, the module_utils directory can be used for code shared among different modules.

    For the full Wildfly example, check this pull request .

    Running tests The Ansible way

    The Ansible codebase is heavily tested, and every commit triggers a build in its continuous integration (CI) server, Shippable , which includes linting, unit tests, and integration tests.

    For integration tests, it uses containers and Ansible itself to perform the setup and verify phase. Here is a test case (written in Ansible) for our custom module's sample code:

- name: Configure datasource
  jboss_resource:
    name: "/subsystem=datasources/data-source=DemoDS"
    state: present
    attributes:
      connection-url: "jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE"
      ...
  register: result

- name: assert output message that datasource was created
  assert:
    that:
      - "result.changed == true"
      - "'Added /subsystem=datasources/data-source=DemoDS' in result.msg"

An alternative: bundling a module with your role

    Here is a full example inside a simple role:

Molecule + Vagrant + pytest: molecule init (inside roles/)

    It offers greater flexibility to choose:

    But your tests would have to be written using pytest with Testinfra or Goss, instead of plain Ansible. If you'd like to learn more about testing Ansible roles, see my article about using Molecule .


    [Nov 25, 2020] My top 5 Ansible modules by Mark Phillips

    Nov 25, 2019 | opensource.com

    5. authorized_key

Secure shell (SSH) is at the heart of Ansible, at least for almost everything besides Windows. Key (no pun intended) to using SSH efficiently with Ansible is keys! Slight aside -- there are a lot of very cool things you can do for security with SSH keys. It's worth perusing the authorized_keys section of the sshd manual page. Managing SSH keys can become laborious if you're getting into the realms of granular user access, and although we could do it with either of my next two favourites, I prefer to use this module because it enables easy management through variables.
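A minimal sketch of the module in action, with a hypothetical user name and key file path:

```yaml
- name: Ensure alice's public key is authorized (hypothetical user and path)
  authorized_key:
    user: alice
    state: present
    key: "{{ lookup('file', 'files/alice.pub') }}"
```

Driving the key parameter from a per-user variable is what makes granular access management manageable at scale.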

    4. file

    Besides the obvious function of placing a file somewhere, the file module also sets ownership and permissions. I'd say that's a lot of bang for your buck with one module. I'd proffer a substantial portion of security relates to setting permissions too, so the file module plays nicely with authorized_keys .
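A sketch of a typical task, with hypothetical path and ownership values:

```yaml
- name: Ensure the application log directory exists with restricted permissions (hypothetical path)
  file:
    path: /var/log/myapp
    state: directory
    owner: appuser
    group: appgroup
    mode: "0750"
```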

    3. template

    There are so many ways to manipulate the contents of files, and I see lots of folk use lineinfile . I've used it myself for small tasks. However, the template module is so much clearer because you maintain the entire file for context. My preference is to write Ansible content in such a way that anyone can understand it easily -- which to me means not making it hard to understand what is happening. Use of template means being able to see the entire file you're putting into place, complete with the variables you are using to change pieces.

    2. uri

    Many modules in the current distribution leverage Ansible as an orchestrator. They talk to another service, rather than doing something specific like putting a file into place. Usually, that talking is over HTTP too. In the days before many of these modules existed, you could program an API directly using the uri module. It's a powerful access tool, enabling you to do a lot. I wouldn't be without it in my fictitious Ansible shed.
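A sketch of programming an API directly with uri, against a hypothetical REST endpoint (the URL, body, and expected status are assumptions of this example):

```yaml
- name: Create a record via a hypothetical REST API
  uri:
    url: https://api.example.com/v1/records
    method: POST
    body_format: json
    body:
      name: demo
    status_code: 201
  register: api_response
```

Registering the response lets later tasks act on the returned JSON, which is how uri-based plays chain API calls together.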

    1. shell

The joker card in our pack. The Swiss Army Knife. If you're absolutely stuck for how to control something else, use shell . Some will argue we're now talking about making Ansible a Bash script -- but, I would say it's still better because with the use of the name parameter in your plays and roles, you document every step. To me, that's as big a bonus as anything. Back in the days when I was still consulting, I once helped a database administrator (DBA) migrate to Ansible. The DBA wasn't one for change and pushed back at changing working methods. So, to ease into the Ansible way, we called some existing DB management scripts from Ansible using the shell module, with an informative name statement to accompany each task.
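For example, wrapping a legacy script (the path and success marker here are hypothetical) so the play still documents itself:

```yaml
- name: Run the legacy nightly database maintenance script (hypothetical path)
  shell: /opt/dba/scripts/nightly_maint.sh
  args:
    executable: /bin/bash
  register: maint_result
  changed_when: "'OK' in maint_result.stdout"   # assumed success marker in the script's output
```

The name line is the documentation; changed_when keeps the play's reporting honest even though the work happens in an opaque script.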

You can achieve a lot with these five modules. Yes, modules designed to do a specific task will make your life even easier. But with a smidgen of engineering simplicity, you can achieve a lot with very little. Ansible developer Brian Coca is a master at it, and his tips and tricks talk is always worth a watch.

    [Nov 25, 2020] 10 Ansible modules for Linux system automation by Ricardo Gerardi

    Nov 25, 2020 | opensource.com

10 Ansible modules for Linux system automation

These handy modules save time and hassle by automating many of your daily tasks, and they're easy to implement with a few commands.

26 Oct 2020 | Ricardo Gerardi (Red Hat)


    Ansible is a complete automation solution for your IT environment. You can use Ansible to automate Linux and Windows server configuration, orchestrate service provisioning, deploy cloud environments, and even configure your network devices.

    Ansible modules abstract actions on your system so you don't need to worry about implementation details. You simply describe the desired state, and Ansible ensures the target system matches it.

    This module availability is one of Ansible's main benefits, and it is often referred to as Ansible having "batteries included." Indeed, you can find modules for a great number of tasks, and while this is great, I frequently hear from beginners that they don't know where to start.

    Although your choice of modules will depend exclusively on your requirements and what you're trying to automate with Ansible, here are the top ten modules you need to get started with Ansible for Linux system automation.

    1. copy

    The copy module allows you to copy a file from the Ansible control node to the target hosts. In addition to copying the file, it allows you to set ownership, permissions, and SELinux labels to the destination file. Here's an example of using the copy module to copy a "message of the day" configuration file to the target hosts:

- name: Ensure MOTD file is in place
  copy:
    src: files/motd
    dest: /etc/motd
    owner: root
    group: root
    mode: 0644

    For less complex content, you can copy the content directly to the destination file without having a local file, like this:

- name: Ensure MOTD file is in place
  copy:
    content: "Welcome to this system."
    dest: /etc/motd
    owner: root
    group: root
    mode: 0644

This module works idempotently, which means it will only copy the file if the same file is not already in place with the same content and permissions.

    The copy module is a great option to copy a small number of files with static content. If you need to copy a large number of files, take a look at the synchronize module. To copy files with dynamic content, take a look at the template module next.

    2. template

    The template module works similarly to the copy module, but it processes content dynamically using the Jinja2 templating language before copying it to the target hosts.

    For example, define a "message of the day" template that displays the target system name, like this:

$ vi templates/motd.j2
Welcome to {{ inventory_hostname }}.

    Then, instantiate this template using the template module, like this:

- name: Ensure MOTD file is in place
  template:
    src: templates/motd.j2
    dest: /etc/motd
    owner: root
    group: root
    mode: 0644

    Before copying the file, Ansible processes the template and interpolates the variable, replacing it with the target host system name. For example, if the target system name is rh8-vm03 , the result file is:

    Welcome to rh8-vm03.

While the copy module can also interpolate variables when using the content parameter, the template module allows additional flexibility by creating template files, which enable you to define more complex content, including for loops, if conditions, and more. For a complete reference, check the Jinja2 documentation.
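For instance, a template that branches on a fact and loops over a variable might look like this (the admins list is a hypothetical variable of this example):

```yaml
{# templates/motd.j2 -- 'admins' is an assumed inventory variable #}
Welcome to {{ inventory_hostname }}.
{% if ansible_facts['os_family'] == 'RedHat' %}
This is a Red Hat-family system.
{% endif %}
Administrators:
{% for admin in admins %}
  - {{ admin }}
{% endfor %}
```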

    This module is also idempotent, and it will not copy the file if the content on the target system already matches the template's content.

    3. user

    The user module allows you to create and manage Linux users in your target system. This module has many different parameters, but in its most basic form, you can use it to create a new user.

    For example, to create the user ricardo with UID 2001, part of the groups users and wheel , and password mypassword , apply the user module with these parameters:

- name: Ensure user ricardo exists
  user:
    name: ricardo
    group: users
    groups: wheel
    uid: 2001
    password: "{{ 'mypassword' | password_hash('sha512') }}"
    state: present

    Notice that this module tries to be idempotent, but it cannot guarantee that for all its options. For instance, if you execute the previous module example again, it will reset the password to the defined value, changing the user in the system for every execution. To make this example idempotent, use the parameter update_password: on_create , ensuring Ansible only sets the password when creating the user and not on subsequent runs.
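The idempotent variant described above is the same task with one extra parameter:

```yaml
- name: Ensure user ricardo exists (password set only on creation)
  user:
    name: ricardo
    group: users
    groups: wheel
    uid: 2001
    password: "{{ 'mypassword' | password_hash('sha512') }}"
    update_password: on_create
    state: present
```

With update_password: on_create, repeated runs report no change for an existing user, even if the password in the play differs from the one currently set.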

    You can also use this module to delete a user by setting the parameter state: absent .

    The user module has many options for you to manage multiple user aspects. Make sure you take a look at the module documentation for more information.

    4. package

    The package module allows you to install, update, or remove software packages from your target system using the operating system standard package manager.

    For example, to install the Apache web server on a Red Hat Linux machine, apply the module like this:

- name: Ensure Apache package is installed
  package:
    name: httpd
    state: present

This module is distribution agnostic, and it works by using the underlying package manager, such as yum/dnf for Red Hat-based distributions and apt for Debian. Because of that, it only does basic tasks like installing and removing packages. If you need more control over the package manager options, use the specific module for the target distribution.

    Also, keep in mind that, even though the module itself works on different distributions, the package name for each can be different. For instance, in Red Hat-based distribution, the Apache web server package name is httpd , while in Debian, it is apache2 . Ensure your playbooks deal with that.
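One way to deal with the name difference is a small per-family lookup; the apache_pkg dictionary here is an assumption of this example, not part of the module:

```yaml
- name: Ensure the Apache package is installed regardless of distribution family
  vars:
    apache_pkg:            # hypothetical mapping from os_family to package name
      RedHat: httpd
      Debian: apache2
  package:
    name: "{{ apache_pkg[ansible_facts['os_family']] }}"
    state: present
```

This keeps a single task working across both families instead of duplicating it behind when conditions.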

    This module is idempotent, and it will not act if the current system state matches the desired state.

    5. service

    Use the service module to manage the target system services using the required init system; for example, systemd .

    In its most basic form, all you have to do is provide the service name and the desired state. For instance, to start the sshd service, use the module like this:

    - name: Ensure SSHD is started
      service:
        name: sshd
        state: started

    You can also ensure the service starts automatically when the target system boots up by providing the parameter enabled: yes .
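    Combining both parameters, a short sketch:

```yaml
- name: Ensure SSHD is started now and enabled at boot
  service:
    name: sshd
    state: started
    enabled: yes
```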

    As with the package module, the service module is flexible and works across different distributions. If you need fine-tuning over the specific target init system, use the corresponding module; for example, the module systemd .

    Similar to the other modules you've seen so far, the service module is also idempotent.

    6. firewalld

    Use the firewalld module to control the system firewall with the firewalld daemon on systems that support it, such as Red Hat-based distributions.

    For example, to open the HTTP service on port 80, use it like this:

    - name: Ensure port 80 (http) is open
      firewalld:
        service: http
        state: enabled
        permanent: yes
        immediate: yes

    You can also specify custom ports instead of service names with the port parameter. In this case, make sure to specify the protocol as well. For example, to open TCP port 3000, use this:

    - name: Ensure port 3000/TCP is open
      firewalld:
        port: 3000/tcp
        state: enabled
        permanent: yes
        immediate: yes

    You can also use this module to control other firewalld aspects like zones or complex rules. Make sure to check the module's documentation for a comprehensive list of options.

    7. file

    The file module allows you to control the state of files and directories -- setting permissions, ownership, and SELinux labels.

    For instance, use the file module to create a directory /app owned by the user ricardo, with read, write, and execute permissions for the owner and the group users:

    - name: Ensure directory /app exists
      file:
        path: /app
        state: directory
        owner: ricardo
        group: users
        mode: '0770'

    You can also use this module to set file properties on directories recursively by using the parameter recurse: yes or delete files and directories with the parameter state: absent .
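    Two short sketches of those variants (the log file path is hypothetical):

```yaml
- name: Ensure everything under /app is owned by ricardo
  file:
    path: /app
    state: directory
    owner: ricardo
    group: users
    recurse: yes

- name: Ensure the old log file is removed
  file:
    path: /app/install.log    # hypothetical path
    state: absent
```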

    This module works with idempotency for most of its parameters, but some of them may make it change the target path every time. Check the documentation for more details.

    8. lineinfile

    The lineinfile module allows you to manage single lines on existing files. It's useful to update targeted configuration on existing files without changing the rest of the file or copying the entire configuration file.

    For example, add a new entry to your hosts file like this:

    - name: Ensure host rh8-vm03 is in the hosts file
      lineinfile:
        path: /etc/hosts
        line: 192.168.122.236 rh8-vm03
        state: present

    You can also use this module to change an existing line by applying the parameter regexp to look for an existing line to replace. For example, update the sshd_config file to prevent root login by modifying the line PermitRootLogin yes to PermitRootLogin no :

    - name: Ensure root cannot login via ssh
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^PermitRootLogin'
        line: PermitRootLogin no
        state: present

    Note: Use the service module to restart the SSHD service to enable this change.
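    Inside a playbook, the idiomatic way to express that restart is a handler, which runs only when the task reports a change; the following is a sketch, not part of the original article:

```yaml
- name: Harden SSH configuration
  hosts: all
  tasks:
    - name: Ensure root cannot login via ssh
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^PermitRootLogin'
        line: PermitRootLogin no
        state: present
      notify: restart sshd
  handlers:
    - name: restart sshd
      service:
        name: sshd
        state: restarted
```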

    This module is also idempotent, but, in case of line modification, ensure the regular expression matches both the original and updated states to avoid unnecessary changes.

    9. unarchive

    Use the unarchive module to extract the contents of archive files such as tar or zip files. By default, it copies the archive file from the control node to the target machine before extracting it. Change this behavior by providing the parameter remote_src: yes .

    For example, extract the contents of a .tar.gz file that has already been downloaded to the target host with this syntax:

    - name: Extract contents of app.tar.gz
      unarchive:
        src: /tmp/app.tar.gz
        dest: /app
        remote_src: yes

    Some archive technologies require additional packages to be available on the target system; for example, the package unzip to extract .zip files.

    Depending on the archive format used, this module may or may not work idempotently. To prevent unnecessary changes, you can use the parameter creates to specify a file or directory that this module would create when extracting the archive contents. If this file or directory already exists, the module does not extract the contents again.
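    For example, this sketch skips extraction when a file known to be inside the archive already exists on the target (the file name is an assumption):

```yaml
- name: Extract app.tar.gz only if it has not been extracted yet
  unarchive:
    src: /tmp/app.tar.gz
    dest: /app
    remote_src: yes
    creates: /app/install.sh    # hypothetical file contained in the archive
```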

    10. command

    The command module is a flexible one that allows you to execute arbitrary commands on the target system. Using this module, you can do almost anything on the target system as long as there's a command for it.

    Even though the command module is flexible and powerful, it should be used with caution. Avoid using the command module to execute a task if there's another appropriate module available for that. For example, you could create users by using the command module to execute the useradd command, but you should use the user module instead, as it abstracts many details away from you, taking care of corner cases and ensuring the configuration only changes when necessary.

    For cases where no modules are available, or to run custom scripts or programs, the command module is still a great resource. For instance, use this module to run a script that is already present in the target machine:

    - name: Run the app installer
    command: "/app/install.sh"

    By default, this module is not idempotent, as Ansible executes the command every single time. To make the command module idempotent, you can use when conditions to only execute the command if the appropriate condition exists, or the creates argument, similarly to the unarchive module example.
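    Applied to the installer example, a sketch using creates (the marker path is hypothetical):

```yaml
- name: Run the app installer only once
  command: /app/install.sh
  args:
    creates: /app/.installed    # hypothetical marker file the installer leaves behind
```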

    What's next?

    Using these modules, you can configure entire Linux systems by copying, templating, or modifying configuration files, creating users, installing packages, starting system services, updating the firewall, and more.

    If you are new to Ansible, make sure you check the documentation on how to create playbooks to combine these modules to automate your system. Some of these tasks require running with elevated privileges to work. For more details, check the privilege escalation documentation.

    As of Ansible 2.10, modules are organized in collections. Most of the modules in this list are part of the ansible.builtin collection and are available by default with Ansible, but some of them are part of other collections. For a list of collections, check the Ansible documentation.

    [Nov 22, 2020] Programmable editor as sysadmin tool

    Highly recommended!
    Oct 05, 2020 | perlmonks.org

    likbez

    A programmable text editor is a classic sysadmin tool (vi/vim, Emacs, THE, etc.).

    There are also some newer editors that use Lua as the scripting language, but none with Perl as a scripting language. See https://www.slant.co/topics/7340/~open-source-programmable-text-editors

    Here, for example, is a fragment from an old collection of hardening scripts called Titan, written for Solaris by Brad M. Powell. The example below uses ed, which is the simplest choice, but probably not the optimal one unless your primary editor is vi/Vim.

    FixHostsEquiv() {
        if [ -f /etc/hosts.equiv -a -s /etc/hosts.equiv ]; then
            t_echo 2 " /etc/hosts.equiv exists and is not empty. Saving a copy..."
            /bin/cp /etc/hosts.equiv /etc/hosts.equiv.ORIG

            if grep -s "^+$" /etc/hosts.equiv
            then
                # delete every bare "+" (trust-everyone) line in place with ed
                ed - /etc/hosts.equiv <<- !
                g/^+$/d
                w
                q
                !
            fi
        else
            t_echo 2 "        No /etc/hosts.equiv -  PASSES CHECK"
            exit 1
        fi
    }

    For Vim/Emacs users, the main benefit here is that you get to know your editor better, instead of inventing or learning "yet another tool." That is actually also an argument against Ansible and friends: unless you operate a cluster or another sizable set of servers, why try to kill a bird with a cannon? A positive return on investment probably starts only if you manage over 8 or even 16 boxes.

    Perl can also be used. But I would recommend slurping the file into an array and operating on lines as in an editor; a regex over the whole text is more difficult to write correctly than a regex for a single line, although experts have no difficulty using just them. But we seldom acquire skills we can do without :-)

    On the other hand, that gives you a chance to learn splice function ;-)

    If the files are basically identical and need only slight customization, you can use the patch utility with pdsh, but you need to learn the ropes. Like Perl, the patch utility was written by Larry Wall, and it is a very flexible tool for such tasks. First, collect the files from your servers into a central directory with pdsh/pdcp (which, I think, is a standard RPM on RHEL and other Linuxes) or another tool. Then create diffs against one server to which you have already applied the change (diff is your command language at this point), verify on another server that the diff produces the right result, apply it, and distribute the resulting files back to each server, again using pdsh/pdcp. If you have a common NFS/GPFS/Lustre filesystem for all servers, this is even simpler, as you can store both the tree and the diffs on the common filesystem.
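    The diff-and-patch core of that workflow can be sketched locally with throwaway files; the pdsh-suite copy commands are shown only as comments, since they need real hosts, and the host and file names are hypothetical:

```shell
#!/bin/sh
# Gather and push steps would use the pdsh suite's copy tools (hypothetical hosts):
#   rpdcp -w node[1-4] /etc/ntp.conf /srv/configs/   # collect, one copy per host
#   pdcp  -w node[1-4] ntp.conf /etc/ntp.conf        # distribute the fixed file

# Local demonstration of the diff/patch core with two throwaway files:
cd "$(mktemp -d)"
printf 'server old.example.com\n' > node2.conf         # copy still to be fixed
printf 'server pool.ntp.org\n'    > reference.conf     # already-fixed reference
diff -u node2.conf reference.conf > ntp.patch || true  # diff exits 1 when files differ
patch -s node2.conf < ntp.patch                        # apply the same change here
cmp -s node2.conf reference.conf && echo "node2.conf now matches reference"
```

At scale, the same ntp.patch is applied to each collected copy before pushing the files back out.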

    The same central repository of config files can be used with vi and other approaches, creating a "poor man's Ansible" for you.

    [Nov 22, 2020] Which programming languages are useful for sysadmins, by Jonathan Roemer

    I am surprised that Perl is No. 3. It should be No. 1, as it is definitely superior to both shell and Python for most sysadmin scripts, and it has far more in common with bash (which remains the major language) than Python does.
    It looks like Python, as the language taught at universities, dominates because the number of weak sysadmins, who merely mention it but do not actually use it, exceeds the number of strong sysadmins (who have really written at least one complex sysadmin script) by several orders of magnitude.
    Jul 24, 2020 | www.redhat.com
    What's your favorite programming/scripting language for sysadmin work?

    Life as a systems engineer is a process of continuous improvement. In the past few years, as software-defined-everything has started to overhaul how we work in the same way virtualization did, knowing how to write and debug software has become a critical skill for systems engineers. Whether you are automating a small, repetitive, manual task, writing daily reporting tools, or debugging a production outage, it is vital to choose the right tool for the job. Below are a few programming languages that I think all systems engineers will find useful, and also some guidance for picking your next language to learn.

    Bash

    The old standby, Bash (and, to a certain extent, POSIX sh) is the go-to for many systems engineers. The quick access to system primitives makes it ideal for ad-hoc data transformations. Slap together curl and jq with some conditionals, and you've got everything from a basic health check to an automated daily reporting tool. However, once you get a few levels of iteration deep, or you're making multiple calls to jq, you probably want to pull out a more fully-featured programming language.
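    The curl-and-jq pattern needs a live endpoint, but the same "glue primitives together with conditionals" style can be sketched self-contained with df and awk (the threshold is arbitrary):

```shell
#!/bin/sh
# Minimal ad-hoc report in the spirit described above: iterate, test, summarize.
disk_report() {
    threshold=$1
    # df -P guarantees one line per filesystem; awk picks the use% and mount point
    df -P | awk 'NR > 1 {print $5, $6}' | while read -r pct mount; do
        pct=${pct%\%}                              # strip the trailing "%"
        if [ "$pct" -ge "$threshold" ] 2>/dev/null; then
            echo "WARN: $mount at ${pct}%"
        else
            echo "ok: $mount at ${pct}%"
        fi
    done
}

disk_report 90
```

Once a report like this grows past a screenful of shell, that is usually the point to move it to a fuller language, as the next section suggests.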

    Python

    Python's easy onboarding, wide range of libraries, and large community make it ideal for more demanding sysadmin tasks. Daily reports might start as a few hundred lines of Bash that are run first thing in the morning. Once this gets large enough, however, it makes sense to move it to Python. A quick import json gives you simple JSON object interaction, and import jinja2 lets you quickly template out a daily HTML-formatted email.

    The languages your tools are built in

    One of the powers of open source is, of course, access to the source! However, it is hard to realize this value if you don't have an understanding of the languages these tools are built in. An understanding of Go makes digging into the Datadog or Kubernetes codebases much easier. Being familiar with the development and debugging tools for C and Perl allows you to quickly dig down into aberrant behavior.

    The new hotness

    Even if you don't have Go or Rust in your environment today, there's a good chance you'll start seeing these languages more often. Maybe your application developers are migrating over to Elixir. Keeping up with the evolution of our industry can frequently feel like a treadmill, but this can be mitigated somewhat by getting ahead of changes inside of your organization. Keep an ear to the ground and start learning languages before you need them, so you're always prepared.

    [ Download now: A sysadmin's guide to Bash scripting . ]

    [Nov 22, 2020] Read a file line by line

    Jul 07, 2020 | www.redhat.com

    Assume I have a file with a lot of IP addresses and want to operate on those IP addresses. For example, I want to run dig to retrieve reverse-DNS information for the IP addresses listed in the file. I also want to skip IP addresses that are commented out (lines starting with #).

    I'll use fileA as an example. Its contents are:

    10.10.12.13  some ip in dc1
    10.10.12.14  another ip in dc2
    #10.10.12.15 not used IP
    10.10.12.16  another IP
    

    I could copy and paste each IP address, and then run dig manually:

    $> dig +short -x 10.10.12.13
    

    Or I could do this:

    $> while read -r ip _; do [[ $ip == \#* ]] && continue; dig +short -x "$ip"; done < fileA
    

    What if I want to swap the columns in fileA? For example, I want to put IP addresses in the right-most column so that fileA looks like this:

    some ip in dc1 10.10.12.13
    another ip in dc2 10.10.12.14
    not used IP #10.10.12.15
    another IP 10.10.12.16
    

    I run:

    $> while  read -r ip rest; do printf '%s %s\n' "$rest" "$ip"; done < fileA
    

    [Nov 22, 2020] Save terminal output to a file under Linux or Unix bash

    Apr 19, 2020 | www.cyberciti.biz

    [Nov 22, 2020] Top 7 Linux File Compression and Archive Tools

    Notable quotes:
    "... It's currently support 188 file extensions. ..."
    Nov 22, 2020 | www.2daygeek.com

    6) How to Use the zstd Command to Compress and Decompress File on Linux

    The zstd command stands for Zstandard. It is a real-time lossless data compression algorithm that provides high compression ratios.

    It was created by Yann Collet at Facebook. It offers a wide range of options for compression and decompression.

    It also provides a special mode for small data, known as dictionary compression.

    To compress the file using the zstd command.

    # zstd [Files To Be Compressed] -o [FileName.zst]
    

    To decompress the file using the zstd command.

    # zstd -d [FileName.zst]
    

    To decompress the file using the unzstd command.

    # unzstd [FileName.zst]
    
    7) How to Use the PeaZip Command to Compress and Decompress Files on Linux

    PeaZip is a free and open-source file archive utility, based on Open Source technologies of 7-Zip, p7zip, FreeArc, PAQ, and PEA projects.

    It's a cross-platform, full-featured, and user-friendly alternative to the WinRAR and WinZip archive managers.

    It supports its native PEA archive format (featuring volume spanning, compression and authenticated encryption).

    It was developed for Windows, and support for Unix/Linux was added later. It currently supports 188 file extensions.

    [Nov 18, 2020] Why the lone wolf mentality is a sysadmin mistake by Scott McBrien

    Jul 10, 2019 | www.redhat.com

    If you have worked in system administration for a while, you've probably run into a system administrator who doesn't write anything down and keeps their work a closely-guarded secret. When I've run into administrators like this, I often ask why they do this, and the response is usually a joking, "Job security." Which may not actually be all that joking.

    Don't be that person. I've worked in several shops, and I have yet to see someone "work themselves out of a job." What I have seen, however, is someone that can't take a week off without being called by the team repeatedly. Or, after this person left, I have seen a team struggle to detangle the mystery of what that person was doing, or how they were managing systems under their control.

    [Nov 04, 2020] Utility dirhist -- History of changes in one or several directories was posted on GitHub

    Designed to run from cron. Uses a different, simpler approach than etckeeper: it does not use Git or any other version control system, as they proved to be of questionable utility unless multiple sysadmins work on the server (and Git has the additional problem of assigning incorrect file attributes when system files are restored from the repository).

    If it detects a changed file, it creates a new tar file for each analyzed directory, for example /etc, /root, and /boot.

    It detects all "critical" changed files, diffs them with the previous version, and produces a report.

    All information is stored by default in /var/Dirhist_base. Directories to watch and files that are considered important are configurable via two config files, dirhist_ignore.lst and dirhist_watch.lst, which by default are located at the root of the /var/Dirhist_base tree (as /var/Dirhist_base/dirhist_ignore.lst and /var/Dirhist_base/dirhist_watch.lst).

    You can specify any number of watched directories, and within each directory any number of watched files and subdirectories. The format used is similar to YAML dictionaries, or Windows 3.x .ini files. If any of the "watched" files or directories changes, the utility can email a report to selected addresses to alert them about those changes. This is useful when several sysadmins manage the same server. It can also be used to check whether changes made were documented in Git or another version management system (this process can be automated using the utility admpolice).

    [Nov 02, 2020] The Pros and Cons of Ansible - UpGuard

    Nov 02, 2020 | www.upguard.com

    Ansible has no notion of state. Since it doesn't keep track of dependencies, the tool simply executes a sequential series of tasks, stopping when it finishes, fails, or encounters an error. For some, this simplistic mode of automation is desirable; however, many prefer their automation tool to maintain an extensive catalog for ordering (à la Puppet), allowing them to reach a defined state regardless of any variance in environmental conditions.

    [Nov 02, 2020] YAML for beginners - Enable Sysadmin

    Nov 02, 2020 | www.redhat.com

    YAML stands for "YAML Ain't Markup Language," and as configuration formats go, it's easy on the eyes. It has an intuitive visual structure, and its logic is pretty simple: indented bullet points inherit properties of parent bullet points.

    But this apparent simplicity can be deceptive.


    It's easy (and misleading) to think of YAML as just a list of related values, no more complex than a shopping list. There is a heading and some items beneath it. The items below the heading relate directly to it, right? Well, you can test this theory by writing a little bit of valid YAML.

    Open a text editor and enter this text, retaining the dashes at the top of the file and the leading spaces for the last two items:

    ---
    Store: Bakery
      Sourdough loaf
      Bagels
    

    Save the file as shop.yaml (the commands below use that name).

    If you don't already have yamllint installed, install it:

    $ sudo dnf install -y yamllint
    

    A linter is an application that verifies the syntax of a file. The yamllint command is a great way to ensure your YAML is valid before you hand it over to whatever application you're writing YAML for (Ansible, for instance).

    Use yamllint to validate your YAML file:

    $ yamllint --strict shop.yaml || echo "Fail"
    $
    

    The file passes validation. But when converted to JSON with a simple converter script, the data structure of this simple YAML becomes clearer:

    $ ~/bin/json2yaml.py shop.yaml
    {"Store": "Bakery Sourdough loaf Bagels"}
    

    Parsed without the visual context of line breaks and indentation, the actual scope of your data looks a lot different. The data is mostly flat, almost devoid of hierarchy. There's no indication that the sourdough loaf and bagels are children of the name of the store.

    [ Readers also liked: Ansible: IT automation for everybody ]

    How data is stored in YAML

    YAML can contain different kinds of data blocks:

    - Sequence: values listed in a specific order. A sequence starts with a dash and a space.
    - Mapping: key and value pairs. Each key must be unique, and the order doesn't matter.

    There's a third type called scalar, which is arbitrary data (encoded in Unicode) such as strings, integers, dates, and so on. In practice, these are the words and numbers you type when building mapping and sequence blocks, so you won't think about these any more than you ponder the words of your native tongue.
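    A tiny sketch showing all three kinds in one document (the keys and items are illustrative):

```yaml
---
# Mapping: the key "bakery" maps to a sequence of scalar strings
bakery:
  - Sourdough loaf
  - Bagels
# Mapping: the key "open" maps to a single scalar value
open: true
```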

    When constructing YAML, it might help to think of YAML as either a sequence of sequences or a map of maps, but not both.

    YAML mapping blocks

    When you start a YAML file with a mapping statement, YAML expects a series of mappings. A mapping block in YAML doesn't close until it's resolved, and a new mapping block is explicitly created. A new block can only be created either by increasing the indentation level (in which case, the new block exists inside the previous block) or by resolving the previous mapping and starting an adjacent mapping block.

    The reason the original YAML example in this article fails to produce data with a hierarchy is that it's actually only one data block: the key Store has a single value of Bakery Sourdough loaf Bagels . YAML ignores the whitespace because no new mapping block has been started.

    Is it possible to fix the example YAML by prepending each sequence item with a dash and space?

    ---
    Store: Bakery
      - Sourdough loaf
      - Bagels
    

    Again, this is valid YAML, but it's still pretty flat:

    $ ~/bin/json2yaml.py shop.yaml
    {"Store": "Bakery - Sourdough loaf - Bagels"}
    

    The problem is that this YAML file opens a mapping block and never closes it. To close the Store block and open a new one, you must start a new mapping. The value of the mapping can be a sequence, but you need a key first.

    Here's the correct (and expanded) resolution:

    ---
    Store:
      Bakery:
        - 'Sourdough loaf'
        - 'Bagels'
      Cheesemonger:
        - 'Blue cheese'
        - 'Feta'
    

    In JSON, this resolves to:

    {"Store": {"Bakery": ["Sourdough loaf", "Bagels"],
    "Cheesemonger": ["Blue cheese", "Feta"]}}
    

    As you can see, this YAML directive contains one mapping ( Store ) to two child values ( Bakery and Cheesemonger ), each of which is mapped to a child sequence.

    YAML sequence blocks

    The same principles hold true should you start a YAML directive as a sequence. For instance, this YAML directive is valid:

    Flour
    Water
    Salt
    

    Each item is distinct when viewed as JSON:

    ["Flour", "Water", "Salt"]
    

    But this YAML file is not valid because it attempts to start a mapping block at an adjacent level to a sequence block :

    ---
    - Flour
    - Water
    - Salt
    Sugar: caster
    

    It can be repaired by moving the mapping block into the sequence:

    ---
    - Flour
    - Water
    - Salt
    - Sugar: caster
    

    You can, as always, embed a sequence into your mapping item:

    ---
    - Flour
    - Water
    - Salt
    - Sugar:
        - caster
        - granulated
        - icing
    

    Viewed through the lens of explicit JSON scoping, that YAML snippet reads like this:

    ["Flour", "Salt", "Water", {"Sugar": ["caster", "granulated", "icing"]}]
    

    [ A free guide from Red Hat: 5 steps to automate your business . ]

    YAML syntax

    If you want to comfortably write YAML, it's vital to be aware of its data structure. As you can tell, there's not much you have to remember. You know about mapping and sequence blocks, so you know everything you need to work with. All that's left is to remember how they do and do not interact with one another. Happy coding!

    [Nov 02, 2020] Deconstructing an Ansible playbook by Peter Gervase

    Oct 21, 2020 | www.redhat.com

    This article describes the different parts of an Ansible playbook, starting with a very broad overview of what Ansible is and how you can use it. Ansible is a way to use easy-to-read YAML syntax to write playbooks that can automate tasks for you. These playbooks can range from very simple to very complex, and one playbook can even be embedded in another.

    Installing httpd with a playbook

    Now that you have that base knowledge let's look at a basic playbook that will install the httpd package. I have an inventory file with two hosts specified, and I placed them in the web group:

    [root@ansible test]# cat inventory
    [web]
    ansibleclient.usersys.redhat.com
    ansibleclient2.usersys.redhat.com
    

    Let's look at the actual playbook to see what it contains:

    [root@ansible test]# cat httpd.yml
    ---
    - name: this playbook will install httpd
      hosts: web
      tasks:
        - name: this is the task to install httpd
          yum:
            name: httpd
            state: latest
    

    Breaking this down, you see that the first line in the playbook is --- . This lets you know that it is the beginning of the playbook. Next, I gave a name for the play. This is just a simple playbook with only one play, but a more complex playbook can contain multiple plays. Next, I specify the hosts that I want to target. In this case, I am selecting the web group, but I could have specified either ansibleclient.usersys.redhat.com or ansibleclient2.usersys.redhat.com instead if I didn't want to target both systems. The next line tells Ansible that you're going to get into the tasks that do the actual work. In this case, my playbook has only one task, but you can have multiple tasks if you want. Here I specify that I'm going to install the httpd package. The next line says that I'm going to use the yum module. I then tell it the name of the package, httpd , and that I want the latest version to be installed.

    [ Readers also liked: Getting started with Ansible ]

    When I run the httpd.yml playbook twice, I get this on the terminal:

    [root@ansible test]# ansible-playbook httpd.yml
    
    PLAY [this playbook will install httpd] ************************************************************************************************************
    
    TASK [Gathering Facts] *****************************************************************************************************************************
    ok: [ansibleclient.usersys.redhat.com]
    ok: [ansibleclient2.usersys.redhat.com]
    
    TASK [this is the task to install httpd] ***********************************************************************************************************
    changed: [ansibleclient2.usersys.redhat.com]
    changed: [ansibleclient.usersys.redhat.com]
    
    PLAY RECAP *****************************************************************************************************************************************
    ansibleclient.usersys.redhat.com : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
    ansibleclient2.usersys.redhat.com : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
    
    [root@ansible test]# ansible-playbook httpd.yml
    
    PLAY [this playbook will install httpd] ************************************************************************************************************
    
    TASK [Gathering Facts] *****************************************************************************************************************************
    ok: [ansibleclient.usersys.redhat.com]
    ok: [ansibleclient2.usersys.redhat.com]
    
    TASK [this is the task to install httpd] ***********************************************************************************************************
    ok: [ansibleclient.usersys.redhat.com]
    ok: [ansibleclient2.usersys.redhat.com]
    
    PLAY RECAP *****************************************************************************************************************************************
    ansibleclient.usersys.redhat.com : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
    ansibleclient2.usersys.redhat.com : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
    
    [root@ansible test]#
    

    Note that in both cases, I received an ok=2 , but in the second run of the playbook, nothing was changed. The latest version of httpd was already installed at that point.

    To get information about the various modules you can use in a playbook, you can use the ansible-doc command. For example:

    [root@ansible test]# ansible-doc yum
    > YUM    (/usr/lib/python3.6/site-packages/ansible/modules/packaging/os/yum.py)
    Installs, upgrade, downgrades, removes, and lists packages and groups with the `yum' package manager. This module only works on Python 2. If you require Python 3 support, see the [dnf] module.
    
      * This module is maintained by The Ansible Core Team
      * note: This module has a corresponding action plugin.
    < output truncated >
    

    It's nice to have a playbook that installs httpd , but to make it more flexible, you can use variables instead of hardcoding the package as httpd . To do that, you could use a playbook like this one:

    [root@ansible test]# cat httpd.yml
    ---
    - name: this playbook will install {{ myrpm }}
      hosts: web
      vars:
        myrpm: httpd
      tasks:
        - name: this is the task to install {{ myrpm }}
          yum:
            name: "{{ myrpm }}"
            state: latest
    

    Here you can see that I've added a section called "vars" and I declared a variable myrpm with the value of httpd . I then can use that myrpm variable in the playbook and adjust it to whatever I want to install. Also, because I've specified the RPM to install by using a variable, I can override what I have written in the playbook by specifying the variable on the command line by using -e :

    [root@ansible test]# cat httpd.yml
    ---
    - name: this playbook will install {{ myrpm }}
      hosts: web
      vars:
        myrpm: httpd
      tasks:
        - name: this is the task to install {{ myrpm }}
          yum:
            name: "{{ myrpm }}"
            state: latest
    [root@ansible test]# ansible-playbook httpd.yml -e "myrpm=at"
    
    PLAY [this playbook will install at] ***************************************************************************************************************
    
    TASK [Gathering Facts] *****************************************************************************************************************************
    ok: [ansibleclient.usersys.redhat.com]
    ok: [ansibleclient2.usersys.redhat.com]
    
    TASK [this is the task to install at] **************************************************************************************************************
    changed: [ansibleclient2.usersys.redhat.com]
    changed: [ansibleclient.usersys.redhat.com]
    
    PLAY RECAP *****************************************************************************************************************************************
    ansibleclient.usersys.redhat.com : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
    ansibleclient2.usersys.redhat.com : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
    
    [root@ansible test]#
    

    Another way to make the tasks more dynamic is to use loops. In this snippet, you can see that I have declared rpms as a list containing mailx and postfix. To use them, I use loop in my task:

     vars:
        rpms:
          - mailx
          - postfix
    
      tasks:
        - name: this will install the rpms
          yum:
            name: "{{ item }}"
            state: installed
          loop: "{{ rpms }}"
    
    

    You might have noticed that when these plays run, facts about the hosts are gathered:

    TASK [Gathering Facts] *****************************************************************************************************************************
    ok: [ansibleclient.usersys.redhat.com]
    ok: [ansibleclient2.usersys.redhat.com]
    


    These facts can be used as variables when you run the play. For example, you could have a motd.yml file that sets content like:

    "This is the system {{ ansible_facts['fqdn'] }}.
    This is a {{ ansible_facts['distribution'] }} version {{ ansible_facts['distribution_version'] }} system."
    

    For any system where you run that playbook, the correct fully-qualified domain name (FQDN), operating system distribution, and distribution version would get set, even without you manually defining those variables.
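    Putting those facts to use takes only a minimal play. The sketch below is one way to do it (the hosts group and the use of the copy module are assumptions for illustration, not taken from the article): it writes the fact-based text into /etc/motd, relying on the automatic fact gathering at the start of the play, so no vars section is needed.

```yaml
---
- name: set /etc/motd from gathered facts
  hosts: web
  tasks:
    - name: write motd using facts
      copy:
        dest: /etc/motd
        content: |
          This is the system {{ ansible_facts['fqdn'] }}.
          This is a {{ ansible_facts['distribution'] }} version {{ ansible_facts['distribution_version'] }} system.
```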


    Wrap up

    This was a quick introduction to how Ansible playbooks look, what the different parts do, and how you can get more information about the modules. Further information is available from the Ansible documentation.

    Peter Gervase

    I currently work as a Solutions Architect at Red Hat. I have been here for going on 14 years, moving around a bit over the years, working in front-line support and consulting before my current role. In my free time, I enjoy spending time with my family, exercising, and woodworking.

    [Nov 02, 2020] Utility usersync, which synchronizes (one way) users and groups within a given UID interval (min and max) with a directory or a selected remote server, was posted

    Useful for provisioning multiple servers that use traditional authentication rather than LDAP, and for synchronizing user accounts between multiple versions of Linux. It can also be used for "normalizing" servers after the acquisition of another company, changing UID and GID on the fly on multiple servers, etc. It can also be used for provisioning computational nodes on small and medium HPC clusters that use traditional authentication instead of LDAP.

    [Oct 30, 2020] Utility msync -- an rsync wrapper that allows using multiple connections for transferring compressed archives, or sets of them organized in a tree -- was posted

    Useful for transferring sets of trees with huge files over WAN links. Huge archives can be split into chunks of a certain size, for example 5TB, and organized into a directory. Files are sorted into N piles, where N is specified as a parameter, and each pile is transmitted via its own TCP connection. This is useful for transmission over WAN lines with high latency. On WAN links with 100ms latency, I achieved results comparable with Aspera using 8 channels of transmission.

    [Oct 27, 2020] Utility emergency_shutdown was posted on GitHub

    Useful for large RAID5 arrays without a spare drive, or for other RAID configurations with limited redundancy and critical data stored. Currently works with Dell DRAC only, which should be configured for passwordless ssh login from the server that runs this utility. It detects that a disk in the RAID5 array has failed, informs the most recent users (by default, those who logged in during the last two months), and then shuts down the server unless cancelled during the "waiting" period (default is five days).

    [Oct 19, 2020] The utility dormant_user_stats was posted on GitHub

    The utility lists all users who have been inactive for the specified number of days (default is 365) and calculates inode usage too. It can execute simple commands for each dormant user (lock or delete the account) and generates a text file with the list (one user per line) that can be used for more complex operations.

    Use

    dormant_user_stats -h

    for more information

    [Oct 07, 2020] Perl module for automating the modification of config files?

    Oct 07, 2020 | perlmonks.org

    I want to make it easier to modify configuration files. For example, let's say I want to edit a postfix config file according to the directions here.

    So I started writing simple code in a file that could be interpreted by perl to make the changes for me with one command per line:

    uc mail_owner          # "uc" is the command for "uncomment"
    uc hostname
    cv hostname {{fqdn}}   # "cv" is the command for "change value"; {{fqdn}} is replaced with the appropriate value
    ...

    You get the idea. I started writing some code to interpret my config file modification commands and then realized someone had to have tackled this problem before. I did a search on metacpan but came up empty. Anyone familiar with this problem space and can help point me in the right direction?
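    For what it's worth, an interpreter of this kind can be sketched in a few lines of shell around sed (the uc/cv names follow the snippet above; the postfix-style "key = value" syntax, file names, and sed patterns are assumptions for illustration only):

```shell
#!/bin/sh
# Minimal sketch of an interpreter for a tiny config-editing language:
#   uc KEY        - uncomment the line that sets KEY
#   cv KEY VALUE  - change the value assigned to KEY
# Works on "key = value" files in the style of postfix main.cf.
# Caveat: values containing "/" would need escaping -- this is only a sketch.

run_cmds() {   # usage: run_cmds CONFIG-FILE   (commands read from stdin)
    cfg=$1
    while read -r cmd key value; do
        case $cmd in
            uc) sed -i "s/^#[[:space:]]*\($key\)/\1/" "$cfg" ;;
            cv) sed -i "s/^\($key[[:space:]]*=[[:space:]]*\).*/\1$value/" "$cfg" ;;
            ''|'#'*) ;;                          # skip blank lines and comments
            *)  echo "unknown command: $cmd" >&2; return 2 ;;
        esac
    done
}

# Demo on a throwaway copy of a postfix-style file
cfg=$(mktemp)
printf '#mail_owner = postfix\nmyhostname = old.example.com\n' > "$cfg"
run_cmds "$cfg" <<'EOF'
uc mail_owner
cv myhostname mail.example.com
EOF
cat "$cfg"
# prints:
#   mail_owner = postfix
#   myhostname = mail.example.com
```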


    by likbez on Oct 05, 2020 at 03:16 UTC Reputation: 2

    There are also some newer editors that use Lua as the scripting language, but none with Perl as a scripting language. See https://www.slant.co/topics/7340/~open-source-programmable-text-editors

    Here, for example, is a fragment from an old collection of hardening scripts called Titan, written for Solaris by Brad M. Powell. The example below uses ed, which is the simplest, but probably not optimal, choice unless your primary editor is vi/Vim.

    FixHostsEquiv()
    {
	if [ -f /etc/hosts.equiv -a -s /etc/hosts.equiv ]; then
		t_echo 2 "  /etc/hosts.equiv exists and is not empty. Saving a copy..."
		/bin/cp /etc/hosts.equiv /etc/hosts.equiv.ORIG
		if grep -s "^+$" /etc/hosts.equiv
		then
			ed - /etc/hosts.equiv <<- !
			g/^+$/d
			w
			q
			!
		fi
	else
		t_echo 2 "  No /etc/hosts.equiv - PASSES CHECK"
		exit 1
	fi
    }

    For VIM/Emacs users the main benefit here is that you will know your editor better, instead of inventing/learning "yet another tool." That actually also is an argument against Ansible and friends: unless you operate a cluster or other sizable set of servers, why try to kill a bird with a cannon. Positive return on investment probably starts if you manage over 8 or even 16 boxes.

    Perl also can be used. But I would recommend slurping the file into an array and operating on lines as in an editor; a regex over the whole text is more difficult to write correctly than a regex for a single line, although experts have no difficulty using just those. But we seldom acquire skills we can do without :-)

    On the other hand, that gives you a chance to learn splice function ;-)

    If the files are basically identical and need only slight customization, you can use the patch utility with pdsh, but you need to learn the ropes. Like Perl, the patch utility was written by Larry Wall and is a very flexible tool for such tasks. First collect the files from your servers into a central directory with pdsh/pdcp (which I think is a standard RPM on RHEL and other Linuxes) or another tool. Then create a diff against one server to which you have already applied the change (diff is your command language at this point), verify on another server that this diff produces the right result, apply it, and distribute the resulting files back to each server, again using pdsh/pdcp. If you have a common NFS/GPFS/Lustre filesystem for all servers, this is even simpler, as you can store both the tree and the diffs on the common filesystem.
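    The diff-then-patch leg of that workflow can be demonstrated locally (a sketch: the sshd_config content and file names are invented for illustration, and in real use the per-server copies would travel via pdcp):

```shell
#!/bin/sh
# Sketch of the diff/patch workflow: take a config file already fixed on one
# server (the "golden" copy), diff another server's copy against it, and
# apply the resulting patch to that copy.
set -e
work=$(mktemp -d)

printf 'PermitRootLogin yes\nPort 22\n' > "$work/sshd_config.serverB"   # unfixed copy
printf 'PermitRootLogin no\nPort 22\n'  > "$work/sshd_config.golden"    # already fixed

# diff is the "command language": it records exactly what must change
# (diff exits 1 when files differ, hence the || true under set -e)
diff -u "$work/sshd_config.serverB" "$work/sshd_config.golden" > "$work/fix.diff" || true

# apply the same change to the other server's copy
patch "$work/sshd_config.serverB" "$work/fix.diff"

cat "$work/sshd_config.serverB"
```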

    The same central repository of config files can be used with vi and other approaches, creating a "poor man's Ansible" for you.

    [Oct 05, 2020] Modular Perl in Red Hat Enterprise Linux 8 - Red Hat Developer

    Notable quotes:
    "... perl-DBD-SQLite ..."
    "... perl-DBD-SQLite:1.58 ..."
    "... perl-libwww-perl ..."
    "... multi-contextual ..."
    Oct 05, 2020 | developers.redhat.com

    Modular Perl in Red Hat Enterprise Linux 8 By Petr Pisar May 16, 2019

    Red Hat Enterprise Linux 8 comes with modules as a packaging concept that allows system administrators to select the desired software version from multiple packaged versions. This article will show you how to manage Perl as a module.

    Installing from a default stream

    Let's install Perl:

    # yum --allowerasing install perl
    Last metadata expiration check: 1:37:36 ago on Tue 07 May 2019 04:18:01 PM CEST.
    Dependencies resolved.
    ==========================================================================================
     Package                       Arch    Version                Repository             Size
    ==========================================================================================
    Installing:
     perl                          x86_64  4:5.26.3-416.el8       rhel-8.0.z-appstream   72 k
    Installing dependencies:
    [ ]
    Transaction Summary
    ==========================================================================================
    Install  147 Packages
    
    Total download size: 21 M
    Installed size: 59 M
    Is this ok [y/N]: y
    [ ]
      perl-threads-shared-1.58-2.el8.x86_64                                                   
    
    Complete!
    

    Next, check which Perl you have:

    $ perl -V:version
    version='5.26.3';
    

    You have Perl version 5.26.3. This is the default version, supported for the next 10 years, and, if you are fine with it, you don't have to know anything about modules. But what if you want to try a different version?

    Discovering streams

    Let's find out what Perl modules are available using the yum module list command:

    # yum module list
    Last metadata expiration check: 1:45:10 ago on Tue 07 May 2019 04:18:01 PM CEST.
    [ ]
    Name                 Stream           Profiles     Summary
    [ ]
    parfait              0.5              common       Parfait Module
    perl                 5.24             common [d],  Practical Extraction and Report Languag
                                          minimal      e
    perl                 5.26 [d]         common [d],  Practical Extraction and Report Languag
                                          minimal      e
    perl-App-cpanminus   1.7044 [d]       common [d]   Get, unpack, build and install CPAN mod
                                                       ules
    perl-DBD-MySQL       4.046 [d]        common [d]   A MySQL interface for Perl
    perl-DBD-Pg          3.7 [d]          common [d]   A PostgreSQL interface for Perl
    perl-DBD-SQLite      1.58 [d]         common [d]   SQLite DBI driver
    perl-DBI             1.641 [d]        common [d]   A database access API for Perl
    perl-FCGI            0.78 [d]         common [d]   FastCGI Perl bindings
    perl-YAML            1.24 [d]         common [d]   Perl parser for YAML
    php                  7.2 [d]          common [d],  PHP scripting language
                                          devel, minim
                                          al
    [ ]
    

    Here you can see a Perl module is available in versions 5.24 and 5.26. Those are called streams in the modularity world, and they denote an independent variant, usually a different version, of the same software stack. The [d] flag marks a default stream. That means if you do not explicitly enable a different stream, the default one will be used. That explains why yum installed Perl 5.26.3 and not some of the 5.24 micro versions.

    Now suppose you have an old application that you are migrating from Red Hat Enterprise Linux 7, which was running in the rh-perl524 software collection environment, and you want to give it a try on Red Hat Enterprise Linux 8. Let's try Perl 5.24 on Red Hat Enterprise Linux 8.

    Enabling a Stream

    First, switch the Perl module to the 5.24 stream:

    # yum module enable perl:5.24
    Last metadata expiration check: 2:03:16 ago on Tue 07 May 2019 04:18:01 PM CEST.
    Problems in request:
    Modular dependency problems with Defaults:
    
     Problem 1: conflicting requests
      - module freeradius:3.0:8000020190425181943:75ec4169-0.x86_64 requires module(perl:5.26), but none of the providers can be installed
      - module perl:5.26:820181219174508:9edba152-0.x86_64 conflicts with module(perl:5.24) provided by perl:5.24:820190207164249:ee766497-0.x86_64
      - module perl:5.24:820190207164249:ee766497-0.x86_64 conflicts with module(perl:5.26) provided by perl:5.26:820181219174508:9edba152-0.x86_64
     Problem 2: conflicting requests
      - module freeradius:3.0:820190131191847:fbe42456-0.x86_64 requires module(perl:5.26), but none of the providers can be installed
      - module perl:5.26:820181219174508:9edba152-0.x86_64 conflicts with module(perl:5.24) provided by perl:5.24:820190207164249:ee766497-0.x86_64
      - module perl:5.24:820190207164249:ee766497-0.x86_64 conflicts with module(perl:5.26) provided by perl:5.26:820181219174508:9edba152-0.x86_64
    Dependencies resolved.
    ==========================================================================================
     Package              Arch                Version              Repository            Size
    ==========================================================================================
    Enabling module streams:
     perl                                     5.24
    
    Transaction Summary
    ==========================================================================================
    
    Is this ok [y/N]: y
    Complete!
    
    Switching module streams does not alter installed packages (see 'module enable' in dnf(8)
    for details)
    

    Here you can see a warning that the freeradius:3.0 stream is not compatible with perl:5.24 . That's because FreeRADIUS was built for Perl 5.26 only. Not all modules are compatible with all other modules.

    Next, you can see a confirmation for enabling the Perl 5.24 stream. And, finally, there is another warning about installed packages. The last warning means that the system still can have installed RPM packages from the 5.26 stream, and you need to explicitly sort it out.

    Changing modules and changing packages are two separate phases. You can fix it by synchronizing the distribution content like this:

    # yum --allowerasing distrosync
    Last metadata expiration check: 0:00:56 ago on Tue 07 May 2019 06:33:36 PM CEST.
    Modular dependency problems:
    
     Problem 1: module freeradius:3.0:8000020190425181943:75ec4169-0.x86_64 requires module(perl:5.26), but none of the providers can be installed
      - module perl:5.26:820181219174508:9edba152-0.x86_64 conflicts with module(perl:5.24) provided by perl:5.24:820190207164249:ee766497-0.x86_64
      - module perl:5.24:820190207164249:ee766497-0.x86_64 conflicts with module(perl:5.26) provided by perl:5.26:820181219174508:9edba152-0.x86_64
      - conflicting requests
     Problem 2: module freeradius:3.0:820190131191847:fbe42456-0.x86_64 requires module(perl:5.26), but none of the providers can be installed
      - module perl:5.26:820181219174508:9edba152-0.x86_64 conflicts with module(perl:5.24) provided by perl:5.24:820190207164249:ee766497-0.x86_64
      - module perl:5.24:820190207164249:ee766497-0.x86_64 conflicts with module(perl:5.26) provided by perl:5.26:820181219174508:9edba152-0.x86_64
      - conflicting requests
    Dependencies resolved.
    ==========================================================================================
     Package           Arch   Version                              Repository            Size
    ==========================================================================================
    [ ]
    Downgrading:
     perl              x86_64 4:5.24.4-403.module+el8+2770+c759b41a
                                                                   rhel-8.0.z-appstream 6.1 M
    [ ]
    Transaction Summary
    ==========================================================================================
    Upgrade    69 Packages
    Downgrade  66 Packages
    
    Total download size: 20 M
    Is this ok [y/N]: y
    [ ]
    Complete!
    

    And try the perl command again:

    $ perl -V:version
    version='5.24.4';
    

    Great! It works. We switched to a different Perl version, and the different Perl is still invoked with the perl command and is installed to a standard path ( /usr/bin/perl ). No scl enable incantation is needed, in contrast to the software collections.

    You may have noticed the repeated warning about FreeRADIUS. A future YUM update is going to clean up this unnecessary warning. Despite that, I can show you that other Perl-ish modules are compatible with any Perl stream.

    Dependent modules

    Let's say the old application mentioned before uses the DBD::SQLite Perl module. (This nomenclature is a little ambiguous: Red Hat Enterprise Linux has modules; Perl has modules. If I want to emphasize the difference, I will say the Modularity modules or the CPAN modules.) So, let's install CPAN's DBD::SQLite module. Yum can search packaged CPAN modules, so give it a try:

    # yum --allowerasing install 'perl(DBD::SQLite)'
    [ ]
    Dependencies resolved.
    ==========================================================================================
     Package          Arch    Version                             Repository             Size
    ==========================================================================================
    Installing:
     perl-DBD-SQLite  x86_64  1.58-1.module+el8+2519+e351b2a7     rhel-8.0.z-appstream  186 k
    Installing dependencies:
     perl-DBI         x86_64  1.641-2.module+el8+2701+78cee6b5    rhel-8.0.z-appstream  739 k
    Enabling module streams:
     perl-DBD-SQLite          1.58
     perl-DBI                 1.641
    
    Transaction Summary
    ==========================================================================================
    Install  2 Packages
    
    Total download size: 924 k
    Installed size: 2.3 M
    Is this ok [y/N]: y
    [ ]
    Installed:
      perl-DBD-SQLite-1.58-1.module+el8+2519+e351b2a7.x86_64
      perl-DBI-1.641-2.module+el8+2701+78cee6b5.x86_64
    
    Complete!
    

    Here you can see DBD::SQLite CPAN module was found in the perl-DBD-SQLite RPM package that's part of perl-DBD-SQLite:1.58 module, and apparently it requires some dependencies from the perl-DBI:1.641 module, too. Thus, yum asked for enabling the streams and installing the packages.

    Before playing with DBD::SQLite under Perl 5.24, take a look at the listing of the Modularity modules and compare it with what you saw the first time:

    # yum module list
    [ ]
    parfait              0.5              common       Parfait Module
    perl                 5.24 [e]         common [d],  Practical Extraction and Report Languag
                                          minimal      e
    perl                 5.26 [d]         common [d],  Practical Extraction and Report Languag
                                          minimal      e
    perl-App-cpanminus   1.7044 [d]       common [d]   Get, unpack, build and install CPAN mod
                                                       ules
    perl-DBD-MySQL       4.046 [d]        common [d]   A MySQL interface for Perl
    perl-DBD-Pg          3.7 [d]          common [d]   A PostgreSQL interface for Perl
    perl-DBD-SQLite      1.58 [d][e]      common [d]   SQLite DBI driver
    perl-DBI             1.641 [d][e]     common [d]   A database access API for Perl
    perl-FCGI            0.78 [d]         common [d]   FastCGI Perl bindings
    perl-YAML            1.24 [d]         common [d]   Perl parser for YAML
    php                  7.2 [d]          common [d],  PHP scripting language
                                          devel, minim
                                          al
    [ ]
    

    Notice that perl:5.24 is enabled ( [e] ) and thus takes precedence over perl:5.26, which would otherwise be the default one ( [d] ). Other enabled Modularity modules are perl-DBD-SQLite:1.58 and perl-DBI:1.641. Those were enabled when you installed DBD::SQLite. These two modules have no other streams.

    In general, any module can have multiple streams. At most one stream of a module can be the default one, and at most one stream of a module can be enabled. An enabled stream takes precedence over a default one. If there is neither an enabled nor a default stream, the content of the module is unavailable.

    If, for some reason, you need to disable a stream, even a default one, you do that with yum module disable MODULE:STREAM command.

    Enough theory, back to some productive work. You are ready to test the DBD::SQLite CPAN module now. Let's create a test database, a foo table inside with one textual column called bar , and let's store a row with Hello text there:

    $ perl -MDBI -e '$dbh=DBI->connect(q{dbi:SQLite:dbname=test});
        $dbh->do(q{CREATE TABLE foo (bar text)});
        $sth=$dbh->prepare(q{INSERT INTO foo(bar) VALUES(?)});
        $sth->execute(q{Hello})'
    

    Next, verify the Hello string was indeed stored by querying the database:

    $ perl -MDBI -e '$dbh=DBI->connect(q{dbi:SQLite:dbname=test}); print $dbh->selectrow_array(q{SELECT bar FROM foo}), qq{\n}'
    Hello
    

    It seems DBD::SQLite works.

    Non-modular packages may not work with non-default streams

    So far, everything is great and working. Now I will show what happens if you try to install an RPM package that has not been modularized and is thus compatible only with the default Perl, perl:5.26:

    # yum --allowerasing install 'perl(LWP)'
    [ ]
    Error: 
     Problem: package perl-libwww-perl-6.34-1.el8.noarch requires perl(:MODULE_COMPAT_5.26.2), but none of the providers can be installed
      - cannot install the best candidate for the job
      - package perl-libs-4:5.26.3-416.el8.i686 is excluded
      - package perl-libs-4:5.26.3-416.el8.x86_64 is excluded
    (try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
    

    Yum will report an error about perl-libwww-perl RPM package being incompatible. The LWP CPAN module that is packaged as perl-libwww-perl is built only for Perl 5.26, and therefore RPM dependencies cannot be satisfied. When a perl:5.24 stream is enabled, the packages from perl:5.26 stream are masked and become unavailable. However, this masking does not apply to non-modular packages, like perl-libwww-perl. There are plenty of packages that were not modularized yet. If you need some of them to be available and compatible with a non-default stream (e.g., not only with perl:5.26 but also with perl:5.24) do not hesitate to contact Red Hat support team with your request.

    Resetting a module

    Let's say you tested your old application and now you want to find out if it works with the new Perl 5.26.

    To do that, you need to switch back to the perl:5.26 stream. Unfortunately, switching from an enabled stream back to a default one, or to yet another non-default stream, is not straightforward. You'll need to perform a module reset:

    # yum module reset perl
    [ ]
    Dependencies resolved.
    ==========================================================================================
     Package              Arch                Version              Repository            Size
    ==========================================================================================
    Resetting module streams:
     perl                                     5.24                                           
    
    Transaction Summary
    ==========================================================================================
    
    Is this ok [y/N]: y
    Complete!
    

    Well, that did not hurt. Now you can synchronize the distribution again to replace the 5.24 RPM packages with 5.26 ones:

    # yum --allowerasing distrosync
    [ ]
    Transaction Summary
    ==========================================================================================
    Upgrade    65 Packages
    Downgrade  71 Packages
    
    Total download size: 22 M
    Is this ok [y/N]: y
    [ ]
    

    After that, you can check the Perl version:

    $ perl -V:version
    version='5.26.3';
    

    And, check the enabled modules:

    # yum module list
    [ ]
    parfait              0.5              common       Parfait Module
    perl                 5.24             common [d],  Practical Extraction and Report Languag
                                          minimal      e
    perl                 5.26 [d]         common [d],  Practical Extraction and Report Languag
                                          minimal      e
    perl-App-cpanminus   1.7044 [d]       common [d]   Get, unpack, build and install CPAN mod
                                                       ules
    perl-DBD-MySQL       4.046 [d]        common [d]   A MySQL interface for Perl
    perl-DBD-Pg          3.7 [d]          common [d]   A PostgreSQL interface for Perl
    perl-DBD-SQLite      1.58 [d][e]      common [d]   SQLite DBI driver
    perl-DBI             1.641 [d][e]     common [d]   A database access API for Perl
    perl-FCGI            0.78 [d]         common [d]   FastCGI Perl bindings
    perl-YAML            1.24 [d]         common [d]   Perl parser for YAML
    php                  7.2 [d]          common [d],  PHP scripting language
                                          devel, minim
                                          al
    [ ]
    

    As you can see, we are back at square one. The perl:5.24 stream is not enabled, and perl:5.26 is the default and therefore preferred. Only the perl-DBD-SQLite:1.58 and perl-DBI:1.641 streams remain enabled. It does not matter much, because those are the only streams; nonetheless, you can reset them with yum module reset perl-DBI perl-DBD-SQLite if you like.

    Multi-context streams

    What happened with the DBD::SQLite? It's still there and working:

    $ perl -MDBI -e '$dbh=DBI->connect(q{dbi:SQLite:dbname=test}); print $dbh->selectrow_array(q{SELECT bar FROM foo}), qq{\n}'
    Hello
    

    That is possible because the perl-DBD-SQLite module is built for both Perl 5.24 and 5.26. We call these modules multi-contextual. That's the case for perl-DBD-SQLite and perl-DBI, but not for FreeRADIUS, which explains the warning you saw earlier. If you want to see these low-level details, such as which contexts are available, which dependencies are required, or which packages are contained in a module, you can use the yum module info MODULE:STREAM command.

    Afterword

    I hope this tutorial shed some light on modules -- the fresh feature of Red Hat Enterprise Linux 8 that enables us to provide you with multiple versions of software on top of one Linux platform. If you need more details, please read the documentation accompanying the product (namely, the user-space component management document and the yum(8) manual page) or ask the support team for help.

    [Sep 22, 2020] Taming the tar command -- Tips for managing backups in Linux by Gabby Taylor

    Sep 18, 2020 | www.redhat.com
    How to append or add files to a backup

    In this example, we append to an existing archive, backup.tar. This allows you to add additional files to the pre-existing backup.

    # tar -rvf backup.tar /path/to/file.xml
    

    Let's break down these options:

    -r - Append to archive
    -v - Verbose output
    -f - Name the file

    How to split a backup into smaller backups

    In this example, we split the existing backup into smaller archived files. You can pipe the tar command into the split command.

    # tar cvf - /dir | split --bytes=200MB - backup.tar
    

    Let's break down these options:

    -c - Create the archive
    -v - Verbose output
    -f - Name the file

    In this example, /dir is the directory whose content you want to back up. We are splitting the archive of the /dir folder into 200MB chunks.
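    Restoring from such pieces is the reverse operation. The sketch below demonstrates the round trip on invented paths (and shrinks the article's 200MB to 1K so the example stays tiny); split names the chunks so that a plain shell glob reassembles them in the right order:

```shell
#!/bin/sh
# Sketch: split a tar stream into chunks, then reassemble and list it.
# Paths are illustrative; "dir" stands in for whatever you are backing up.
set -e
work=$(mktemp -d)
mkdir -p "$work/dir"
echo hello > "$work/dir/file.txt"

# split the archive into chunks (tiny 1K chunks here, 200MB in the article)
tar cf - -C "$work" dir | split --bytes=1K - "$work/backup.tar."

# the chunks sort lexically (backup.tar.aa, backup.tar.ab, ...), so a glob
# concatenates them back into a valid tar stream in the original order
cat "$work"/backup.tar.* | tar tvf -
```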

    How to check the integrity of a tar.gz backup

    In this example, we check the integrity of an existing tar archive.

    To test that the gzip file is not corrupt:

    # gunzip -t backup.tar.gz
    

    To test the tar file content's integrity:

    # gunzip -c backup.tar.gz | tar t > /dev/null
    

    OR

    # tar -tvWf backup.tar
    

    Let's break down these options:

    -W - Verify an archive file
    -t - List files of archived file
    -v - Verbose output

    Use pipes and greps to locate content

    In this example, we use pipes and grep to locate content. The best option is already made for you: zgrep can be used on gzip archives.

    # zgrep <keyword> backup.tar.gz
    

    You can also use the zcat command. This shows the content of the archive, then pipes that output to grep:

    # zcat backup.tar.gz | grep <keyword>
    

    Egrep is a great one to use for regular (uncompressed) file types.

    [Sep 05, 2020] documentation - How do I get the list of exit codes (and-or return codes) and meaning for a command-utility

    Sep 05, 2020 | unix.stackexchange.com

    What exit code should I use?

    There is no "recipe" to get the meanings of an exit status of a given terminal command.

    My first attempt would be the manpage:

    user@host:~# man ls 
       Exit status:
           0      if OK,
    
           1      if minor problems (e.g., cannot access subdirectory),
    
           2      if serious trouble (e.g., cannot access command-line argument).
    

    Second : Google . See wget as an example.

    Third: the exit statuses of the shell itself, for example Bash. Bash and its builtins may use values above 125 specially: 127 for command not found, 126 for a command that was found but is not executable. For more information, see the Bash exit codes.

    Some list of sysexits on both Linux and BSD/OS X with preferable exit codes for programs (64-78) can be found in /usr/include/sysexits.h (or: man sysexits on BSD):

    0   /* successful termination */
    64  /* base value for error messages */
    64  /* command line usage error */
    65  /* data format error */
    66  /* cannot open input */
    67  /* addressee unknown */
    68  /* host name unknown */
    69  /* service unavailable */
    70  /* internal software error */
    71  /* system error (e.g., can't fork) */
    72  /* critical OS file missing */
    73  /* can't create (user) output file */
    74  /* input/output error */
    75  /* temp failure; user is invited to retry */
    76  /* remote error in protocol */
    77  /* permission denied */
    78  /* configuration error */
    78  /* maximum listed value */
    

    The above list allocates previously unused exit codes from 64-78. The range of unallotted exit codes will be further restricted in the future.

    However, the above values are mainly used in sendmail and by pretty much nobody else, so they aren't anything remotely close to a standard (as pointed out by @Gilles ).

    In the shell, the exit statuses are as follows (based on Bash):

    1 - catchall for general errors
    2 - misuse of shell builtins (according to Bash documentation)
    126 - command invoked cannot execute
    127 - command not found
    128 - invalid argument to exit
    128+n - fatal error signal "n" (e.g., kill -9 gives 128 + 9 = 137)
    130 - script terminated by Ctrl+C (128 + 2, SIGINT)
    255 - exit status out of range

    According to the above table, exit codes 1 - 2, 126 - 165, and 255 have special meanings, and should therefore be avoided for user-specified exit parameters.

    Please note that out of range exit values can result in unexpected exit codes (e.g. exit 3809 gives an exit code of 225, 3809 % 256 = 225).
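    This wrap-around is easy to verify in Bash, along with the reserved 127 code:

```shell
# Exit statuses are reduced modulo 256: 3809 % 256 = 225.
( exit 3809 ); echo "wrapped: $?"          # prints: wrapped: 225

# A command that cannot be found yields the reserved code 127
# (the command name here is deliberately nonexistent).
no_such_command_xyz 2>/dev/null; echo "not found: $?"   # prints: not found: 127
```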


    You will have to look into the code/documentation. However, the thing that comes closest to a "standardization" is errno.h . – Thorsten Staerk, Jan 22 '14

    thanks for pointing the header file.. tried looking into the documentation of a few utils.. hard time finding the exit codes, seems most will be the stderrs... – precise Jan 22 '14 at 9:13

    [Aug 10, 2020] How to Run and Control Background Processes on Linux

    Aug 10, 2020 | www.howtogeek.com

    How to Run and Control Background Processes on Linux DAVE MCKAY @thegurkha
    SEPTEMBER 24, 2019, 8:00AM EDT


    Use the Bash shell in Linux to manage foreground and background processes. You can use Bash's job control functions and signals to give you more flexibility in how you run commands. We show you how.

    All About Processes

    Whenever a program is executed in a Linux or Unix-like operating system, a process is started. "Process" is the name for the internal representation of the executing program in the computer's memory. There is a process for every active program. In fact, there is a process for nearly everything that is running on your computer. That includes the components of your graphical desktop environment (GDE) such as GNOME or KDE , and system daemons that are launched at start-up.

    Why nearly everything that is running? Well, Bash built-ins such as cd , pwd , and alias do not need to have a process launched (or "spawned") when they are run. Bash executes these commands within the instance of the Bash shell that is running in your terminal window. These commands are fast precisely because they don't need to have a process launched for them to execute. (You can type help in a terminal window to see the list of Bash built-ins.)

    Processes can be running in the foreground, in which case they take over your terminal until they have completed, or they can be run in the background. Processes that run in the background don't dominate the terminal window and you can continue to work in it. Or at least, they don't dominate the terminal window if they don't generate screen output.

    A Messy Example

    We'll start a simple ping trace running . We're going to ping the How-To Geek domain. This will execute as a foreground process.

    ping www.howtogeek.com
    

    ping www.howtogeek.com in a terminal window

    We get the expected results, scrolling down the terminal window. We can't do anything else in the terminal window while ping is running. To terminate the command hit Ctrl+C .

    Ctrl+C
    

    ping trace output in a terminal window

    The visible effect of the Ctrl+C is highlighted in the screenshot. ping gives a short summary and then stops.

    Let's repeat that. But this time we'll hit Ctrl+Z instead of Ctrl+C . The task won't be terminated. It will become a background task. We get control of the terminal window returned to us.

    ping www.howtogeek.com
    
    Ctrl+Z
    

    effect of Ctrl+Z on a command running in a terminal window

    The visible effect of hitting Ctrl+Z is highlighted in the screenshot.

    This time we are told the process is stopped. Stopped doesn't mean terminated. It's like a car at a stop sign. We haven't scrapped it and thrown it away. It's still on the road, stationary, waiting to go. The process is now a background job .

    The jobs command will list the jobs that have been started in the current terminal session. And because jobs are (inevitably) processes, we can also use the ps command to see them. Let's use both commands and compare their outputs. We'll use the T (terminal) option to list only the processes that are running in this terminal window. Note that there is no need to use a hyphen - with the T option.

    jobs
    
    ps T
    

    jobs command in a terminal window

    The jobs command tells us:

    The ps command tells us:

    These are common values for the STAT column:

    D: Uninterruptible sleep, usually waiting for I/O.
    I: Idle kernel thread.
    R: Running or runnable (on the run queue).
    S: Interruptible sleep, waiting for an event to complete.
    T: Stopped by a job control signal.
    Z: Defunct ("zombie") process, terminated but not reaped by its parent.

    The value in the STAT column can be followed by one of these extra indicators:

    <: High-priority task.
    N: Low-priority, "nice" task.
    L: Process has pages locked into memory.
    s: A session leader.
    l: Multi-threaded process.
    +: A member of the foreground process group.

    We can see that Bash has a state of Ss . The uppercase "S" tells us the Bash shell is sleeping, and it is interruptible. As soon as we need it, it will respond. The lowercase "s" tells us that the shell is a session leader.

    The ping command has a state of T . This tells us that ping has been stopped by a job control signal. In this example, that was the Ctrl+Z we used to put it into the background.

    The ps T command has a state of R , which stands for running. The + indicates that this process is a member of the foreground group. So the ps T command is running in the foreground.

    The bg Command

    The bg command is used to resume a background process. It can be used with or without a job number. If you use it without a job number, the default job is resumed. Either way, the process still runs in the background. You cannot send any input to it.

    If we issue the bg command, we will resume our ping command:

    bg
    

    bg in a terminal window

    The ping command resumes and we see the scrolling output in the terminal window once more. The name of the command that has been restarted is displayed for you. This is highlighted in the screenshot.

    resumed ping background process with output in a terminal widow

    But we have a problem. The task is running in the background and won't accept input. So how do we stop it? Ctrl+C doesn't do anything. We can see it when we type it but the background task doesn't receive those keystrokes so it keeps pinging merrily away.

    Background task ignoring Ctrl+C in a terminal window

    In fact, we're now in a strange blended mode. We can type in the terminal window but what we type is quickly swept away by the scrolling output from the ping command. Anything we type takes effect in the foreground.

    To stop our background task we need to bring it to the foreground and then stop it.

    The fg Command

    The fg command will bring a background task into the foreground. Just like the bg command, it can be used with or without a job number. Using it with a job number means it will operate on a specific job. If it is used without a job number the last command that was sent to the background is used.

    If we type fg our ping command will be brought to the foreground. The characters we type are mixed up with the output from the ping command, but they are operated on by the shell as if they had been entered on the command line as usual. And in fact, from the Bash shell's point of view, that is exactly what has happened.

    fg
    

    fg command mixed in with the output from ping in a terminal window

    And now that we have the ping command running in the foreground once more, we can use Ctrl+C to kill it.

    Ctrl+C
    

    Ctrl+C stopping the ping command in a terminal window

    We Need to Send the Right Signals

    That wasn't exactly pretty. Evidently running a process in the background works best when the process doesn't produce output and doesn't require input.

    But, messy or not, our example did accomplish:

    When you use Ctrl+C and Ctrl+Z , you are sending signals to the process. These are shorthand ways of using the kill command. There are 64 different signals that kill can send. Use kill -l at the command line to list them. kill isn't the only source of these signals. Some of them are raised automatically by other processes within the system.

    Here are some of the commonly used ones:

    SIGHUP (1): Sent to a process when the terminal that controls it is closed.
    SIGINT (2): Sent by Ctrl+C; asks the process to terminate.
    SIGQUIT (3): Sent by Ctrl+\; terminates the process and dumps core.
    SIGKILL (9): Kills the process immediately; it cannot be caught or ignored.
    SIGTERM (15): The default signal sent by kill; a polite request to terminate.
    SIGTSTP (20): Sent by Ctrl+Z; stops (pauses) the process.

    We must use the kill command to issue signals that do not have key combinations assigned to them.
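    As a small sketch, killing a background process by PID also shows the related exit-status convention of 128 + signal number (SIGTERM is 15, so a terminated process reports 143):

```shell
# Start a background process and capture its PID.
sleep 300 &
pid=$!

# Send SIGTERM explicitly with kill, then reap the process.
kill -TERM "$pid"
wait "$pid"
echo "exit status: $?"    # prints: exit status: 143 (128 + 15)
```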

    Further Job Control

    A process moved into the background by using Ctrl+Z is placed in the stopped state. We have to use the bg command to start it running again. To launch a program as a running background process is simple. Append an ampersand & to the end of the command line.

    Although it is best that background processes do not write to the terminal window, we're going to use examples that do. We need to have something in the screenshots that we can refer to. This command will start an endless loop as a background process:

    while true; do echo "How-To Geek Loop Process"; sleep 3; done &

    while true; do echo "How-To Geek Loop Process"; sleep 3; done & in a terminal window

    We are told the job number and process ID of the process. Our job number is 1, and the process id is 1979. We can use these identifiers to control the process.

    The output from our endless loop starts to appear in the terminal window. As before, we can use the command line but any commands we issue are interspersed with the output from the loop process.

    ls

    output of the background loop process interspersed with output from other commands

    To stop our process we can use jobs to remind ourselves what the job number is, and then use kill .

    jobs reports that our process is job number 1. To use that number with kill we must precede it with a percent sign % .

    jobs
    
    kill %1
    

    jobs and kill %1 in a terminal window

    kill sends the SIGTERM signal, signal number 15, to the process and it is terminated. When the Enter key is next pressed, a status of the job is shown. It lists the process as "terminated." If the process does not respond to the kill command you can take it up a notch. Use kill with SIGKILL , signal number 9. Just put -9 between the kill command and the job number.

    kill -9 %1
    
    Things We've Covered

    RELATED: How to Kill Processes From the Linux Terminal

    [Jul 30, 2020] ports tree

    Jul 30, 2020 | opensource.com

    ... On macOS, use Homebrew .

    For example, on RHEL or Fedora:

    $ sudo dnf install tmux
    
    Start tmux

    To start tmux, open a terminal and type:

    $ tmux
    

    When you do this, the obvious result is that tmux launches a new shell in the same window with a status bar along the bottom. There's more going on, though, and you can see it with this little experiment. First, do something in your current terminal to help you tell it apart from another empty terminal:

    $ echo hello
    hello

    Now press Ctrl+B followed by C on your keyboard. It might look like your work has vanished, but actually, you've created what tmux calls a window (which can be, admittedly, confusing because you probably also call the terminal you launched a window ). Thanks to tmux, you actually have two windows open, both of which you can see listed in the status bar at the bottom of tmux. You can navigate between these two windows by index number. For instance, press Ctrl+B followed by 0 to go to the initial window:

    $ echo hello
    hello

    Press Ctrl+B followed by 1 to go to the first new window you created.

    You can also "walk" through your open windows using Ctrl+B and N (for Next) or P (for Previous).

    The tmux trigger and commands

    The keyboard shortcut Ctrl+B is the tmux trigger. When you press it in a tmux session, it alerts tmux to "listen" for the next key or key combination that follows. All tmux shortcuts, therefore, are prefixed with Ctrl+B .

    You can also access a tmux command line and type tmux commands by name. For example, to create a new window the hard way, you can press Ctrl+B followed by : to enter the tmux command line. Type new-window and press Enter to create a new window. This does exactly the same thing as pressing Ctrl+B then C .

    Splitting windows into panes

    Once you have created more than one window in tmux, it's often useful to see them all in one window. You can split a window horizontally (meaning the split is horizontal, placing one window in a North position and another in a South position) or vertically (with windows located in West and East positions).

    You can split windows that have been split, so the layout is up to you and the number of lines in your terminal.

    tmux_golden-ratio.jpg

    (Seth Kenlon, CC BY-SA 4.0 )

    Sometimes things can get out of hand. You can adjust a terminal full of haphazardly split panes using these quick preset layouts (press Ctrl+B followed by Alt and a number):

    Alt+1: Even horizontal splits
    Alt+2: Even vertical splits
    Alt+3: Main pane on top, the rest below
    Alt+4: Main pane on the left, the rest on the right
    Alt+5: Tiled layout

    Switching between panes

    To get from one pane to another, press Ctrl+B followed by O (as in other ). The border around the pane changes color based on your position, and your terminal cursor changes to its active state. This method "walks" through panes in order of creation.

    Alternatively, you can use your arrow keys to navigate to a pane according to your layout. For example, if you've got two open panes divided by a horizontal split, you can press Ctrl+B followed by the Up arrow to switch from the lower pane to the top pane. Likewise, Ctrl+B followed by the Down arrow switches from the upper pane to the lower one.

    Running a command on multiple hosts with tmux

    Now that you know how to open many windows and divide them into convenient panes, you know nearly everything you need to know to run one command on multiple hosts at once. Assuming you have a layout you're happy with and each pane is connected to a separate host, you can synchronize the panes such that the input you type on your keyboard is mirrored in all panes.

    To synchronize panes, access the tmux command line with Ctrl+B followed by : , and then type setw synchronize-panes .

    Now anything you type on your keyboard appears in each pane, and each pane responds accordingly.
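    If you toggle synchronization often, the command can be bound to a key in ~/.tmux.conf; a minimal sketch (the choice of S for the binding is our assumption, not a tmux default):

```shell
# Hypothetical ~/.tmux.conf line: Ctrl+B then S toggles synchronize-panes
# (for flag options, set-window-option with no value toggles the flag).
bind-key S set-window-option synchronize-panes
```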

    Download our cheat sheet

    It's relatively easy to remember Ctrl+B to invoke tmux features, but the keys that follow can be difficult to remember at first. All built-in tmux keyboard shortcuts are available by pressing Ctrl+B followed by ? (exit the help screen with Q ). However, the help screen can be a little overwhelming for all its options, none of which are organized by task or topic. To help you remember the basic features of tmux, as well as many advanced functions not covered in this article, we've developed a tmux cheatsheet . It's free to download, so get your copy today.


    [Jul 29, 2020] Linux Commands- jobs, bg, and fg by Tyler Carrigan

    Jul 23, 2020 | www.redhat.com

    In this quick tutorial, I want to look at the jobs command and a few of the ways that we can manipulate the jobs running on our systems. In short, controlling jobs lets you suspend and resume processes started in your Linux shell.

    Jobs

    The jobs command will list all jobs started in the current shell: active, stopped, or otherwise. Before I explore the command and output, I'll create a job on my system.

    I will use the sleep job as it won't change my system in any meaningful way.

    [tcarrigan@rhel ~]$ sleep 500
    ^Z
    [1]+  Stopped                 sleep 500
    

    First, I issued the sleep command, and then I received the job number [1]. I then immediately stopped the job by using Ctrl+Z . Next, I run the jobs command to view the newly created job:

    [tcarrigan@rhel ~]$ jobs
    [1]+  Stopped                 sleep 500
    

    You can see that I have a single stopped job identified by the job number [1] .

    Other options to know for this command include:

    -l : Lists process IDs in addition to the normal information
    -n : Lists only processes that have changed status since the last notification
    -p : Lists only the process ID of the job's process group leader
    -r : Restricts output to running jobs
    -s : Restricts output to stopped jobs

    Background

    Next, I'll resume the sleep job in the background. To do this, I use the bg command. Now, the bg command has a pretty simple syntax, as seen here:

    bg [JOB_SPEC]
    

    Where JOB_SPEC can be any of the following:

    %n : Job number n
    %str : A job whose command line begins with str
    %?str : A job whose command line contains str
    %% or %+ : The current job
    %- : The previous job

    NOTE : bg and fg operate on the current job if no JOB_SPEC is provided.

    I can move this job to the background by using the job number [1] .

    [tcarrigan@rhel ~]$ bg %1
    [1]+ sleep 500 &
    

    You can see now that I have a single running job in the background.

    [tcarrigan@rhel ~]$ jobs
    [1]+  Running                 sleep 500 &
    
    Foreground

    Now, let's look at how to move a background job into the foreground. To do this, I use the fg command. The command syntax is the same for the foreground command as with the background command.

    fg [JOB_SPEC]
    

    Refer to the above bullets for details on JOB_SPEC.

    I have started a new sleep in the background:

    [tcarrigan@rhel ~]$ sleep 500 &
    [2] 5599
    

    Now, I'll move it to the foreground by using the following command:

    [tcarrigan@rhel ~]$ fg %2
    sleep 500
    

    The fg command has now brought the sleep job back into the foreground.

    The end

    While I realize that the jobs presented here were trivial, these concepts can be applied to more than just the sleep command. If you run into a situation that requires it, you now have the knowledge to move running or stopped jobs from the foreground to background and back again.

    [Jul 29, 2020] 10 Linux commands to know the system - nixCraft

    Jul 29, 2020 | www.cyberciti.biz

    10 Linux commands to know the system

    Open the terminal application and then start typing these commands to know your Linux desktop or cloud server/VM.

    1. free get free and used memory

    Are you running out of memory? Use the free command to show the total amount of free and used physical (RAM) and swap memory in the Linux system. It also displays the buffers and caches used by the kernel:
    free
    # human readable outputs
    free -h
    # use the cat command to find geeky details
    cat /proc/meminfo

    Linux display amount of free and used memory in the system
    However, the free command will not give information about memory configurations, maximum supported memory by the Linux server , and Linux memory speed . Hence, we must use the dmidecode command:
    sudo dmidecode -t memory
    Want to determine the amount of video memory under Linux, try:
    lspci | grep -i vga
    glxinfo | egrep -i 'device|memory'

    See " Linux Find Out Video Card GPU Memory RAM Size Using Command Line " and " Linux Check Memory Usage Using the CLI and GUI " for more information.

    2. hwinfo probe for hardware

    We can quickly probe for the hardware present in the Linux server or desktop:
    # Find detailed info about the Linux box
    hwinfo
    # Show only a summary #
    hwinfo --short
    # View all disks #
    hwinfo --disk
    # Get an overview #
    hwinfo --short --block
    # Find a particular disk #
    hwinfo --disk --only /dev/sda
    # Try 4 graphics card ports for monitor data #
    hwprobe=bios.ddc.ports=4 hwinfo --monitor
    # Limit info to specific devices #
    hwinfo --short --cpu --disk --listmd --gfxcard --wlan --printer

    hwinfo
    Alternatively, you may find the lshw command and inxi command useful to display your Linux hardware information:
    sudo lshw -short
    inxi -Fxz

    inxi
    inxi is a system information tool for getting system configurations and hardware. It shows system hardware, CPU, drivers, Xorg, Desktop, Kernel, gcc version(s), Processes, RAM usage, and a wide variety of other useful information [Click to enlarge]
    3. id know yourself

    Display Linux user and group information for the given USER name. If the user name is omitted, it shows information for the current user:
    id

    uid=1000(vivek) gid=1000(vivek) groups=1000(vivek),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),115(lpadmin),116(sambashare),998(lxd)
    

    See who is logged on your Linux server:
    who
    who am i
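    The id command also has script-friendly single-value forms, handy in shell conditionals:

```shell
id -u     # numeric user ID only
id -gn    # primary group name only
id -Gn    # all group names, space separated

# Example: branch on whether we are root (UID 0).
if [ "$(id -u)" -eq 0 ]; then echo "running as root"; else echo "unprivileged"; fi
```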

    4. lsblk list block storage devices

    All Linux block devices give buffered access to hardware devices and allow reading and writing blocks as per configuration. Linux block devices have names. For example, /dev/nvme0n1 for NVMe and /dev/sda for SCSI devices such as HDD/SSD. But you don't have to remember them. You can list them easily using the following syntax:
    lsblk
    # list only #
    lsblk -l
    # list only loop devices using the grep command #
    lsblk -l | grep '^loop'

    NAME          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
    md0             9:0    0   3.7G  0 raid1 /boot
    md1             9:1    0 949.1G  0 raid1 
    md1_crypt     253:0    0 949.1G  0 crypt 
    nixcraft-swap 253:1    0 119.2G  0 lvm   [SWAP]
    nixcraft-root 253:2    0 829.9G  0 lvm   /
    nvme1n1       259:0    0 953.9G  0 disk  
    nvme1n1p1     259:1    0   953M  0 part  
    nvme1n1p2     259:2    0   3.7G  0 part  
    nvme1n1p3     259:3    0 949.2G  0 part  
    nvme0n1       259:4    0 953.9G  0 disk  
    nvme0n1p1     259:5    0   953M  0 part  /boot/efi
    nvme0n1p2     259:6    0   3.7G  0 part  
    nvme0n1p3     259:7    0 949.2G  0 part
    
    5. lsb_release Linux distribution information

    Want to get distribution-specific information, such as the description of the currently installed distribution, release number, and code name?
    lsb_release -a
    No LSB modules are available.

    Distributor ID:	Ubuntu
    Description:	Ubuntu 20.04.1 LTS
    Release:	20.04
    Codename:	focal
    
    6. lscpu display info about the CPUs

    The lscpu command gathers and displays CPU architecture information in an easy-to-read format for humans including various CPU bugs:
    lscpu

    Architecture:                    x86_64
    CPU op-mode(s):                  32-bit, 64-bit
    Byte Order:                      Little Endian
    Address sizes:                   39 bits physical, 48 bits virtual
    CPU(s):                          12
    On-line CPU(s) list:             0-11
    Thread(s) per core:              2
    Core(s) per socket:              6
    Socket(s):                       1
    NUMA node(s):                    1
    Vendor ID:                       GenuineIntel
    CPU family:                      6
    Model:                           158
    Model name:                      Intel(R) Core(TM) i7-9850H CPU @ 2.60GHz
    Stepping:                        13
    CPU MHz:                         976.324
    CPU max MHz:                     4600.0000
    CPU min MHz:                     800.0000
    BogoMIPS:                        5199.98
    Virtualization:                  VT-x
    L1d cache:                       192 KiB
    L1i cache:                       192 KiB
    L2 cache:                        1.5 MiB
    L3 cache:                        12 MiB
    NUMA node0 CPU(s):               0-11
    Vulnerability Itlb multihit:     KVM: Mitigation: Split huge pages
    Vulnerability L1tf:              Not affected
    Vulnerability Mds:               Not affected
    Vulnerability Meltdown:          Not affected
    Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
    Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
    Vulnerability Spectre v2:        Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
    Vulnerability Srbds:             Mitigation; TSX disabled
    Vulnerability Tsx async abort:   Mitigation; TSX disabled
    Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_g
                                     ood nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes x
                                     save avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep 
                                     bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
    

    CPUs can be listed using the lshw command too:
    sudo lshw -C cpu

    7. lstopo display hardware topology

    Want to see the topology of the Linux server or desktop? Try:
    lstopo
    lstopo-no-graphics

    Linux show the topology of the system command
    You will see information about:

    1. NUMA memory nodes
    2. shared caches
    3. CPU packages
    4. Processor cores
    5. processor "threads" and more
    8. lsusb list usb devices

    We all use USB devices, such as external hard drives and keyboards. Run the lsusb command to display information about USB buses in the Linux system and the devices connected to them.
    lsusb
    # Want a graphical summary of USB devices connected to the system? #
    sudo usbview

    usbview
    usbview provides a graphical summary of USB devices connected to the system. Detailed information may be displayed by selecting individual devices in the tree display
    lspci list PCI devices

    We use the lspci command for displaying information about PCI buses in the system and devices connected to them:
    lspci
    lspci

    9. timedatectl view current date and time zone

    Typically we use the date command to set or get date/time information on the CLI:
    date
    However, modern Linux distros use the timedatectl command to query and change the system clock and its settings, and enable or disable time synchronization services (NTPD and co):
    timedatectl

                   Local time: Sun 2020-07-26 16:31:10 IST
               Universal time: Sun 2020-07-26 11:01:10 UTC
                     RTC time: Sun 2020-07-26 11:01:10    
                    Time zone: Asia/Kolkata (IST, +0530)  
    System clock synchronized: yes                        
                  NTP service: active                     
              RTC in local TZ: no
    
    10. w who is logged in

    Run the w command on Linux to see information about the Linux users currently on the machine, and their processes:

    $ w

    Conclusion

    And this concludes our ten Linux commands to know the system. Use them to get to know a new machine quickly and solve problems faster. Let me know about your favorite tool in the comment section below.

    [Jul 20, 2020] Direnv - Manage Project-Specific Environment Variables in Linux by Aaron Kili

    What is the value of this utility in comparison with the "environment modules" package? Is this a reinvention of the wheel?
    It allows a per-directory .envrc file which contains environment variables specific to that directory, and simply loads them. We can do this in bash without installing this new utility of unclear value.
    Did the author know about the existence of "environment modules" when he wrote it?
    Jul 10, 2020 | www.tecmint.com

    direnv is a nifty open-source extension for your shell on a UNIX operating system such as Linux and macOS. It is compiled into a single static executable and supports shells such as bash , zsh , tcsh , and fish .

    The main purpose of direnv is to allow for project-specific environment variables without cluttering ~/.profile or related shell startup files. It implements a new way to load and unload environment variables depending on the current directory.

    It is used to load 12factor apps (a methodology for building software-as-a-service apps) environment variables, create per-project isolated development environments, and also load secrets for deployment. Additionally, it can be used to build multi-version installation and management solutions similar to rbenv , pyenv , and phpenv .

    So How Does direnv Work?

    Before the shell loads a command prompt, direnv checks for the existence of a .envrc file in the current directory (which you can display using the pwd command ) and its parent directories. The checking process is swift and can't be noticed on each prompt.

    Once it finds the .envrc file with the appropriate permissions, it loads it into a bash sub-shell, captures all exported variables, and makes them available to the current shell.

    ... ... ...

    How to Use direnv in Linux Shell

    To demonstrate how direnv works, we will create a new directory called tecmint_projects and move into it.

    $ mkdir ~/tecmint_projects
    $ cd tecmint_projects/

    Next, let's echo a new variable called TEST_VARIABLE on the command line; since it has not been set yet, the value should be empty:

    $ echo $TEST_VARIABLE

    Now we will create a new .envrc file that contains Bash code that will be loaded by direnv . We add the line " export TEST_VARIABLE=tecmint " to it using the echo command and the output redirection character (>) :

    $ echo export TEST_VARIABLE=tecmint > .envrc

    By default, the security mechanism blocks the loading of the .envrc file. Since we know it is a safe file, we need to approve its content by running the following command:

    $ direnv allow .

    Now that the content of .envrc file has been allowed to load, let's check the value of TEST_VARIABLE that we set before:

    $ echo $TEST_VARIABLE

    When we exit the tecmint_projects directory, direnv will unload the variables, and if we check the value of TEST_VARIABLE once more, it should be empty:

    $ cd ..
    $ echo $TEST_VARIABLE
    
    Demonstration of How direnv Works in Linux

    Every time you move into the tecmint_projects directory, the .envrc file will be loaded as shown in the following screenshot:

    $ cd tecmint_projects/
    Loading envrc File in a Directory

    To revoke the authorization of a given .envrc , use the deny command.

    $ direnv deny .			#in current directory
    OR
    $ direnv deny /path/to/.envrc

    For more information and usage instructions, see the direnv man page:

    $ man direnv

    Additionally, direnv also ships a stdlib ( direnv-stdlib ) that comes with several functions allowing you to easily add new directories to your PATH and do so much more.
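    For comparison, the core load-on-entry idea can be sketched in a few lines of plain bash. cd_with_env is a hypothetical name, and unlike direnv this sketch has no approval mechanism and does not unload variables when you leave the directory:

```shell
# Hypothetical wrapper: change directory, then source .envrc if present.
cd_with_env() {
  command cd "$@" || return
  [ -f .envrc ] && . ./.envrc
}

mkdir -p /tmp/envdemo
echo 'export TEST_VARIABLE=tecmint' > /tmp/envdemo/.envrc
cd_with_env /tmp/envdemo
echo "$TEST_VARIABLE"    # prints: tecmint
```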

    [Jul 14, 2020] Important Linux /proc filesystem files you need to know - Enable Sysadmin

    Jul 14, 2020 | www.redhat.com

    The /proc files I find most valuable, especially for inherited system discovery, are:

    And the most valuable of those are cpuinfo and meminfo .

    Again, I'm not stating that other files don't have value, but these are the ones I've found that have the most value to me. For example, the /proc/uptime file gives you the system's uptime in seconds. For me, that's not particularly valuable. However, if I want that information, I use the uptime command that also gives me a more readable version of /proc/loadavg as well.

    By comparison:

    $ cat /proc/uptime
    46901.13 46856.69
    
    $ cat /proc/loadavg 
    0.00 0.01 0.03 2/111 2039
    
    $ uptime
     00:56:13 up 13:01,  2 users,  load average: 0.00, 0.01, 0.03
    

    I think you get the idea.
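
    If you do want a readable figure straight from /proc/uptime, a little shell arithmetic is enough. A minimal sketch, using the sample value from the output above (on a live system you could read it with secs=$(cut -d. -f1 /proc/uptime)):

    ```shell
    # convert the first /proc/uptime field (seconds) to days/hours/minutes/seconds
    secs=46901
    readable=$(printf '%dd %dh %dm %ds' $((secs/86400)) $((secs%86400/3600)) \
        $((secs%3600/60)) $((secs%60)))
    echo "$readable"
    # → 0d 13h 1m 41s
    ```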

    /proc/cmdline

    This file shows the parameters that were passed to the kernel at boot time.

    $ cat /proc/cmdline
    
    BOOT_IMAGE=/vmlinuz-3.10.0-1062.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto spectre_v2=retpoline rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_US.UTF-8
    

    The value of this information is in how the kernel was booted because any switches or special parameters will be listed here, too. And like all information under /proc , it can be found elsewhere and usually with better formatting, but /proc files are very handy when you can't remember the command or don't want to grep for something.
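
    When you do want to pull a single parameter out rather than eyeball the whole line, plain shell word-splitting works. A sketch using the sample command line from above (on a live system, substitute the contents of /proc/cmdline):

    ```shell
    # pick the root= parameter out of a boot command line
    cmdline='BOOT_IMAGE=/vmlinuz-3.10.0-1062.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto rhgb quiet'
    rootdev=""
    for word in $cmdline; do
      case $word in
        root=*) rootdev=${word#root=} ;;   # strip the "root=" prefix
      esac
    done
    echo "root device: $rootdev"
    # → root device: /dev/mapper/centos-root
    ```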

    /proc/cpuinfo

    The /proc/cpuinfo file is the first file I check when connecting to a new system. I want to know the CPU make-up of a system and this file tells me everything I need to know.

    $ cat /proc/cpuinfo 
    
    processor       : 0
    vendor_id       : GenuineIntel
    cpu family      : 6
    model           : 142
    model name      : Intel(R) Core(TM) i5-7360U CPU @ 2.30GHz
    stepping        : 9
    cpu MHz         : 2303.998
    cache size      : 4096 KB
    physical id     : 0
    siblings        : 1
    core id         : 0
    cpu cores       : 1
    apicid          : 0
    initial apicid  : 0
    fpu             : yes
    fpu_exception   : yes
    cpuid level     : 22
    wp              : yes
    flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc eagerfpu pni pclmulqdq monitor ssse3 cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx rdrand hypervisor lahf_lm abm 3dnowprefetch fsgsbase avx2 invpcid rdseed clflushopt md_clear flush_l1d
    bogomips        : 4607.99
    clflush size    : 64
    cache_alignment : 64
    address sizes   : 39 bits physical, 48 bits virtual
    power management:
    

    This is a virtual machine and only has one vCPU. If your system contains more than one CPU, the CPU numbering begins at 0 for the first CPU.
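
    Because each logical CPU gets its own "processor" stanza, counting those lines gives you the CPU count. The sketch below feeds a two-CPU sample through the pipeline; on a live system you would run grep -c '^processor' /proc/cpuinfo directly:

    ```shell
    # count logical CPUs by counting "processor" stanzas (two-CPU sample)
    ncpu=$(printf 'processor\t: 0\nprocessor\t: 1\n' | grep -c '^processor')
    echo "$ncpu logical CPUs"
    # → 2 logical CPUs
    ```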

    /proc/meminfo

    The /proc/meminfo file is the second file I check on a new system. It gives me a general and a specific look at a system's memory allocation and usage.

    $ cat /proc/meminfo 
    MemTotal:        1014824 kB
    MemFree:          643608 kB
    MemAvailable:     706648 kB
    Buffers:            1072 kB
    Cached:           185568 kB
    SwapCached:            0 kB
    Active:           187568 kB
    Inactive:          80092 kB
    Active(anon):      81332 kB
    Inactive(anon):     6604 kB
    Active(file):     106236 kB
    Inactive(file):    73488 kB
    Unevictable:           0 kB
    Mlocked:               0 kB
    ***Output truncated***
    

    I think most sysadmins either use the free or the top command to pull some of the data contained here. The /proc/meminfo file gives me a quick memory overview that I like and can redirect to another file as a snapshot.
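
    For a snapshot of just one field, awk does the job. A sketch using two sample lines from the output above; on a live system, read /proc/meminfo instead of the variable:

    ```shell
    # extract MemAvailable and convert kB to MiB (truncating)
    sample='MemTotal:        1014824 kB
    MemAvailable:     706648 kB'
    avail=$(echo "$sample" | awk '/MemAvailable:/ {printf "%d", $2/1024}')
    echo "MemAvailable: $avail MiB"
    # → MemAvailable: 690 MiB
    ```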

    /proc/version

    The /proc/version file provides more information than the related uname -a command does. Here are the two compared:

    $ cat /proc/version
    Linux version 3.10.0-1062.el7.x86_64 ([email protected]) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC) ) #1 SMP Wed Aug 7 18:08:02 UTC 2019
    
    $ uname -a
    Linux centos7 3.10.0-1062.el7.x86_64 #1 SMP Wed Aug 7 18:08:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
    
    

    Usually, the uname -a command is sufficient to give you kernel version info but for those of you who are developers or who are ultra-concerned with details, the /proc/version file is there for you.
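
    In both outputs, the kernel release is the third whitespace-separated field, so a one-field awk print extracts it. A sketch against a sample line (the builder address here is a hypothetical placeholder):

    ```shell
    # pull the kernel release out of a /proc/version-style line
    line='Linux version 3.10.0-1062.el7.x86_64 (builder@host) (gcc version 4.8.5) #1 SMP Wed Aug 7 18:08:02 UTC 2019'
    release=$(echo "$line" | awk '{print $3}')
    echo "$release"
    # → 3.10.0-1062.el7.x86_64
    ```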

    Wrapping up

    The /proc filesystem has a ton of valuable information available to system administrators who want a convenient, non-command way of getting at raw system info. As I stated earlier, there are other ways to display the information in /proc . Additionally, some of the /proc info isn't what you'd want to use for system assessment. For example, use commands such as vmstat 5 5 or iostat 5 5 to get a better picture of system performance rather than reading one of the available /proc files.

    [Jul 14, 2020] Sysadmin tales- How to keep calm and not panic when things break by Glen Newell

    Jul 10, 2020 | www.redhat.com

    Sysadmin tales: How to keep calm and not panic when things break

    When an incident occurs, resist the urge to freak out. Instead, use these tips to help you keep your cool and find a solution.

    I was working on several projects simultaneously for a small company that had been carved out of a larger one that had gone out of business. The smaller company had inherited some of the bigger company's infrastructure, and all the headaches along with it. That day, I had some additional consultants working with me on a project to migrate email service from a large proprietary onsite cluster to a cloud provider, while at the same time, I was working on reconfiguring a massive storage array.

    At some point, I clicked the wrong button. All of a sudden, I started getting calls. The CIO and the consultants were standing in front of my desk. The email servers were completely offline -- they responded, but could not access the backing storage. I didn't know it yet, but I had deleted the storage pool for the active email servers.

    My vision blurred into a tunnel, and my stomach fell into a bottomless pit. I struggled to breathe. I did my best to maintain a poker face as the executives and consultants watched impatiently. I scanned logs and messages looking for clues. I ran tests on all the components to find the source of the issue and came up with nothing. The data seemed to be gone, and panic was setting in.

    I pushed back from the desk and excused myself to use the restroom. Closing and latching the door behind me, I contemplated my fate for a moment, then splashed cold water on my face and took a deep breath. Then it dawned on me: earlier, I had set up an active mirror of that storage pool. The data was all there; I just needed to reconnect it.

    I returned to my desk and couldn't help a bit of a smirk. A couple of commands, a couple of clicks, and a sip of coffee. About five minutes of testing, and I could say, "Sorry, guys. Should be good now." The whole thing had happened in about 30 minutes.

    We've all been there

    Everyone makes mistakes, even the most senior and venerable engineers and systems administrators. We're all human. It just so happens that, as a sysadmin, a small mistake in a moment can cause very visible problems -- and panic. This is normal, though. What separates the hero from the unemployed in that moment can be just a few simple things.

    When an incident occurs, focusing on who's at fault can be tempting; blame is something we know how to do and can do something about, and it can even offer some relief if we can tell ourselves it's not our fault. But in fact, blame accomplishes nothing and can be counterproductive in a moment of crisis -- it can distract us from finding a solution to the problem, and create even more stress.

    Backups, backups, backups

    This is just one of the times when having a backup saved the day for me, and for a client. Every sysadmin I've ever worked with will tell you the same thing -- always have a backup. Do regular backups. Make backups of configurations you are working on. Make a habit of creating a backup as the first step in any project. There are some great articles here on Enable Sysadmin about the various things you can do to protect yourself.
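
    The "backup as the first step" habit can be as simple as a timestamped copy before you touch a file. A minimal sketch, using a temp file as a stand-in for a real config file:

    ```shell
    # take a timestamped safety copy before editing, then verify it
    cfg=$(mktemp)                               # stands in for a real config file
    echo "setting=1" > "$cfg"
    backup="$cfg.bak.$(date +%Y%m%d-%H%M%S)"
    cp -a "$cfg" "$backup"                      # -a preserves mode and timestamps
    if cmp -s "$cfg" "$backup"; then
      echo "backup verified"
    fi
    ```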

    Another good practice is to never work on production systems until you have tested the change. This may not always be possible, but if it is, the extra effort and time will be well worth it for the rare occasions when you have an unexpected result, so you can avoid the panic of wondering where you might have saved your most recent resume. Having a plan and being prepared can go a long way to avoiding those very stressful situations.

    Breathe in, breathe out

    The panic response in humans is related to the "fight or flight" reflex, which served our ancestors so well. It's a really useful resource for avoiding saber tooth tigers (and angry CFOs), but not so much for understanding and solving complex technical problems. Understanding that it's normal but not really helpful, we can recognize it and find a way to overcome it in the moment.

    The simplest way we can tame the impulse to black out and flee is to take a deep breath (or several). Studies have shown that simple breathing exercises and meditation can improve our general outlook and ability to focus on a specific task. There is also evidence that temperature changes can make a difference; something as simple as a splash of water on the face or an ice-cold beverage can calm a panic. These things work for me.

    Walk the path of troubleshooting, one step at a time

    Once we have convinced ourselves that the world is not going to end immediately, we can focus on solving the problem. Take the situation one element, one step at a time to find what went wrong, then take that and apply the solution(s) systematically. Again, it's important to focus on the problem and solution in front of you rather than worrying about things you can't do anything about right now or what might happen later. Remember, blame is not helpful, and that includes blaming yourself.

    Most often, when I focus on the problem, I find that I forget to panic, and I can do even better work on the solution. Many times, I have found solutions I wouldn't have seen or thought of otherwise in this state.

    Take five

    Another thing that's easy to forget is that, when you've been working on a problem, it's important to give yourself a break. Drink some water. Take a short walk. Rest your brain for a couple of minutes. Hunger, thirst, and fatigue can lead to less clear thinking and, you guessed it, panic.

    Time to face the music

    My last piece of advice -- though certainly not the least important -- is, if you are responsible for an incident, be honest about what happened. This will benefit you for both the short and long term.

    During the early years of the space program, the directors and engineers at NASA established a routine of getting together and going over what went wrong and what and how to improve for the next time. The same thing happens in the military, emergency management, and healthcare fields. It's also considered good agile/DevOps practice. Some of the smartest, highest-strung engineers, administrators, and managers I've known and worked with -- people with millions of dollars and thousands of lives in their area of responsibility -- have insisted on the importance of learning lessons from mistakes and incidents. It's a mark of a true professional to own up to mistakes and work to improve.

    It's hard to lose face, but not only will your colleagues appreciate you taking responsibility and working to improve the team, but I promise you will rest better and be able to manage the next problem better if you look at these situations as learning opportunities.

    Accidents and mistakes can't ever be avoided entirely, but hopefully, you will find some of this advice useful the next time you face an unexpected challenge.

    [ Want to test your sysadmin skills? Take a skills assessment today . ]

    [Jul 14, 2020] Linux stories- When backups saved the day - Enable Sysadmin

    Jul 14, 2020 | www.redhat.com

    I set up a backup approach that software vendors refer to as instant restore, shadow restore, preemptive restore, or a similar term. We ran incremental backup jobs every hour and restored the backups in the background to a new virtual machine. Each full hour, we had a system ready that was four hours back in time and just needed to be finished. So if I chose to restore the incremental from one hour ago, it would take less time than a complete system restore, because only the small increments had to be restored to the almost-ready virtual machine.

    And the effort paid off

    One day, I was on vacation, having a barbecue and some beer, when I got a call from my colleague telling me that the terminal server with the ERP application was broken due to a failed update and the guy who ran the update forgot to take a snapshot first.

    The only thing I needed to tell my colleague was to shut down the broken machine, find the UI of our backup/restore system, and then identify the restore job. Finally, I told him how to choose the timestamp from the last four hours when the restore should finish. The restore finished 30 minutes later, and the system was ready to be used again. We were back in action after a total of 30 minutes, and only the work from the last two hours or so was lost! Awesome! Now, back to vacation.

    [Jul 12, 2020] Testing your Bash script by David Both

    Dec 21, 2019 | opensource.com


    In the first article in this series, you created your first, very small, one-line Bash script and explored the reasons for creating shell scripts. In the second article , you began creating a fairly simple template that can be a starting point for other Bash programs and began testing it. In the third article , you created and used a simple Help function and learned about using functions and how to handle command-line options such as -h .

    This fourth and final article in the series gets into variables and initializing them as well as how to do a bit of sanity testing to help ensure the program runs under the proper conditions. Remember, the objective of this series is to build working code that will be used for a template for future Bash programming projects. The idea is to make getting started on new programming projects easy by having common elements already available in the template.

    Variables

    The Bash shell, like all programming languages, can deal with variables. A variable is a symbolic name that refers to a specific location in memory that contains a value of some sort. The value of a variable is changeable, i.e., it is variable. If you are not familiar with using variables, read my article How to program with Bash: Syntax and tools before you go further.

    Done? Great! Let's now look at some good practices when using variables.

    I always set initial values for every variable used in my scripts. You can find this in your template script immediately after the procedures, as the first part of the main program body, before it processes the options. Initializing each variable with an appropriate value can prevent errors that might occur with uninitialized variables in comparison or math operations. Placing this list of variables in one place allows you to see all of the variables that are supposed to be in the script and their initial values.
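
    A tiny illustration of why this matters: comparisons and arithmetic behave predictably when every variable has an explicit starting value (the names here are made up for the example):

    ```shell
    # initialize first, then test and increment without surprises
    count=0                       # explicit initial value
    count=$((count + 1))
    if [ "$count" -eq 1 ]; then
      state="predictable"
    fi
    echo "count=$count state=$state"
    # → count=1 state=predictable
    ```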

    Your little script has only a single variable, $option , so far. Set it by inserting the following lines as shown:


    # Main program #

    # Initialize variables
    option=""

    # Process the input options. Add options as needed. #

    Test this to ensure that everything works as it should and that nothing has broken as the result of this change.

    Constants

    Constants are variables, too -- at least they should be. Use variables wherever possible in command-line interface (CLI) programs instead of hard-coded values. Even if you think you will use a particular value (such as a directory name, a file name, or a text string) just once, create a variable and use it where you would have placed the hard-coded name.

    For example, the message printed as part of the main body of the program is a string literal, echo "Hello world!" . Change that to a variable. First, add the following statement to the variable initialization section:

    Msg="Hello world!"
    

    And now change the last line of the program from:

    echo "Hello world!"
    

    to:

    echo "$Msg"
    

    Test the results.

    Sanity checks

    Sanity checks are simply tests for conditions that need to be true in order for the program to work correctly, such as: the program must be run as the root user, or it must run on a particular distribution and release of that distro. Add a check for root as the running user in your simple program template.

    Testing that the root user is running the program is easy because a program runs as the user that launches it.

    The id command can be used to determine the numeric user ID (UID) the program is running under. It provides several bits of information when it is used without any options:

    [student@testvm1 ~]$ id
    uid=1001(student) gid=1001(student) groups=1001(student),5000(dev)

    Using the -u option returns just the user's UID, which is easily usable in your Bash program:

    [student@testvm1 ~]$ id -u
    1001
    [student@testvm1 ~]$

    Add the following function to the program. I added it after the Help procedure, but you can place it anywhere in the procedures section. The logic is that if the UID is not zero, which is always the root user's UID, the program exits:

    ################################################################################
    # Check for root. #
    ################################################################################
    CheckRoot()
    {
       if [ `id -u` != 0 ]
       then
          echo "ERROR: You must be root user to run this program"
          exit
       fi
    }

    Now, add a call to the CheckRoot procedure just before the variable initialization. Test this, first running the program as the student user:

    [student@testvm1 ~]$ ./hello
    ERROR: You must be root user to run this program
    [student@testvm1 ~]$

    then as the root user:

    [root@testvm1 student]# ./hello
    Hello world!
    [root@testvm1 student]#

    You may not always need this particular sanity test, so comment out the call to CheckRoot but leave all the code in place in the template. This way, all you need to do to use that code in a future program is to uncomment the call.

    The code

    After making the changes outlined above, your code should look like this:

    #!/usr/bin/bash
    ################################################################################
    # scriptTemplate #
    # #
    # Use this template as the beginning of a new program. Place a short #
    # description of the script here. #
    # #
    # Change History #
    # 11/11/2019 David Both Original code. This is a template for creating #
    # new Bash shell scripts. #
    # Add new history entries as needed. #
    # #
    # #
    ################################################################################
    ################################################################################
    ################################################################################
    # #
    # Copyright (C) 2007, 2019 David Both #
    # [email protected] #
    # #
    # This program is free software; you can redistribute it and/or modify #
    # it under the terms of the GNU General Public License as published by #
    # the Free Software Foundation; either version 2 of the License, or #
    # (at your option) any later version. #
    # #
    # This program is distributed in the hope that it will be useful, #
    # but WITHOUT ANY WARRANTY; without even the implied warranty of #
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the #
    # GNU General Public License for more details. #
    # #
    # You should have received a copy of the GNU General Public License #
    # along with this program; if not, write to the Free Software #
    # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA #
    # #
    ################################################################################
    ################################################################################
    ################################################################################

    ################################################################################
    # Help #
    ################################################################################
    Help ()
    {
    # Display Help
    echo "Add description of the script functions here."
    echo
    echo "Syntax: scriptTemplate [-g|h|v|V]"
    echo "options:"
    echo "g Print the GPL license notification."
    echo "h Print this Help."
    echo "v Verbose mode."
    echo "V Print software version and exit."
    echo
    }

    ################################################################################
    # Check for root. #
    ################################################################################
    CheckRoot ()
    {
    # If we are not running as root we exit the program
    if [ `id -u` != 0 ]
    then
    echo "ERROR: You must be root user to run this program"
    exit
    fi
    }

    ################################################################################
    ################################################################################
    # Main program #
    ################################################################################
    ################################################################################

    ################################################################################
    # Sanity checks #
    ################################################################################
    # Are we running as root?
    # CheckRoot

    # Initialize variables
    option=""
    Msg="Hello world!"
    ################################################################################
    # Process the input options. Add options as needed. #
    ################################################################################
    # Get the options
    while getopts ":h" option; do
       case $option in
          h) # display Help
             Help
             exit;;
          \?) # incorrect option
             echo "Error: Invalid option"
             exit;;
       esac
    done

    echo " $Msg " A final exercise

    You probably noticed that the Help function in your code refers to features that are not in the code. As a final exercise, figure out how to add those functions to the code template you created.
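
    One possible sketch of that exercise, wrapped in a function so it is easy to test: extend the getopts string to cover -g, -v, and -V. The version string and the messages here are hypothetical placeholders, not the article's answer:

    ```shell
    # handle the options the Help text advertises: -g, -h, -v, -V
    parse_opts() {
      local OPTIND=1 verbose=0 version="0.1"
      while getopts ":ghvV" option; do
        case $option in
          g) echo "GNU General Public License v2 notification"; return ;;
          h) echo "help text"; return ;;
          v) verbose=1 ;;                       # verbose mode: keep going
          V) echo "$version"; return ;;
          \?) echo "Error: Invalid option"; return 1 ;;
        esac
      done
      [ "$verbose" -eq 1 ] && echo "verbose mode on"
      echo "Hello world!"
    }
    parse_opts -V
    # → 0.1
    ```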

    Summary

    In this article, you created a couple of functions to perform a sanity test for whether your program is running as root. Your program is getting a little more complex, so testing is becoming more important and requires more test paths to be complete.

    This series looked at a very minimal Bash program and how to build a script up a bit at a time. The result is a simple template that can be the starting point for other, more useful Bash scripts and that contains useful elements that make it easy to start new scripts.

    By now, you get the idea: Compiled programs are necessary and fill a very important need. But for sysadmins, there is always a better way. Always use shell scripts to meet your job's automation needs. Shell scripts are open; their content and purpose are knowable. They can be readily modified to meet different requirements. I have never found anything that I need to do in my sysadmin role that cannot be accomplished with a shell script.

    What you have created so far in this series is just the beginning. As you write more Bash programs, you will find more bits of code that you use frequently and should be included in your program template.

    Resources

    [Jul 12, 2020] Creating a Bash script template by David Both

    Dec 19, 2019 | opensource.com

    In the first article in this series, you created a very small, one-line Bash script and explored the reasons for creating shell scripts and why they are the most efficient option for the system administrator, rather than compiled programs.

    In this second article, you will begin creating a Bash script template that can be used as a starting point for other Bash scripts. The template will ultimately contain a Help facility, a licensing statement, a number of simple functions, and some logic to deal with those options and others that might be needed for the scripts that will be based on this template.

    Why create a template?

    Like automation in general, the idea behind creating a template is to be the "lazy sysadmin." A template contains the basic components that you want in all of your scripts. It saves time compared to adding those components to every new script and makes it easy to start a new script.

    Although it can be tempting to just throw a few command-line Bash statements together into a file and make it executable, that can be counterproductive in the long run. A well-written and well-commented Bash program with a Help facility and the capability to accept command-line options provides a good starting point for sysadmins who maintain the program, which includes the programs that you write and maintain.

    The requirements

    You should always create a set of requirements for every project you do. This includes scripts, even if it is a simple list with only two or three items on it. I have been involved in many projects that either failed completely or failed to meet the customer's needs, usually due to the lack of a requirements statement or a poorly written one.

    The requirements for this Bash template are pretty simple:

    1. Create a template that can be used as the starting point for future Bash programming projects.
    2. The template should follow standard Bash programming practices.
    3. It must include:
      • A heading section that can be used to describe the function of the program and a changelog
      • A licensing statement
      • A section for functions
      • A Help function
      • A function to test whether the program user is root
      • A method for evaluating command-line options
    The basic structure

    A basic Bash script has three sections. Bash has no way to delineate sections, but the boundaries between the sections are implicit.

    That is all there is -- just three sections in the structure of any Bash program.

    Leading comments

    I always add more than this for various reasons. First, I add a couple of sections of comments immediately after the shebang. These comment sections are optional, but I find them very helpful.

    The first comment section is the program name and description and a change history. I learned this format while working at IBM, and it provides a method of documenting the long-term development of the program and any fixes applied to it. This is an important start in documenting your program.

    The second comment section is a copyright and license statement. I use GPLv2, and this seems to be a standard statement for programs licensed under GPLv2. If you use a different open source license, that is fine, but I suggest adding an explicit statement to the code to eliminate any possible confusion about licensing. Scott Peterson's article The source code is the license helps explain the reasoning behind this.

    So now the script looks like this:

    #!/bin/bash
    ################################################################################
    # scriptTemplate #
    # #
    # Use this template as the beginning of a new program. Place a short #
    # description of the script here. #
    # #
    # Change History #
    # 11/11/2019 David Both Original code. This is a template for creating #
    # new Bash shell scripts. #
    # Add new history entries as needed. #
    # #
    # #
    ################################################################################
    ################################################################################
    ################################################################################
    # #
    # Copyright (C) 2007, 2019 David Both #
    # [email protected] #
    # #
    # This program is free software; you can redistribute it and/or modify #
    # it under the terms of the GNU General Public License as published by #
    # the Free Software Foundation; either version 2 of the License, or #
    # (at your option) any later version. #
    # #
    # This program is distributed in the hope that it will be useful, #
    # but WITHOUT ANY WARRANTY; without even the implied warranty of #
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the #
    # GNU General Public License for more details. #
    # #
    # You should have received a copy of the GNU General Public License #
    # along with this program; if not, write to the Free Software #
    # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA #
    # #
    ################################################################################
    ################################################################################
    ################################################################################

    echo "hello world!"

    Run the revised program to verify that it still works as expected.

    About testing

    Now is a good time to talk about testing.

    " There is always one more bug."
    -- Lubarsky's Law of Cybernetic Entomology

    Lubarsky -- whoever that might be -- is correct. You can never find all the bugs in your code. For every bug I find, there always seems to be another that crops up, usually at a very inopportune time.

    Testing is not just about programs. It is also about verification that problems -- whether caused by hardware, software, or the seemingly endless ways users can find to break things -- that are supposed to be resolved actually are. Just as important, testing is also about ensuring that the code is easy to use and the interface makes sense to the user.

    Following a well-defined process when writing and testing shell scripts can contribute to consistent and high-quality results. My process is simple:

    1. Create a simple test plan.
    2. Start testing right at the beginning of development.
    3. Perform a final test when the code is complete.
    4. Move to production and test more.
    The test plan

    There are lots of different formats for test plans. I have worked with the full range -- from having it all in my head; to a few notes jotted down on a sheet of paper; and all the way to a complex set of forms that require a full description of each test, which functional code it would test, what the test would accomplish, and what the inputs and results should be.

    Speaking as a sysadmin who has been (but is not now) a tester, I try to take the middle ground. Having at least a short written test plan will ensure consistency from one test run to the next. How much detail you need depends upon how formal your development and test functions are.

    The sample test plan documents I found using Google were complex and intended for large organizations with very formal development and test processes. Although those test plans would be good for people with "test" in their job title, they do not apply well to sysadmins' more chaotic and time-dependent working conditions. As in most other aspects of the job, sysadmins need to be creative. So here is a short list of things to consider including in your test plan. Modify it to suit your needs:

    This list should give you some ideas for creating your test plans. Most sysadmins should keep it simple and fairly informal.

    Test early -- test often

    I always start testing my shell scripts as soon as I complete the first portion that is executable. This is true whether I am writing a short command-line program or a script that is an executable file.

    I usually start creating new programs with the shell script template. I write the code for the Help function and test it. This is usually a trivial part of the process, but it helps me get started and ensures that things in the template are working properly at the outset. At this point, it is easy to fix problems with the template portions of the script or to modify it to meet needs that the standard template does not.

    Once the template and Help function are working, I move on to creating the body of the program by adding comments to document the programming steps required to meet the program specifications. Now I start adding code to meet the requirements stated in each comment. This code will probably require adding variables that are initialized in that section of the template -- which is now becoming a shell script.

    This is where testing is more than just entering data and verifying the results. It takes a bit of extra work. Sometimes I add a command that simply prints the intermediate result of the code I just wrote and verify that. For more complex scripts, I add a -t option for "test mode." In this case, the internal test code executes only when the -t option is entered on the command line.
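The -t idea can be sketched in a few lines of Bash; the variable and option names below are illustrative, not taken from any particular script:

```shell
#!/bin/bash
# Minimal sketch of a -t "test mode" option; names are illustrative.
TestMode=0

parse_opts() {
    while getopts "t" opt; do
        case $opt in
            t) TestMode=1 ;;
        esac
    done
}

parse_opts -t          # simulate: the script was invoked with -t

result=$((6 * 7))      # stand-in for the real work of the script

# Internal test code executes only when -t was entered on the command line.
if [ "$TestMode" -eq 1 ]; then
    echo "TEST: intermediate result is $result"
fi
```

Without -t, the script runs silently; with it, the intermediate results are printed so they can be verified by eye.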

    Final testing

    After the code is complete, I go back to do a complete test of all the features and functions using known inputs to produce specific outputs. I also test some random inputs to see if the program can handle unexpected input.

    Final testing is intended to verify that the program is functioning essentially as intended. A large part of the final test is to ensure that functions that worked earlier in the development cycle have not been broken by code that was added or changed later in the cycle.

    If you have been testing the script as you add new code to it, you may think there should not be any surprises during the final test. Wrong! There are always surprises during final testing. Always. Expect those surprises, and be ready to spend time fixing them. If there were never any bugs discovered during final testing, there would be no point in doing a final test, would there?

    Testing in production

    Huh -- what?

    "Not until a program has been in production for at least six months will the most harmful error be discovered."
    -- Troutman's Programming Postulates

    Yes, testing in production is now considered normal and desirable. Having been a tester myself, this seems reasonable. "But wait! That's dangerous," you say. My experience is that it is no more dangerous than extensive and rigorous testing in a dedicated test environment. In some cases, there is no choice because there is no test environment -- only production.

    Sysadmins are no strangers to the need to test new or revised scripts in production. Anytime a script is moved into production, that becomes the ultimate test. The production environment constitutes the most critical part of that test. Nothing that testers can dream up in a test environment can fully replicate the true production environment.

    The allegedly new practice of testing in production is just the recognition of what sysadmins have known all along. The best test is production -- so long as it is not the only test.

    Fuzzy testing

    This is another of those buzzwords that initially caused me to roll my eyes. Its essential meaning is simple: have someone bang on the keys until something happens, and see how well the program handles it. But there really is more to it than that.

    Fuzzy testing is a bit like the time my son broke the code for a game in less than a minute with random input. That pretty much ended my attempts to write games for him.

    Most test plans utilize very specific input that generates a specific result or output. Regardless of whether the test defines a positive or negative outcome as a success, it is still controlled, and the inputs and results are specified and expected, such as a specific error message for a specific failure mode.

    Fuzzy testing is about dealing with randomness in all aspects of the test, such as starting conditions, very random and unexpected input, random combinations of options selected, low memory, high levels of CPU contending with other programs, multiple instances of the program under test, and any other random conditions that you can think of to apply to the tests.

    I try to do some fuzzy testing from the beginning. If the Bash script cannot deal with significant randomness in its very early stages, then it is unlikely to get better as you add more code. This is a good time to catch these problems and fix them while the code is relatively simple. A bit of fuzzy testing at each stage is also useful in locating problems before they get masked by even more code.

    After the code is completed, I like to do some more extensive fuzzy testing. Always do some fuzzy testing. I have certainly been surprised by some of the results. It is easy to test for the expected things, but users do not usually do the expected things with a script.
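A few lines of Bash are enough to drive this kind of random-input testing; the target function here is just a stand-in for the script under test:

```shell
#!/bin/bash
# Minimal fuzzing sketch: throw bursts of random printable input at a
# command and count any runs that exit non-zero. `target` is a stand-in
# for the script or function being tested.
target() {
    read -r line
    echo "got: ${line}"
}

failures=0
for i in 1 2 3 4 5; do
    # Up to 32 random printable characters drawn from /dev/urandom.
    input=$(head -c 256 /dev/urandom | tr -dc '[:print:]' | head -c 32)
    if ! target <<< "$input" > /dev/null 2>&1; then
        failures=$((failures + 1))
        echo "FAIL on input: $input"
    fi
done
echo "$failures failing inputs"
```

Any failing input gets echoed, so a surprising crash can be reproduced by feeding the same string back to the program.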

    Previews of coming attractions

    This article accomplished a little in the way of creating a template, but it mostly talked about testing. This is because testing is a critical part of creating any kind of program. In the next article in this series, you will add a basic Help function along with some code to detect and act on options, such as -h, to your Bash script template.

    Resources

    This series of articles is partially based on Volume 2, Chapter 10 of David Both's three-part Linux self-study course, Using and Administering Linux -- Zero to SysAdmin.

    [Jul 12, 2020] Testing Bash with BATS by Darin London

    Feb 21, 2019 | opensource.com

    3 comments

    Software developers writing applications in languages such as Java, Ruby, and Python have sophisticated libraries to help them maintain their software's integrity over time. They create tests that run applications through a series of executions in structured environments to ensure all of their software's aspects work as expected.

    These tests are even more powerful when they're automated in a continuous integration (CI) system, where every push to the source repository causes the tests to run, and developers are immediately notified when tests fail. This fast feedback increases developers' confidence in the functional integrity of their applications.

    The Bash Automated Testing System ( BATS ) enables developers writing Bash scripts and libraries to apply the same practices used by Java, Ruby, Python, and other developers to their Bash code.

    Installing BATS

    The BATS GitHub page includes installation instructions. There are two BATS helper libraries that provide more powerful assertions or allow overrides to the Test Anything Protocol ( TAP ) output format used by BATS. These can be installed in a standard location and sourced by all scripts. It may be more convenient to include a complete version of BATS and its helper libraries in the Git repository for each set of scripts or libraries being tested. This can be accomplished using the git submodule system.

    The following commands will install BATS and its helper libraries into the test directory in a Git repository.

    git submodule init
    git submodule add https://github.com/sstephenson/bats test/libs/bats
    git submodule add https://github.com/ztombol/bats-assert test/libs/bats-assert
    git submodule add https://github.com/ztombol/bats-support test/libs/bats-support
    git add .
    git commit -m 'installed bats'

    To clone a Git repository and install its submodules at the same time, use the --recurse-submodules flag to git clone.

    Each BATS test script must be executed by the bats executable. If you installed BATS into your source code repo's test/libs directory, you can invoke the test with:

    ./test/libs/bats/bin/bats <path to test script>
    

    Alternatively, add the following to the beginning of each of your BATS test scripts:

    #!/usr/bin/env ./test/libs/bats/bin/bats
    load 'libs/bats-support/load'
    load 'libs/bats-assert/load'

    and chmod +x <path to test script> . This will a) make them executable with the BATS installed in ./test/libs/bats and b) include these helper libraries. BATS test scripts are typically stored in the test directory and named for the script being tested, but with the .bats extension. For example, a BATS script that tests bin/build should be called test/build.bats .

    You can also run an entire set of BATS test files by passing a shell glob that matches them, e.g., ./test/lib/bats/bin/bats test/*.bats .

    Organizing libraries and scripts for BATS coverage

    Bash scripts and libraries must be organized in a way that efficiently exposes their inner workings to BATS. In general, library functions and shell scripts that run many commands when they are called or executed are not amenable to efficient BATS testing.

    For example, build.sh is a typical script that many people write. It is essentially a big pile of code. Some might even put this pile of code in a function in a library. But it's impossible to run a big pile of code in a BATS test and cover all possible types of failures it can encounter in separate test cases. The only way to test this pile of code with sufficient coverage is to break it into many small, reusable, and, most importantly, independently testable functions.

    It's straightforward to add more functions to a library. An added benefit is that some of these functions can become surprisingly useful in their own right. Once you have broken your library function into lots of smaller functions, you can source the library in your BATS test and run the functions as you would any other command to test them.

    Bash scripts must also be broken down into multiple functions, which the main part of the script should call when the script is executed. In addition, there is a very useful trick to make it much easier to test Bash scripts with BATS: Take all the code that is executed in the main part of the script and move it into a function, called something like run_main . Then, add the following to the end of the script:

    if [[ "${BASH_SOURCE[0]}" == "${0}" ]]
    then
        run_main
    fi

    This bit of extra code does something special. It makes the script behave differently when it is executed as a script than when it is brought into the environment with source . This trick enables the script to be tested the same way a library is tested, by sourcing it and testing the individual functions. For example, here is build.sh refactored for better BATS testability .
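The guard's effect can be seen in a few lines; the /tmp/guard_demo.sh file name below is made up for the demonstration:

```shell
#!/bin/bash
# Demonstrates the BASH_SOURCE guard: the same file acts as a program
# when executed and as a library when sourced. The file name and
# function body are illustrative only.
cat > /tmp/guard_demo.sh <<'EOF'
run_main() { echo "main ran"; }
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    run_main
fi
EOF

executed=$(bash /tmp/guard_demo.sh)   # guard is true: run_main executes

source /tmp/guard_demo.sh             # guard is false: nothing runs...
sourced=$(run_main)                   # ...but run_main is now defined

echo "executed: $executed / sourced: $sourced"
```

This is exactly what a BATS test does: it sources the script to get at the individual functions without triggering the main-line code.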

    Writing and running tests

    As mentioned above, BATS is a TAP-compliant testing framework with a syntax and output that will be familiar to those who have used other TAP-compliant testing suites, such as JUnit, RSpec, or Jest. Its tests are organized into individual test scripts. Test scripts are organized into one or more descriptive @test blocks that describe the unit of the application being tested. Each @test block will run a series of commands that prepares the test environment, runs the command to be tested, and makes assertions about the exit and output of the tested command. Many assertion functions are imported with the bats , bats-assert , and bats-support libraries, which are loaded into the environment at the beginning of the BATS test script. Here is a typical BATS test block:

    @test "requires CI_COMMIT_REF_SLUG environment variable" {
        unset CI_COMMIT_REF_SLUG
        assert_empty "${CI_COMMIT_REF_SLUG}"
        run some_command
        assert_failure
        assert_output --partial "CI_COMMIT_REF_SLUG"
    }

    If a BATS script includes setup and/or teardown functions, they are automatically executed by BATS before and after each test block runs. This makes it possible to create environment variables, test files, and do other things needed by one or all tests, then tear them down after each test runs. Build.bats is a full BATS test of our newly formatted build.sh script. (The mock_docker command in this test will be explained below, in the section on mocking/stubbing.)

    When the test script runs, BATS uses exec to run each @test block as a separate subprocess. This makes it possible to export environment variables and even functions in one @test block without affecting other @test blocks or polluting your current shell session. The output of a test run is a standard format that can be understood by humans and parsed or manipulated programmatically by TAP consumers. Here is an example of the output for the CI_COMMIT_REF_SLUG test block when it fails:

    ✗ requires CI_COMMIT_REF_SLUG environment variable
      (from function `assert_output' in file test/libs/bats-assert/src/assert.bash, line 231,
       in test file test/ci_deploy.bats, line 26)
      `assert_output --partial "CI_COMMIT_REF_SLUG"' failed

      -- output does not contain substring --
      substring (1 lines):
        CI_COMMIT_REF_SLUG
      output (3 lines):
        ./bin/deploy.sh: join_string_by: command not found
        oc error
        Could not login
      --

      ** Did not delete, as test failed **

    1 test, 1 failure

    Here is the output of a successful test:

    ✓ requires CI_COMMIT_REF_SLUG environment variable
    
    Helpers

    Like any shell script or library, BATS test scripts can include helper libraries to share common code across tests or enhance their capabilities. These helper libraries, such as bats-assert and bats-support , can even be tested with BATS.

    Libraries can be placed in the same test directory as the BATS scripts or in the test/libs directory if the number of files in the test directory gets unwieldy. BATS provides the load function that takes a path to a Bash file relative to the script being tested (e.g., test, in our case) and sources that file. Files must end with the .bash extension, but the path passed to the load function can't include the extension. build.bats loads the bats-assert and bats-support libraries, a small helpers.bash library, and a docker_mock.bash library (described below) with the following code placed at the beginning of the test script below the interpreter magic line:

    load 'libs/bats-support/load'
    load 'libs/bats-assert/load'
    load 'helpers'
    load 'docker_mock'

    Stubbing test input and mocking external calls

    The majority of Bash scripts and libraries execute functions and/or executables when they run. Often they are programmed to behave in specific ways based on the exit status or output ( stdout , stderr ) of these functions or executables. To properly test these scripts, it is often necessary to make fake versions of these commands that are designed to behave in a specific way during a specific test, a process called "stubbing." It may also be necessary to spy on the program being tested to ensure it calls a specific command, or it calls a specific command with specific arguments, a process called "mocking." For more on this, check out this great discussion of mocking and stubbing in Ruby RSpec, which applies to any testing system.

    The Bash shell provides tricks that can be used in your BATS test scripts to do mocking and stubbing. All require the use of the Bash export command with the -f flag to export a function that overrides the original function or executable. This must be done before the tested program is executed. Here is a simple example that overrides the cat executable:

    function cat() { echo "THIS WOULD CAT ${*}"; }
    export -f cat

    This method overrides a function in the same manner. If a test needs to override a function within the script or library being tested, it is important to source the tested script or library before the function is stubbed or mocked. Otherwise, the stub/mock will be replaced with the actual function when the script is sourced. Also, make sure to stub/mock before you run the command you're testing. Here is an example from build.bats that mocks the raise function described in build.sh to ensure a specific error message is raised by the login function:

    @test ".login raises on oc error" {
        source ${profile_script}
        function raise() { echo "${1} raised"; }
        export -f raise
        run login
        assert_failure
        assert_output -p "Could not login raised"
    }

    Normally, it is not necessary to unset a stub/mock function after the test, since export only affects the current subprocess during the exec of the current @test block. However, it is possible to mock/stub commands (e.g., cat, sed, etc.) that the BATS assert* functions use internally. These mock/stub functions must be unset before these assert commands are run, or they will not work properly. Here is an example from build.bats that mocks sed, runs the build_deployable function, and unsets sed before running any assertions:

    @test ".build_deployable prints information, runs docker build on a modified Dockerfile.production and publish_image when its not a dry_run" {
        local expected_dockerfile='Dockerfile.production'
        local application='application'
        local environment='environment'
        local expected_original_base_image="${application}"
        local expected_candidate_image="${application}-candidate:${environment}"
        local expected_deployable_image="${application}:${environment}"
        source ${profile_script}
        mock_docker build --build-arg OAUTH_CLIENT_ID --build-arg OAUTH_REDIRECT --build-arg DDS_API_BASE_URL -t "${expected_deployable_image}" -
        function publish_image() { echo "publish_image ${*}"; }
        export -f publish_image
        function sed() {
            echo "sed ${*}" >&2
            echo "FROM application-candidate:environment"
        }
        export -f sed
        run build_deployable "${application}" "${environment}"
        assert_success
        unset sed
        assert_output --regexp "sed.*${expected_dockerfile}"
        assert_output -p "Building ${expected_original_base_image} deployable ${expected_deployable_image} FROM ${expected_candidate_image}"
        assert_output -p "FROM ${expected_candidate_image} piped"
        assert_output -p "build --build-arg OAUTH_CLIENT_ID --build-arg OAUTH_REDIRECT --build-arg DDS_API_BASE_URL -t ${expected_deployable_image} -"
        assert_output -p "publish_image ${expected_deployable_image}"
    }

    Sometimes the same command, e.g., foo, will be invoked multiple times, with different arguments, in the same function being tested. These situations require the creation of a set of functions that record each invocation and respond appropriately to each expected call.

    Since this functionality is often reused in different tests, it makes sense to create a helper library that can be loaded like other libraries.
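One way to sketch such a helper (this is illustrative, not the actual docker_mock.bash implementation) is a mock that appends every invocation to a calls file, so the test can assert on each call afterwards:

```shell
#!/bin/bash
# Hypothetical multi-call mock: record every invocation of `foo` in a
# calls file, and exit non-zero only when a designated argument appears,
# so one call in a sequence can simulate a failure.
CALLS_FILE=$(mktemp)

foo() {
    echo "foo ${*}" >> "$CALLS_FILE"
    [ "$1" != "fail_me" ]   # non-zero exit only for the designated call
}
export -f foo
export CALLS_FILE

foo tag image registry/image
foo push registry/image
foo fail_me || echo "third call failed as arranged"

calls=$(wc -l < "$CALLS_FILE")
echo "recorded $calls calls"
```

After the tested function runs, assertions can grep the calls file to verify that each expected invocation happened with the expected arguments.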

    A good example is docker_mock.bash . It is loaded into build.bats and used in any test block that tests a function that calls the Docker executable. A typical test block using docker_mock looks like:

    @test ".publish_image fails if docker push fails" {
        setup_publish
        local expected_image="image"
        local expected_publishable_image="${CI_REGISTRY_IMAGE}/${expected_image}"
        source ${profile_script}
        mock_docker tag "${expected_image}" "${expected_publishable_image}"
        mock_docker push "${expected_publishable_image}" and_fail
        run publish_image "${expected_image}"
        assert_failure
        assert_output -p "tagging ${expected_image} as ${expected_publishable_image}"
        assert_output -p "tag ${expected_image} ${expected_publishable_image}"
        assert_output -p "pushing image to gitlab registry"
        assert_output -p "push ${expected_publishable_image}"
    }

    This test sets up an expectation that Docker will be called twice with different arguments. With the second call to Docker failing, it runs the tested command, then tests the exit status and expected calls to Docker.

    One aspect of BATS introduced by mock_docker.bash is the ${BATS_TMPDIR} environment variable, which BATS sets at the beginning to allow tests and helpers to create and destroy TMP files in a standard location. The mock_docker.bash library will not delete its persisted mocks file if a test fails, but it will print where it is located so it can be viewed and deleted. You may need to periodically clean old mock files out of this directory.

    One note of caution regarding mocking/stubbing: The build.bats test consciously violates a dictum of testing that states: Don't mock what you don't own! This dictum demands that calls to commands that the test's developer didn't write, like docker , cat , sed , etc., should be wrapped in their own libraries, which should be mocked in tests of scripts that use them. The wrapper libraries should then be tested without mocking the external commands.

    This is good advice and ignoring it comes with a cost. If the Docker CLI API changes, the test scripts will not detect this change, resulting in a false positive that won't manifest until the tested build.sh script runs in a production setting with the new version of Docker. Test developers must decide how stringently they want to adhere to this standard, but they should understand the tradeoffs involved with their decision.

    Conclusion

    Introducing a testing regime to any software development project creates a tradeoff between a) the increase in time and organization required to develop and maintain code and tests and b) the increased confidence developers have in the integrity of the application over its lifetime. Testing regimes may not be appropriate for all scripts and libraries.

    In general, scripts and libraries that meet one or more of the following should be tested with BATS:

    Once the decision is made to apply a testing discipline to one or more Bash scripts or libraries, BATS provides the comprehensive testing features that are available in other software development environments.

    Acknowledgment: I am indebted to Darrin Mann for introducing me to BATS testing.

    [Jul 12, 2020] 6 handy Bash scripts for Git - Opensource.com

    Jul 12, 2020 | opensource.com

    6 handy Bash scripts for Git: These six Bash scripts will make your life easier when you're working with Git repositories. 15 Jan 2020, Bob Peterson (Red Hat), 2 comments

    I wrote a bunch of Bash scripts that make my life easier when I'm working with Git repositories. Many of my colleagues say there's no need; that everything I need to do can be done with Git commands. While that may be true, I find the scripts infinitely more convenient than trying to figure out the appropriate Git command to do what I want.

    1. gitlog

    gitlog prints an abbreviated list of current patches against the master version. It prints them from oldest to newest and shows the author and description, with H for HEAD, ^ for HEAD^, 2 for HEAD~2, and so forth. For example:

    $ gitlog
    -----------------------[ recovery25 ]-----------------------
    (snip)
    11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
    10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
    9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
    8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
    7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
    6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
    5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
    4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
    3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
    2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
    ^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in dq
    H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time

    If I want to see what patches are on a different branch, I can specify an alternate branch:

    $ gitlog recovery24
    
    2. gitlog.id

    gitlog.id just prints the patch SHA1 IDs:

    $ gitlog.id
    -----------------------[ recovery25 ]-----------------------
    56908eeb6940 2ca4a6b628a1 fc64ad5d99fe 02031a00a251 f6f38da7dd18 d8546e8f0023 fc3cc1f98f6b 12c3e0cb3523 76cce178b134 6fc1dce3ab9c 1b681ab074ca 26fed8de719b 802ff51a5670 49f67a512d8c f04f20193bbb 5f6afe809d23 2030521dc70e dada79b3be94 9b19a1e08161 78a035041d3e f03da011cae2 0d2b2e068fcd 2449976aa133 57dfb5e12ccd 53abedfdcf72 6fbdda3474b3 49544a547188 187032f7a63c 6f75dae23d93 95fc2a261b00 ebfb14ded191 f653ee9e414a 0e2911cb8111 73968b76e2e3 8a3e4cb5e92c a5f2da803b5b 7c9ef68388ed 71ca19d0cba8 340d27a33895 9b3c4e6efb10 d2e8c22be39b 9563e31f8bfd ebac7a38036c f703a3c27874 a3e86d2ef30e da3c604755b0 4525c2f5b46f a06a5b7dea02 8ba93c796d5c e8b5ff851bb9

    Again, it assumes the current branch, but I can specify a different branch if I want.

    3. gitlog.id2

    gitlog.id2 is the same as gitlog.id but without the branch line at the top. This is handy for cherry-picking all patches from one branch to the current branch:

    $ # create a new branch
    $ git branch --track recovery26 origin/master
    $ # check out the new branch I just created
    $ git checkout recovery26
    $ # cherry-pick all patches from the old branch to the new one
    $ for i in `gitlog.id2 recovery25`; do git cherry-pick $i; done

    4. gitlog.grep

    gitlog.grep greps for a string within that collection of patches. For example, if I find a bug and want to fix the patch that has a reference to function inode_go_sync , I simply do:

    $ gitlog.grep inode_go_sync
    -----------------------[ recovery25 - 50 patches ]-----------------------
    (snip)
    11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
    10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
    9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
    152:-static void inode_go_sync(struct gfs2_glock *gl)
    153:+static int inode_go_sync(struct gfs2_glock *gl)
    163:@@ -296,6 +302,7 @@ static void inode_go_sync(struct gfs2_glock *gl)
    8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
    7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
    6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
    5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
    4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
    3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
    2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
    ^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in dq
    H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time

    So, now I know that patch HEAD~9 is the one that needs fixing. I use git rebase -i HEAD~10 to edit patch 9, git commit -a --amend , then git rebase --continue to make the necessary adjustments.

    5. gitbranchcmp3

    gitbranchcmp3 lets me compare my current branch to another branch, so I can compare older versions of patches to my newer versions and quickly see what's changed and what hasn't. It generates a compare script (that uses the KDE tool Kompare , which works on GNOME3, as well) to compare the patches that aren't quite the same. If there are no differences other than line numbers, it prints [SAME] . If there are only comment differences, it prints [same] (in lower case). For example:

    $ gitbranchcmp3 recovery24
    Branch recovery24 has 47 patches
    Branch recovery25 has 50 patches

    (snip)
    38 87eb6901607a 340d27a33895 [same] gfs2: drain the ail2 list after io errors
    39 90fefb577a26 9b3c4e6efb10 [same] gfs2: clean up iopen glock mess in gfs2_create_inode
    40 ba3ae06b8b0e d2e8c22be39b [same] gfs2: Do proper error checking for go_sync family of glops
    41 2ab662294329 9563e31f8bfd [SAME] gfs2: use page_offset in gfs2_page_mkwrite
    42 0adc6d817b7a ebac7a38036c [SAME] gfs2: don't use buffer_heads in gfs2_allocate_page_backing
    43 55ef1f8d0be8 f703a3c27874 [SAME] gfs2: Improve mmap write vs. punch_hole consistency
    44 de57c2f72570 a3e86d2ef30e [SAME] gfs2: Multi-block allocations in gfs2_page_mkwrite
    45 7c5305fbd68a da3c604755b0 [SAME] gfs2: Fix end-of-file handling in gfs2_page_mkwrite
    46 162524005151 4525c2f5b46f [SAME] Rafael Aquini's slab instrumentation
    47 a06a5b7dea02 [ ] GFS2: Add go_get_holdtime to gl_ops
    48 8ba93c796d5c [ ] gfs2: introduce new function remaining_hold_time and use it in dq
    49 e8b5ff851bb9 [ ] gfs2: Allow rgrps to have a minimum hold time

    Missing from recovery25:
    The missing:
    Compare script generated at: /tmp/compare_mismatches.sh

    6. gitlog.find

    Finally, I have gitlog.find , a script to help me identify where the upstream versions of my patches are and each patch's current status. It does this by matching the patch description. It also generates a compare script (again, using Kompare) to compare the current patch to the upstream counterpart:

    $ gitlog.find
    -----------------------[ recovery25 - 50 patches ]-----------------------
    (snip)
    11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
    lo 5bcb9be74b2a Bob Peterson gfs2: drain the ail2 list after io errors
    10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
    fn 2c47c1be51fb Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
    9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
    lo feb7ea639472 Bob Peterson gfs2: Do proper error checking for go_sync family of glops
    8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
    ms f3915f83e84c Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
    7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
    ms 35af80aef99b Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
    6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
    fn 39c3a948ecf6 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
    5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
    fn f53056c43063 Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
    4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
    fn 184b4e60853d Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
    3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
    Not found upstream
    2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
    Not found upstream
    ^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in dq
    Not found upstream
    H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time
    Not found upstream
    Compare script generated: /tmp/compare_upstream.sh

    The patches are shown on two lines, the first of which is your current patch, followed by the corresponding upstream patch, and a 2-character abbreviation to indicate its upstream status:

    Some of my scripts make assumptions based on how I normally work with Git. For example, when searching for upstream patches, they use the location of my local Git trees, so you will need to adjust or improve them to suit your environment. The gitlog.find script is designed to locate only GFS2 and DLM patches, so unless you are a GFS2 developer, you will want to customize it for the components that interest you.
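    As a sketch of that kind of customization (the variable names here are illustrative, not part of the published scripts), the hard-coded locations in gitlog.find could be lifted into environment variables near the top of the script:

```shell
# Hypothetical customization points for gitlog.find -- names are
# illustrative, not from the original script.
UPSTREAM_TREE="${UPSTREAM_TREE:-$HOME/linux.git}"      # where your upstream clone lives
COMPONENT_PATHS="${COMPONENT_PATHS:-fs/gfs2/ fs/dlm/}" # components you care about

# The script body would then scan each component path instead of the
# hard-coded fs/gfs2/ and fs/dlm/ directories:
for p in $COMPONENT_PATHS; do
    echo "scan $UPSTREAM_TREE for commits touching $p"
done
```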

    Source code

    Here is the source for these scripts.

    1. gitlog

    #!/bin/bash
    branch=$1

    if test "x$branch" = x; then
        branch=`git branch -a | grep "*" | cut -d ' ' -f2`
    fi

    patches=0
    tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

    LIST=`git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 | paste -s -d ' '`
    for i in $LIST; do patches=$(echo $patches + 1 | bc); done

    if [[ $branch =~ .*for-next.* ]]
    then
        start=HEAD
        #start=origin/for-next
    else
        start=origin/master
    fi

    tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

    /usr/bin/echo "-----------------------[" $branch "]-----------------------"
    patches=$(echo $patches - 1 | bc);
    for i in $LIST; do
        if [ $patches -eq 1 ]; then
            cnt=" ^"
        elif [ $patches -eq 0 ]; then
            cnt=" H"
        else
            if [ $patches -lt 10 ]; then
                cnt=" $patches"
            else
                cnt="$patches"
            fi
        fi
        /usr/bin/git show --abbrev-commit -s --pretty=format:"$cnt %h %<|(32)%an %s%n" $i
        patches=$(echo $patches - 1 | bc)
    done
    #git log --reverse --abbrev-commit --pretty=format:"%h %<|(32)%an %s" $tracking..$branch
    #git log --reverse --abbrev-commit --pretty=format:"%h %<|(32)%an %s" ^origin/master ^linux-gfs2/for-next $branch

    2. gitlog.id

    #!/bin/bash
    branch=$1

    if test "x$branch" = x; then
        branch=`git branch -a | grep "*" | cut -d ' ' -f2`
    fi

    tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

    /usr/bin/echo "-----------------------[" $branch "]-----------------------"
    git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 | paste -s -d ' '

    3. gitlog.id2

    #!/bin/bash
    branch=$1

    if test "x$branch" = x; then
        branch=`git branch -a | grep "*" | cut -d ' ' -f2`
    fi

    tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`
    git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 | paste -s -d ' '

    4. gitlog.grep

    #!/bin/bash
    param1=$1
    param2=$2

    if test "x$param2" = x; then
        branch=`git branch -a | grep "*" | cut -d ' ' -f2`
        string=$param1
    else
        branch=$param1
        string=$param2
    fi

    patches=0
    tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

    LIST=`git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 | paste -s -d ' '`
    for i in $LIST; do patches=$(echo $patches + 1 | bc); done
    /usr/bin/echo "-----------------------[" $branch "-" $patches "patches ]-----------------------"
    patches=$(echo $patches - 1 | bc);
    for i in $LIST; do
        if [ $patches -eq 1 ]; then
            cnt=" ^"
        elif [ $patches -eq 0 ]; then
            cnt=" H"
        else
            if [ $patches -lt 10 ]; then
                cnt=" $patches"
            else
                cnt="$patches"
            fi
        fi
        /usr/bin/git show --abbrev-commit -s --pretty=format:"$cnt %h %<|(32)%an %s" $i
        /usr/bin/git show --pretty=email --patch-with-stat $i | grep -n "$string"
        patches=$(echo $patches - 1 | bc)
    done

    5. gitbranchcmp3

    #!/bin/bash
    #
    # gitbranchcmp3 <old branch> [<new_branch>]
    #
    oldbranch=$1
    newbranch=$2
    script=/tmp/compare_mismatches.sh

    /usr/bin/rm -f $script
    echo "#!/bin/bash" > $script
    /usr/bin/chmod 755 $script
    echo "# Generated by gitbranchcmp3.sh" >> $script
    echo "# Run this script to compare the mismatched patches" >> $script
    echo " " >> $script
    echo "function compare_them()" >> $script
    echo "{" >> $script
    echo "   git show --pretty=email --patch-with-stat \$1 > /tmp/gronk1" >> $script
    echo "   git show --pretty=email --patch-with-stat \$2 > /tmp/gronk2" >> $script
    echo "   kompare /tmp/gronk1 /tmp/gronk2" >> $script
    echo "}" >> $script
    echo " " >> $script

    if test "x$newbranch" = x; then
        newbranch=`git branch -a | grep "*" | cut -d ' ' -f2`
    fi

    tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

    declare -a oldsha1s=(`git log --reverse --abbrev-commit --pretty=oneline $tracking..$oldbranch | cut -d ' ' -f1 | paste -s -d ' '`)
    declare -a newsha1s=(`git log --reverse --abbrev-commit --pretty=oneline $tracking..$newbranch | cut -d ' ' -f1 | paste -s -d ' '`)

    #echo "old: " $oldsha1s
    oldcount=${#oldsha1s[@]}
    echo "Branch $oldbranch has $oldcount patches"
    oldcount=$(echo $oldcount - 1 | bc)
    #for o in `seq 0 ${#oldsha1s[@]}`; do
    #    echo -n ${oldsha1s[$o]} " "
    #    desc=`git show $i | head -5 | tail -1|cut -b5-`
    #done

    #echo "new: " $newsha1s
    newcount=${#newsha1s[@]}
    echo "Branch $newbranch has $newcount patches"
    newcount=$(echo $newcount - 1 | bc)
    #for o in `seq 0 ${#newsha1s[@]}`; do
    #    echo -n ${newsha1s[$o]} " "
    #    desc=`git show $i | head -5 | tail -1|cut -b5-`
    #done
    echo

    for new in `seq 0 $newcount`; do
        newsha=${newsha1s[$new]}
        newdesc=`git show $newsha | head -5 | tail -1 | cut -b5-`
        oldsha=" "
        same="[    ]"
        for old in `seq 0 $oldcount`; do
            if test "${oldsha1s[$old]}" = "match"; then
                continue;
            fi
            olddesc=`git show ${oldsha1s[$old]} | head -5 | tail -1 | cut -b5-`
            if test "$olddesc" = "$newdesc"; then
                oldsha=${oldsha1s[$old]}
                #echo $oldsha
                git show $oldsha | tail -n +2 | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk1
                git show $newsha | tail -n +2 | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk2
                diff /tmp/gronk1 /tmp/gronk2 &> /dev/null
                if [ $? -eq 0 ]; then
                    # No differences
                    same="[SAME]"
                    oldsha1s[$old]="match"
                    break
                fi
                git show $oldsha | sed -n '/diff/,$p' | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk1
                git show $newsha | sed -n '/diff/,$p' | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk2
                diff /tmp/gronk1 /tmp/gronk2 &> /dev/null
                if [ $? -eq 0 ]; then
                    # Differences in comments only
                    same="[same]"
                    oldsha1s[$old]="match"
                    break
                fi
                oldsha1s[$old]="match"
                echo "compare_them $oldsha $newsha" >> $script
            fi
        done
        echo "$new $oldsha $newsha $same $newdesc"
    done

    echo
    echo "Missing from $newbranch:"
    the_missing=""
    # Now run through the olds we haven't matched up
    for old in `seq 0 $oldcount`; do
        if test ${oldsha1s[$old]} != "match"; then
            olddesc=`git show ${oldsha1s[$old]} | head -5 | tail -1 | cut -b5-`
            echo "${oldsha1s[$old]} $olddesc"
            the_missing=`echo "$the_missing ${oldsha1s[$old]}"`
        fi
    done

    echo "The missing: " $the_missing
    echo "Compare script generated at: $script"
    #git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 |paste -s -d ' '

    6. gitlog.find

    #!/bin/bash
    #
    # Find the upstream equivalent patch
    #
    # gitlog.find
    #
    cwd=$PWD
    param1=$1
    ubranch=$2
    patches=0
    script=/tmp/compare_upstream.sh
    echo "#!/bin/bash" > $script
    /usr/bin/chmod 755 $script
    echo "# Generated by gitbranchcmp3.sh" >> $script
    echo "# Run this script to compare the mismatched patches" >> $script
    echo " " >> $script
    echo "function compare_them()" >> $script
    echo "{" >> $script
    echo "   cwd=$PWD" >> $script
    echo "   git show --pretty=email --patch-with-stat \$2 > /tmp/gronk2" >> $script
    echo "   cd ~/linux.git/fs/gfs2" >> $script
    echo "   git show --pretty=email --patch-with-stat \$1 > /tmp/gronk1" >> $script
    echo "   cd $cwd" >> $script
    echo "   kompare /tmp/gronk1 /tmp/gronk2" >> $script
    echo "}" >> $script
    echo " " >> $script

    #echo "Gathering upstream patch info. Please wait."
    branch=`git branch -a | grep "*" | cut -d ' ' -f2`
    tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

    cd ~/linux.git
    if test "X${ubranch}" = "X"; then
        ubranch=`git branch -a | grep "*" | cut -d ' ' -f2`
    fi
    utracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`
    #
    # gather a list of gfs2 patches from master just in case we can't find it
    #
    #git log --abbrev-commit --pretty=format:" %h %<|(32)%an %s" master |grep -i -e "gfs2" -e "dlm" > /tmp/gronk
    git log --reverse --abbrev-commit --pretty=format:"ms %h %<|(32)%an %s" master fs/gfs2/ > /tmp/gronk.gfs2
    # ms = in Linus's master
    git log --reverse --abbrev-commit --pretty=format:"ms %h %<|(32)%an %s" master fs/dlm/ > /tmp/gronk.dlm

    cd $cwd
    LIST=`git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 | paste -s -d ' '`
    for i in $LIST; do patches=$(echo $patches + 1 | bc); done
    /usr/bin/echo "-----------------------[" $branch "-" $patches "patches ]-----------------------"
    patches=$(echo $patches - 1 | bc);
    for i in $LIST; do
        if [ $patches -eq 1 ]; then
            cnt=" ^"
        elif [ $patches -eq 0 ]; then
            cnt=" H"
        else
            if [ $patches -lt 10 ]; then
                cnt=" $patches"
            else
                cnt="$patches"
            fi
        fi
        /usr/bin/git show --abbrev-commit -s --pretty=format:"$cnt %h %<|(32)%an %s" $i
        desc=`/usr/bin/git show --abbrev-commit -s --pretty=format:"%s" $i`
        cd ~/linux.git
        cmp=1
        up_eq=`git log --reverse --abbrev-commit --pretty=format:"lo %h %<|(32)%an %s" $utracking..$ubranch | grep "$desc"`
        # lo = in local for-next
        if test "X$up_eq" = "X"; then
            up_eq=`git log --reverse --abbrev-commit --pretty=format:"fn %h %<|(32)%an %s" master..$utracking | grep "$desc"`
            # fn = in for-next for next merge window
            if test "X$up_eq" = "X"; then
                up_eq=`grep "$desc" /tmp/gronk.gfs2`
                if test "X$up_eq" = "X"; then
                    up_eq=`grep "$desc" /tmp/gronk.dlm`
                    if test "X$up_eq" = "X"; then
                        up_eq="   Not found upstream"
                        cmp=0
                    fi
                fi
            fi
        fi
        echo "$up_eq"
        if [ $cmp -eq 1 ]; then
            UP_SHA1=`echo $up_eq | cut -d ' ' -f2`
            echo "compare_them $UP_SHA1 $i" >> $script
        fi
        cd $cwd
        patches=$(echo $patches - 1 | bc)
    done
    echo "Compare script generated: $script"
    [Jul 12, 2020] How to add a Help facility to your Bash program - Opensource.com

    Jul 12, 2020 | opensource.com

    How to add a Help facility to your Bash program

    In the third article in this series, learn about using functions as you create a simple Help facility for your Bash script. 20 Dec 2019, David Both (Correspondent)

    In the first article in this series, you created a very small, one-line Bash script and explored the reasons for creating shell scripts and why they are the most efficient option for the system administrator, rather than compiled programs. In the second article , you began the task of creating a fairly simple template that you can use as a starting point for other Bash programs, then explored ways to test it.

    This third of the four articles in this series explains how to create and use a simple Help function. While creating your Help facility, you will also learn about using functions and how to handle command-line options such as -h .

    Why Help?

    Even fairly simple Bash programs should have some sort of Help facility, even if it is fairly rudimentary. Many of the Bash shell programs I write are used so infrequently that I forget the exact syntax of the command I need. Others are so complex that I need to review the options and arguments even when I use them frequently.

    Having a built-in Help function allows you to view those things without having to inspect the code itself. A good and complete Help facility is also a part of program documentation.

    About functions

    Shell functions are lists of Bash program statements that are stored in the shell's environment and can be executed, like any other command, by typing their name at the command line. Shell functions may also be known as procedures or subroutines, depending upon which other programming language you are using.

    Functions are called in scripts or from the command-line interface (CLI) by using their names, just as you would for any other command. In a CLI program or a script, the commands in the function execute when they are called, then the program flow sequence returns to the calling entity, and the next series of program statements in that entity executes.

    The syntax of a function is:

    FunctionName(){program statements}
    

    Explore this by creating a simple function at the CLI. (The function is stored in the shell environment for the shell instance in which it is created.) You are going to create a function called hw , which stands for "hello world." Enter the following code at the CLI and press Enter . Then enter hw as you would any other shell command:

    [student@testvm1 ~]$ hw(){ echo "Hi there kiddo"; }
    [student@testvm1 ~]$ hw
    Hi there kiddo
    [student@testvm1 ~]$

    OK, so I am a little tired of the standard "Hello world" starter. Now, list all of the currently defined functions. There are a lot of them, so I am showing just the new hw function. When it is called from the command line or within a program, a function performs its programmed task and then exits and returns control to the calling entity, the command line, or the next Bash program statement in a script after the calling statement:

    [student@testvm1 ~]$ declare -f | less
    <snip>
    hw ()
    {
        echo "Hi there kiddo"
    }
    <snip>

    Remove that function because you do not need it anymore. You can do that with the unset command:

    [student@testvm1 ~]$ unset -f hw ; hw
    bash: hw: command not found
    [student@testvm1 ~]$

    Creating the Help function

    Open the hello program in an editor and add the Help function below to the hello program code after the copyright statement but before the echo "Hello world!" statement. This Help function will display a short description of the program, a syntax diagram, and short descriptions of the available options. Add a call to the Help function to test it and some comment lines that provide a visual demarcation between the functions and the main portion of the program:

    ################################################################################
    # Help #
    ################################################################################
    Help ()
    {
    # Display Help
    echo "Add description of the script functions here."
    echo
    echo "Syntax: scriptTemplate [-g|h|v|V]"
    echo "options:"
    echo "g Print the GPL license notification."
    echo "h Print this Help."
    echo "v Verbose mode."
    echo "V Print software version and exit."
    echo
    }

    ################################################################################
    ################################################################################
    # Main program #
    ################################################################################
    ################################################################################

    Help
    echo "Hello world!"

    The options described in this Help function are typical for the programs I write, although none are in the code yet. Run the program to test it:

    [student@testvm1 ~]$ ./hello
    Add description of the script functions here.

    Syntax: scriptTemplate [-g|h|v|V]
    options:
    g     Print the GPL license notification.
    h     Print this Help.
    v     Verbose mode.
    V     Print software version and exit.

    Hello world!
    [student@testvm1 ~]$

    Because you have not added any logic to display Help only when you need it, the program will always display the Help. Since the function is working correctly, read on to add some logic to display the Help only when the -h option is used when you invoke the program at the command line.

    Handling options

    A Bash script's ability to handle command-line options such as -h gives some powerful capabilities to direct the program and modify what it does. In the case of the -h option, you want the program to print the Help text to the terminal session and then quit without running the rest of the program. The ability to process options entered at the command line can be added to the Bash script using the while command (see How to program with Bash: Loops to learn more about while) in conjunction with the getopts and case commands.

    The getopts command reads any and all options specified at the command line and creates a list of those options. In the code below, the while command loops through the list of options by setting the variable $option for each. The case statement is used to evaluate each option in turn and execute the statements in the corresponding stanza. The while statement will continue to evaluate the list of options until they have all been processed or it encounters an exit statement, which terminates the program.

    Be sure to delete the Help function call just before the echo "Hello world!" statement so that the main body of the program now looks like this:

    ################################################################################
    ################################################################################
    # Main program #
    ################################################################################
    ################################################################################
    ################################################################################
    # Process the input options. Add options as needed. #
    ################################################################################
    # Get the options
    while getopts ":h" option; do
    case $option in
    h ) # display Help
    Help
    exit ;;
    esac
    done

    echo "Hello world!"

    Notice the double semicolon at the end of the exit statement in the case option for -h . This is required for each option added to this case statement to delineate the end of each option.
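    A minimal sketch makes the pattern visible (the parse function, options, and messages here are invented for the example, not part of the hello program): every stanza, including the last, ends with ;; to mark where that option's statements stop.

```shell
# Sketch of a multi-option case statement inside getopts; each stanza
# is terminated by ;; so Bash knows where one option's actions end.
parse() {
    local OPTIND option          # reset getopts state on every call
    while getopts ":hv" option; do
        case $option in
            h) # display Help
                echo "help requested";;
            v) # enable verbose mode
                echo "verbose mode on";;
        esac
    done
}

parse -v -h   # prints "verbose mode on" then "help requested"
```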

    Testing

    Testing is now a little more complex. You need to test your program with a number of different options -- and no options -- to see how it responds. First, test with no options to ensure that it prints "Hello world!" as it should:

    [student@testvm1 ~]$ ./hello
    Hello world!

    That works, so now test the logic that displays the Help text:

    [student@testvm1 ~]$ ./hello -h
    Add description of the script functions here.

    Syntax: scriptTemplate [-g|h|t|v|V]
    options:
    g     Print the GPL license notification.
    h     Print this Help.
    v     Verbose mode.
    V     Print software version and exit.

    That works as expected, so try some testing to see what happens when you enter some unexpected options:

    [student@testvm1 ~]$ ./hello -x
    Hello world!
    [student@testvm1 ~]$ ./hello -q
    Hello world!
    [student@testvm1 ~]$ ./hello -lkjsahdf
    Add description of the script functions here.

    Syntax: scriptTemplate [-g|h|t|v|V]
    options:
    g     Print the GPL license notification.
    h     Print this Help.
    v     Verbose mode.
    V     Print software version and exit.

    [student@testvm1 ~]$

    The program simply ignores any options it has no specific response for, without generating any errors. But notice the last entry (with -lkjsahdf as the options): because there is an h in that string of options, the program recognizes it and prints the Help text. This testing shows that the program has no way to detect invalid input and terminate when it occurs.

    You can add another case stanza to the case statement to match any option that doesn't have an explicit match. This general case will match anything you have not provided a specific match for. The case statement now looks like this, with the catch-all match of \? as the last case. Any additional specific cases must precede this final one:

    while getopts ":h" option; do
    case $option in
    h ) # display Help
    Help
    exit ;;
    \? ) # incorrect option
    echo "Error: Invalid option"
    exit ;;
    esac
    done

    Test the program again using the same options as before and see how it works now.

    Where you are

    You have accomplished a good amount in this article by adding the capability to process command-line options and a Help procedure. Your Bash script now looks like this:

    #!/usr/bin/bash
    ################################################################################
    # scriptTemplate #
    # #
    # Use this template as the beginning of a new program. Place a short #
    # description of the script here. #
    # #
    # Change History #
    # 11/11/2019 David Both Original code. This is a template for creating #
    # new Bash shell scripts. #
    # Add new history entries as needed. #
    # #
    # #
    ################################################################################
    ################################################################################
    ################################################################################
    # #
    # Copyright (C) 2007, 2019 David Both #
    # [email protected] #
    # #
    # This program is free software; you can redistribute it and/or modify #
    # it under the terms of the GNU General Public License as published by #
    # the Free Software Foundation; either version 2 of the License, or #
    # (at your option) any later version. #
    # #
    # This program is distributed in the hope that it will be useful, #
    # but WITHOUT ANY WARRANTY; without even the implied warranty of #
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the #
    # GNU General Public License for more details. #
    # #
    # You should have received a copy of the GNU General Public License #
    # along with this program; if not, write to the Free Software #
    # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA #
    # #
    ################################################################################
    ################################################################################
    ################################################################################

    ################################################################################
    # Help #
    ################################################################################
    Help ()
    {
    # Display Help
    echo "Add description of the script functions here."
    echo
    echo "Syntax: scriptTemplate [-g|h|t|v|V]"
    echo "options:"
    echo "g Print the GPL license notification."
    echo "h Print this Help."
    echo "v Verbose mode."
    echo "V Print software version and exit."
    echo
    }

    ################################################################################
    ################################################################################
    # Main program #
    ################################################################################
    ################################################################################
    ################################################################################
    # Process the input options. Add options as needed. #
    ################################################################################
    # Get the options
    while getopts ":h" option; do
    case $option in
    h ) # display Help
    Help
    exit ;;
    \? ) # incorrect option
    echo "Error: Invalid option"
    exit ;;
    esac
    done

    echo "Hello world!"

    Be sure to test this version of the program very thoroughly. Use random inputs and see what happens. You should also try testing valid and invalid options without using the dash ( - ) in front.
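    The dash matters because getopts only treats words beginning with - as options and stops parsing at the first non-option word. A small sketch (the check function is invented for this illustration, not part of the hello program) shows the difference:

```shell
# getopts stops at the first word that does not start with a dash,
# so a bare "h" is a positional argument, never the -h option.
check() {
    local OPTIND option          # reset getopts state on every call
    while getopts ":h" option; do
        case $option in
            h) echo "Help"; return;;
        esac
    done
    echo "Hello world!"
}

check -h   # prints "Help"
check h    # prints "Hello world!" -- the bare h is ignored by getopts
```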

    Next time

    In this article, you added a Help function as well as the ability to process command-line options to display it selectively. The program is getting a little more complex, so testing is becoming more important and requires more test paths in order to be complete.

    The next article will look at initializing variables and doing a bit of sanity checking to ensure that the program will run under the correct set of conditions.

    [Jul 12, 2020] Navigating the Bash shell with pushd and popd - Opensource.com

    Notable quotes:
    "... directory stack ..."
    Jul 12, 2020 | opensource.com

    Navigating the Bash shell with pushd and popd

    Pushd and popd are the fastest navigational commands you've never heard of. 07 Aug 2019, Seth Kenlon (Red Hat)

    The pushd and popd commands are built-in features of the Bash shell to help you "bookmark" directories for quick navigation between locations on your hard drive. You might already feel that the terminal is an impossibly fast way to navigate your computer; in just a few key presses, you can go anywhere on your hard drive, attached storage, or network share. But that speed can break down when you find yourself going back and forth between directories, or when you get "lost" within your filesystem. Those are precisely the problems pushd and popd can help you solve.

    pushd

    At its most basic, pushd is a lot like cd . It takes you from one directory to another. Assume you have a directory called one , which contains a subdirectory called two , which contains a subdirectory called three , and so on. If your current working directory is one , then you can move to two or three or anywhere with the cd command:

    $ pwd
    one
    $ cd two/three
    $ pwd
    three

    You can do the same with pushd :

    $ pwd
    one
    $ pushd two/three
    ~/one/two/three ~/one
    $ pwd
    three

    The end result of pushd is the same as cd , but there's an additional intermediate result: pushd echos your destination directory and your point of origin. This is your directory stack , and it is what makes pushd unique.

    Stacks

    A stack, in computer terminology, refers to a collection of elements. In the context of this command, the elements are directories you have recently visited by using the pushd command. You can think of it as a history or a breadcrumb trail.

    You can move all over your filesystem with pushd ; each time, your previous and new locations are added to the stack:

    $ pushd four
    ~/one/two/three/four ~/one/two/three ~/one
    $ pushd five
    ~/one/two/three/four/five ~/one/two/three/four ~/one/two/three ~/one

    Navigating the stack

    Once you've built up a stack, you can use it as a collection of bookmarks or fast-travel waypoints. For instance, assume that during a session you're doing a lot of work within the ~/one/two/three/four/five directory structure of this example. You know you've been to one recently, but you can't remember where it's located in your pushd stack. You can view your stack with the +0 (that's a plus sign followed by a zero) argument, which tells pushd not to change to any directory in your stack, but also prompts pushd to echo your current stack:

    $ pushd +0
    ~/one/two/three/four ~/one/two/three ~/one ~/one/two/three/four/five

    Alternatively, you can view the stack with the dirs command, and you can see the index number for each directory by using the -v option:

    $ dirs -v
    0  ~/one/two/three/four
    1  ~/one/two/three
    2  ~/one
    3  ~/one/two/three/four/five

    The first entry in your stack is your current location. You can confirm that with pwd as usual:

    $ pwd
    ~/one/two/three/four

    Starting at 0 (your current location and the first entry of your stack), the second element in your stack is ~/one , which is your desired destination. You can move forward in your stack using the +2 option:

    $ pushd +2
    ~/one ~/one/two/three/four/five ~/one/two/three/four ~/one/two/three
    $ pwd
    ~/one

    This changes your working directory to ~/one and also has shifted the stack so that your new location is at the front.

    You can also move backward in your stack. For instance, to quickly get to ~/one/two/three given the example output, you can move back by one, keeping in mind that pushd starts with 0:

    $ pushd -0
    ~/one/two/three ~/one ~/one/two/three/four/five ~/one/two/three/four

    Adding to the stack

    You can continue to navigate your stack in this way, and it will remain a static listing of your recently visited directories. If you want to add a directory, just provide the directory's path. If a directory is new to the stack, it's added to the list just as you'd expect:

    $ pushd /tmp
    /tmp ~/one/two/three ~/one ~/one/two/three/four/five ~/one/two/three/four

    But if it already exists in the stack, it's added a second time:

    $ pushd ~/one
    ~/one /tmp ~/one/two/three ~/one ~/one/two/three/four/five ~/one/two/three/four

    While the stack is often used as a list of directories you want quick access to, it is really a true history of where you've been. If you don't want a directory added redundantly to the stack, you must use the +N and -N notation.
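    A short session sketches the difference (the directory names are invented for this example, and it assumes bash, where pushd and dirs are builtins): pushd with a path pushes a new entry, while pushd +N only rotates the entries that already exist.

```shell
# "pushd <dir>" adds an entry to the stack; "pushd +N" merely rotates
# the existing stack, so revisiting a directory via +N adds nothing.
cd /tmp
mkdir -p stack_a stack_b
pushd stack_a > /dev/null     # stack: /tmp/stack_a /tmp
pushd ../stack_b > /dev/null  # stack: /tmp/stack_b /tmp/stack_a /tmp
pushd +2 > /dev/null          # rotate back to /tmp -- still 3 entries
dirs -p                       # prints the stack, one entry per line
```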

    Removing directories from the stack

    Your stack is, obviously, not immutable. You can add to it with pushd or remove items from it with popd .

    For instance, assume you have just used pushd to add ~/one to your stack, making ~/one your current working directory. To remove the first (or "zeroeth," if you prefer) element:

    $ pwd
    ~/one
    $ popd +0
    /tmp ~/one/two/three ~/one ~/one/two/three/four/five ~/one/two/three/four
    $ pwd
    ~/one

    Of course, you can remove any element, starting your count at 0:

    $ pwd
    ~/one
    $ popd +2
    /tmp ~/one/two/three ~/one/two/three/four/five ~/one/two/three/four
    $ pwd
    ~/one

    You can also use popd from the back of your stack, again starting with 0. For example, to remove the final directory from your stack:

    $ popd -0
    /tmp ~/one/two/three ~/one/two/three/four/five

    When used like this, popd does not change your working directory. It only manipulates your stack.

    Navigating with popd

    The default behavior of popd , given no arguments, is to remove the first (zeroeth) item from your stack and make the next item your current working directory.

    This is most useful as a quick-change command, when you are, for instance, working in two different directories and just need to duck away for a moment to some other location. You don't have to think about your directory stack if you don't need an elaborate history:

    $ pwd
    ~/one
    $ pushd ~/one/two/three/four/five
    $ popd
    $ pwd
    ~/one

    You're also not required to use pushd and popd in rapid succession. If you use pushd to visit a different location, then get distracted for three hours chasing down a bug or doing research, you'll find your directory stack patiently waiting (unless you've ended your terminal session):

    $ pwd
    ~/one
    $ pushd /tmp
    $ cd {/etc,/var,/usr}; sleep 2001
    [...]
    $ popd
    $ pwd
    ~/one

    Pushd and popd in the real world

    The pushd and popd commands are surprisingly useful. Once you learn them, you'll find excuses to put them to good use, and you'll get familiar with the concept of the directory stack. Getting comfortable with pushd was what helped me understand git stash , which is entirely unrelated to pushd but similar in conceptual intangibility.

    Using pushd and popd in shell scripts can be tempting, but generally, it's probably best to avoid them. They aren't portable outside of Bash and Zsh, and they can be obtuse when you're re-reading a script ( pushd +3 is less clear than cd $HOME/$DIR/$TMP or similar).

    Aside from these warnings, if you're a regular Bash or Zsh user, then you can and should try pushd and popd.
    7 Comments


    matt on 07 Aug 2019

    Thank you for the write-up for pushd and popd. I gotta remember to use these when I'm jumping around directories a lot. I got hung up on a pushd example because my development work using arrays differentiates between the index and the count. In my experience, with a zero-based array of A, B, C: C has an index of 2 and is also the third element. C would not be considered the second element, because that would be confusing its index and its count.

    Seth Kenlon on 07 Aug 2019

    Interesting point, Matt. The difference between count and index had not occurred to me, but I'll try to internalise it. It's a great distinction, so thanks for bringing it up!

    Greg Pittman on 07 Aug 2019

    This looks like a recipe for confusing myself.

    Seth Kenlon on 07 Aug 2019

    It can be, but start out simple: use pushd to change to one directory, and then use popd to go back to the original. Sort of a single-use bookmark system.

    Then, once you're comfortable with pushd and popd, branch out and delve into the stack.

    A tcsh shell I used at an old job didn't have pushd and popd, so I used to have functions in my .cshrc to mimic just the back-and-forth use.

    Jake on 07 Aug 2019

    "dirs" can be also used to view the stack. "dirs -v" helpfully numbers each directory with its index.

    Seth Kenlon on 07 Aug 2019

    Thanks for that tip, Jake. I arguably should have included that in the article, but I wanted to try to stay focused on just the two {push,pop}d commands. Didn't occur to me to casually mention one use of dirs as you have here, so I've added it for posterity.

    There's so much in the Bash man and info pages to talk about!

    other_Stu on 11 Aug 2019

    I use "pushd ." (dot for current directory) quite often. It works like a working-directory bookmark for when you are several subdirectories deep somewhere and need to cd to a couple of other places to do some work or check something.
    And you can use the cd command with your DIRSTACK as well, thanks to tilde expansion.
    cd ~+3 will take you to the same directory as pushd +3 would.
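
    The dirs -v and tilde-expansion tricks mentioned in these comments can be seen end to end in a short session. A minimal sketch, assuming a Bash shell; the specific directories are illustrative:

```shell
# Build a small stack, list it with indexes, then let tilde expansion
# resolve a stack entry without rotating the stack.
cd /tmp
pushd /usr > /dev/null      # stack: /usr /tmp
pushd /var > /dev/null      # stack: /var /usr /tmp
dirs -v                     # numbered view of the stack, one entry per line
echo ~+2                    # expands to the entry at index 2, here /tmp
```

    cd ~+2 would change into that entry directly; unlike pushd +2, it leaves the order of the stack alone.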

    [Jul 12, 2020] An introduction to parameter expansion in Bash - Opensource.com

    Jul 12, 2020 | opensource.com

    An introduction to parameter expansion in Bash

    Get started with this quick how-to guide on expansion modifiers that transform Bash variables and other parameters into powerful tools beyond simple value stores.

    13 Jun 2017 | James Pannacciulli


    In Bash, entities that store values are known as parameters. Their values can be strings or arrays with regular syntax, or they can be integers or associative arrays when special attributes are set with the declare built-in. There are three types of parameters: positional parameters, special parameters, and variables.
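
    A minimal illustration of the three types of parameters; the variable name and values here are arbitrary:

```shell
set -- alpha beta          # assign the positional parameters $1 and $2
echo "$1"                  # positional parameter: alpha
echo "$#"                  # special parameter: count of positionals, 2
greeting="hello"           # variable
echo "$greeting"
```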


    For the sake of brevity, this article will focus on a few classes of expansion methods available for string variables, though these methods apply equally to other types of parameters.

    Variable assignment and unadulterated expansion

    When assigning a variable, its name must consist solely of alphanumeric and underscore characters, and it may not begin with a numeral. There may be no spaces around the equal sign; the name must immediately precede it and the value must immediately follow:

    $ variable_1="my content"
    

    Storing a value in a variable is only useful if we recall that value later; in Bash, substituting a parameter reference with its value is called expansion. To expand a parameter, simply precede the name with the $ character, optionally enclosing the name in braces:

    $ echo $variable_1 ${variable_1}
    my content my content
    

    Crucially, as shown in the above example, expansion occurs before the command is called, so the command never sees the variable name, only the text passed to it as an argument that resulted from the expansion. Furthermore, parameter expansion occurs before word splitting; if the result of expansion contains spaces, the expansion should be quoted to preserve parameter integrity, if desired:

    $ printf "%s\n" ${variable_1}
    my
    content
    $ printf "%s\n" "${variable_1}"
    my content
    
    Parameter expansion modifiers

    Parameter expansion goes well beyond simple interpolation, however. Inside the braces of a parameter expansion, certain operators, along with their arguments, may be placed after the name, before the closing brace. These operators may invoke conditional, subset, substring, substitution, indirection, prefix listing, element counting, and case modification expansion methods, modifying the result of the expansion. With the exception of the reassignment operators ( = and := ), these operators only affect the expansion of the parameter without modifying the parameter's value for subsequent expansions.

    About conditional, substring, and substitution parameter expansion operators Conditional parameter expansion

    Conditional parameter expansion allows branching on whether the parameter is unset, empty, or has content. Based on these conditions, the parameter can be expanded to its value, a default value, or an alternate value; throw a customizable error; or reassign the parameter to a default value. The following table shows the conditional parameter expansions -- each row shows a parameter expansion using an operator to potentially modify the expansion, with the columns showing the result of that expansion given the parameter's status as indicated in the column headers. Operators with the ':' prefix treat parameters with empty values as if they were unset.

    parameter expansion    unset var    var=""       var="gnu"
    ${var-default}         default      --           gnu
    ${var:-default}        default      default      gnu
    ${var+alternate}       --           alternate    alternate
    ${var:+alternate}      --           --           alternate
    ${var?error}           error        --           gnu
    ${var:?error}          error        error        gnu

    The = and := operators in the table function identically to - and :- , respectively, except that the = variants rebind the variable to the result of the expansion.

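    The rebinding behavior of := can be demonstrated in a couple of lines; the default value here is arbitrary:

```shell
unset config
echo "${config:=/etc/default}"   # expands to the default AND assigns it
echo "$config"                   # the assignment persists afterwards
```
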
    As an example, let's try opening a user's editor on a file specified by the OUT_FILE variable. If either the EDITOR environment variable or our OUT_FILE variable is not specified, we will have a problem. Using a conditional expansion, we can ensure that when the EDITOR variable is expanded, we get the specified value or at least a sane default:

    $ echo ${EDITOR}
    /usr/bin/vi
    $ echo ${EDITOR:-$(which nano)}
    /usr/bin/vi
    $ unset EDITOR
    $ echo ${EDITOR:-$(which nano)}
    /usr/bin/nano
    

    Building on the above, we can run the editor command and abort with a helpful error at runtime if there's no filename specified:

    $ ${EDITOR:-$(which nano)} ${OUT_FILE:?Missing filename}
    bash: OUT_FILE: Missing filename
    
    Substring parameter expansion

    Parameters can be expanded to just part of their contents, either by offset or by removing content matching a pattern. When specifying a substring offset, a length may optionally be specified. If running Bash version 4.2 or greater, negative numbers may be used as offsets from the end of the string. Note the parentheses used around the negative offset, which ensure that Bash does not parse the expansion as having the conditional default expansion operator from above:

    $ location="CA 90095"
    $ echo "Zip Code: ${location:3}"
    Zip Code: 90095
    $ echo "Zip Code: ${location:(-5)}"
    Zip Code: 90095
    $ echo "State: ${location:0:2}"
    State: CA
    

    Another way to take a substring is to remove characters from the string matching a pattern, either from the left edge with the # and ## operators or from the right edge with the % and %% operators. A useful mnemonic is that # appears to the left of a comment and % appears to the right of a number. When the operator is doubled, it matches greedily, as opposed to the single version, which removes the smallest set of characters matching the pattern.

    var="open source"

    parameter expansion     result
    (offset of 5, length of 4)
    ${var:offset}           source
    ${var:offset:length}    sour
    (pattern of *o?)
    ${var#pattern}          en source
    ${var##pattern}         rce
    (pattern of ?e*)
    ${var%pattern}          open sour
    ${var%%pattern}         o

    The pattern matching used is the same as with filename globbing: * matches zero or more of any character, ? matches exactly one of any character, and [...] brackets introduce a character class match against a single character, supporting negation ( ^ ) as well as the POSIX character classes, e.g. [[:alnum:]] . By excising characters from our string in this manner, we can take a substring without first knowing the offset of the data we need:

    $ echo $PATH
    /usr/local/bin:/usr/bin:/bin
    $ echo "Lowest priority in PATH: ${PATH##*:}"
    Lowest priority in PATH: /bin
    $ echo "Everything except lowest priority: ${PATH%:*}"
    Everything except lowest priority: /usr/local/bin:/usr/bin
    $ echo "Highest priority in PATH: ${PATH%%:*}"
    Highest priority in PATH: /usr/local/bin
    
    Substitution in parameter expansion

    The same types of patterns are used for substitution in parameter expansion. Substitution is introduced with the / or // operators, followed by two arguments separated by another / representing the pattern and the string to substitute. The pattern matching is always greedy, so the doubled version of the operator, in this case, causes all matches of the pattern to be replaced in the variable's expansion, while the singleton version replaces only the leftmost.

    var="free and open"

    parameter expansion       result
    (pattern of [[:space:]], string of _)
    ${var/pattern/string}     free_and open
    ${var//pattern/string}    free_and_open

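    The table's rows correspond to commands like these:

```shell
var="free and open"
echo "${var/[[:space:]]/_}"    # singleton /: replaces only the leftmost match
echo "${var//[[:space:]]/_}"   # doubled //: replaces every match
```
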
    The wealth of parameter expansion modifiers transforms Bash variables and other parameters into powerful tools beyond simple value stores. At the very least, it is important to understand how parameter expansion works when reading Bash scripts, but I suspect that, not unlike myself, many of you will enjoy the conciseness and expressiveness that these expansion modifiers bring to your scripts as well as your interactive sessions.

    [Jul 12, 2020] A sysadmin's guide to Bash by Maxim Burgerhout

    Jul 12, 2020 | opensource.com

    Use aliases

    ... ... ...

    Make your root prompt stand out

    ... ... ...

    Control your history

    You probably know that when you press the Up arrow key in Bash, you can see and reuse all (well, many) of your previous commands. That is because those commands have been saved to a file called .bash_history in your home directory. That history file comes with a bunch of settings and commands that can be very useful.

    First, you can view your entire recent command history by typing history , or you can limit it to your last 30 commands by typing history 30 . But that's pretty vanilla. You have more control over what Bash saves and how it saves it.

    For example, if you add the following to your .bashrc, any commands that start with a space will not be saved to the history list:

    HISTCONTROL=ignorespace
    

    This can be useful if you need to pass a password to a command in plaintext. (Yes, that is horrible, but it still happens.)
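
    A quick way to see ignorespace in action without touching your .bashrc is a throwaway shell with history recording switched on. A sketch; the echoed strings are placeholders:

```shell
set -o history                        # enable history recording in this shell
HISTCONTROL=ignorespace
echo "saved" > /dev/null              # recorded in the history list
 echo "hidden secret" > /dev/null     # leading space: not recorded
history | tail -n 4                   # the hidden command does not appear
```
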

    If you don't want every repetition of a frequently executed command to clutter your history, use:

    HISTCONTROL=ignorespace:erasedups
    

    With this, every time you use a command, all its previous occurrences are removed from the history file, and only the last invocation is saved to your history list.

    A history setting I particularly like is the HISTTIMEFORMAT setting. This will prepend all entries in your history file with a timestamp. For example, I use:

    HISTTIMEFORMAT="%F %T  "
    

    When I type history 5 , I get nice, complete information, like this:

    1009  2018-06-11 22:34:38  cat /etc/hosts
    1010  2018-06-11 22:34:40  echo $foo
    1011  2018-06-11 22:34:42  echo $bar
    1012  2018-06-11 22:34:44  ssh myhost
    1013  2018-06-11 22:34:55  vim .bashrc

    That makes it a lot easier to browse my command history and find the one I used two days ago to set up an SSH tunnel to my home lab (which I forget again, and again, and again).

    Best Bash practices

    I'll wrap this up with my top 11 list of the best (or good, at least; I don't claim omniscience) practices when writing Bash scripts.

    1. Bash scripts can become complicated and comments are cheap. If you wonder whether to add a comment, add a comment. If you return after the weekend and have to spend time figuring out what you were trying to do last Friday, you forgot to add a comment.

    2. Wrap all your variable names in curly braces, like ${myvariable} . Making this a habit makes things like ${variable}_suffix possible and improves consistency throughout your scripts.
    3. Do not use backticks when evaluating an expression; use the $() syntax instead. So use:
       for file in $(ls); do

       not

       for file in `ls`; do

       The former option is nestable, more easily readable, and keeps the general sysadmin population happy. Do not use backticks.
    4. Consistency is good. Pick one style of doing things and stick with it throughout your script. Obviously, I would prefer if people picked the $() syntax over backticks and wrapped their variables in curly braces. I would prefer it if people used two or four spaces -- not tabs -- to indent, but even if you choose to do it wrong, do it wrong consistently.
    5. Use the proper shebang for a Bash script. As I'm writing Bash scripts with the intention of only executing them with Bash, I most often use #!/usr/bin/bash as my shebang. Do not use #!/bin/sh or #!/usr/bin/sh . Your script will execute, but it'll run in compatibility mode -- potentially with lots of unintended side effects. (Unless, of course, compatibility mode is what you want.)
    6. When comparing strings, it's a good idea to quote your variables in if-statements, because if your variable is empty, Bash will throw an error for lines like these:
       if [ ${myvar} == "foo" ]; then
         echo "bar"
       fi

       And will evaluate to false for a line like this:

       if [ "${myvar}" == "foo" ]; then
         echo "bar"
       fi

       Also, if you are unsure about the contents of a variable (e.g., when you are parsing user input), quote your variables to prevent interpretation of some special characters and to make sure the variable is considered a single word, even if it contains whitespace.
    7. This is a matter of taste, I guess, but I prefer using the double equals sign ( == ) even when comparing strings in Bash. It's a matter of consistency, and even though -- for string comparisons only -- a single equals sign will work, my mind immediately goes "single equals is an assignment operator!"
    8. Use proper exit codes. Make sure that if your script fails to do something, you present the user with a written failure message (preferably with a way to fix the problem) and send a non-zero exit code:
       # we have failed
       echo "Process has failed to complete, you need to manually restart the whatchamacallit"
       exit 1

       This makes it easier to programmatically call your script from yet another script and verify its successful completion.
    9. Use Bash's built-in mechanisms to provide sane defaults for your variables, or throw errors if variables you expect to be defined are not defined:
       # this sets the value of $myvar to redhat, and prints 'redhat'
       echo ${myvar:=redhat}
       # this throws an error reading 'The variable myvar is undefined, dear reader' if $myvar is undefined
       ${myvar:?The variable myvar is undefined, dear reader}
    10. Especially if you are writing a large script, and especially if you work on that large script with others, consider using the local keyword when defining variables inside functions. The local keyword creates a local variable, that is, one that's visible only within that function. This limits the possibility of clashing variables.
    11. Every sysadmin must do it sometimes: debug something on a console, either a real one in a data center or a virtual one through a virtualization platform. If you have to debug a script that way, you will thank yourself for remembering this: Do not make the lines in your scripts too long!

      On many systems, the default width of a console is still 80 characters. If you need to debug a script on a console and that script has very long lines, you'll be a sad panda. Besides, a script with shorter lines -- the default is still 80 characters -- is a lot easier to read and understand in a normal editor, too!

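    Several of the points above -- the $() syntax, curly braces and quoting, a :? guard, a local variable, and a non-zero return on failure -- can be combined in one small function. A sketch only; the function name and directories are illustrative:

```shell
count_files() {
    # :? aborts with a message if no argument was passed
    local dir="${1:?count_files: missing directory argument}"
    if [ ! -d "${dir}" ]; then
        echo "Directory ${dir} does not exist, nothing to count" >&2
        return 1                          # proper non-zero exit status
    fi
    # $() is nestable and readable; quoting keeps paths with spaces intact
    find "${dir}" -type f | wc -l
}

demo_dir=$(mktemp -d)
touch "${demo_dir}/a" "${demo_dir}/b"
count_files "${demo_dir}"                 # counts the two files just created
```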

    I truly love Bash. I can spend hours writing about it or exchanging nice tricks with fellow enthusiasts. Make sure you drop your favorites in the comments!

    [Jul 12, 2020] My favorite Bash hacks

    Jan 09, 2020 | opensource.com

    Get the highlights in your inbox every week.

    When you work with computers all day, it's fantastic to find repeatable commands and tag them for easy use later on. They all sit there, tucked away in ~/.bashrc (or ~/.zshrc for Zsh users ), waiting to help improve your day!

    In this article, I share some of my favorite helper commands for things I forget a lot, in the hope that they will save you some heartache over time, too.

    Say when it's over

    When I'm using longer-running commands, I often multitask and then have to go back and check if the action has completed. But not anymore, with this helpful invocation of say (this is on MacOS; change for your local equivalent):

    function looooooooong {
    START=$(date +%s.%N)
    "$@"
    EXIT_CODE=$?
    END=$(date +%s.%N)
    DIFF=$(echo "$END - $START" | bc)
    RES=$(python -c "diff = $DIFF; min = int(diff / 60); print('%s min' % min)")
    result="$1 completed in $RES, exit code $EXIT_CODE."
    echo -e "\n⏰ $result"
    ( say -r 250 "$result" 2>&1 > /dev/null & )
    }

    This command marks the start and end time of a command, calculates the minutes it takes, and speaks the command invoked, the time taken, and the exit code. I find this super helpful when a simple console bell just won't do.
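
    On Linux, where say isn't available, the same idea can be sketched with the terminal bell and Bash's built-in SECONDS counter, dropping the python and bc dependencies. The function name and message format here are made up for illustration:

```shell
notify_when_done() {
    local start=$SECONDS
    "$@"                              # run the wrapped command, quoting intact
    local exit_code=$?
    local diff=$(( SECONDS - start ))
    # \a rings the terminal bell instead of speaking the result
    printf '\a%s completed in %d min, exit code %d.\n' \
        "$1" $(( diff / 60 )) "$exit_code"
    return "$exit_code"
}

notify_when_done sleep 1
```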

    ... ... ...

    There are many Docker commands, but there are even more docker-compose commands. I used to forget the --rm flag, but not anymore with these useful aliases:

    alias dc="docker-compose"
    alias dcr="docker-compose run --rm"
    alias dcb="docker-compose run --rm --build"

    gcurl helper for Google Cloud

    This one is relatively new to me, but it's heavily documented. gcurl is an alias to ensure you get all the correct flags when using local curl commands with authentication headers when working with Google Cloud APIs.

    Git and ~/.gitignore

    I work a lot in Git, so I have a special section dedicated to Git helpers.

    One of my most useful helpers is one I use to clone GitHub repos. Instead of having to run:

    git clone git@github.com:org/repo /Users/glasnt/git/org/repo

    I set up a clone function:

    clone(){
    echo "Cloning $1 to ~/git/$1"
    cd ~/git
    git clone git@github.com:$1 $1
    cd $1
    }

    ... ... ...

    [Jul 11, 2020] Own your own content Vallard's Blog

    Jul 11, 2020 | benincosa.com

    Posted on December 31, 2019 by Vallard

    Reading Hacker News this morning, I came across this article on how the old Internet has died because we trusted all our content to Facebook and Google. While hyperbole abounds in the headline, and there are plenty of internet things out there that aren't owned by Google or Facebook (including this AWS-free blog), it is true that much of the information and content is in the hands of a giant ad-serving service and a social echo chamber. (Well, that is probably too harsh.)

    I heard this advice many years ago: you should own your own content. While there isn't much value in my trivial or obscure blog that nobody reads, it matters to me, and it is the reason I've run it on my own software, on my own servers, for 10+ years. This blog, for example, runs on open source WordPress, on a Linux server hosted by a friend, and is managed by me as I log in and make changes.

    But of course, that is silly! Why not publish on Medium like everyone else? Or publish on someone else's service? Isn't that the point of the internet? Maybe. But in another sense, to me, the point is freedom. Freedom to express, do what I want, say what I will with no restrictions. The ability to own what I say and freedom from others monetizing me directly. There's no walled garden and anyone can access the content I write in my own little funzone.

    While that may seem like ridiculousness, to me it's part of my hobby, and something I enjoy. In the next decade, whether this blog remains up or is shut down, is not dependent upon the fates of Google, Facebook, Amazon, nor Apple. It's dependent upon me, whether I want it up or not. If I change my views, I can delete it. It won't just sit on the Internet because someone else's terms of service agreement changed. I am in control, I am in charge. That to me is important and the reason I run this blog, don't use other people's services, and why I advocate for owning your own content.

    [Jul 10, 2020] I/O reporting from the Linux command line by Tyler Carrigan

    Jul 10, 2020 | www.redhat.com

    I/O reporting from the Linux command line Learn the iostat tool, its common command-line flags and options, and how to use it to better understand input/output performance in Linux.

    Posted: July 9, 2020 | by Tyler Carrigan (Red Hat)


    If you have followed my posts here at Enable Sysadmin, you know that I previously worked as a storage support engineer. One of my many tasks in that role was to help customers replicate backups from their production environments to dedicated backup storage arrays. Many times, customers would contact me concerned about the speed of the data transfer from production to storage.

    Now, if you have ever worked in support, you know that there can be many causes for a symptom. However, the throughput of a system can have huge implications for massive data transfers. If all is well, we are talking hours, if not... I have seen a single replication job take months.

    We know that Linux is loaded with helpful tools for all manner of issues. For input/output monitoring, we use the iostat command. iostat is part of the sysstat package and is not installed on all distributions by default.

    Installation and base run

    I am using Red Hat Enterprise Linux 8 here and have included the install output below.

    [ Want to try out Red Hat Enterprise Linux? Download it now for free. ]

    NOTE: the command runs automatically after installation.

    [root@rhel ~]# iostat
    bash: iostat: command not found...
    Install package 'sysstat' to provide command 'iostat'? [N/y] y
        
        
     * Waiting in queue... 
    The following packages have to be installed:
    lm_sensors-libs-3.4.0-21.20180522git70f7e08.el8.x86_64    Lm_sensors core libraries
    sysstat-11.7.3-2.el8.x86_64    Collection of performance monitoring tools for Linux
    Proceed with changes? [N/y] y
        
        
     * Waiting in queue... 
     * Waiting for authentication... 
     * Waiting in queue... 
     * Downloading packages... 
     * Requesting data... 
     * Testing changes... 
     * Installing packages... 
    Linux 4.18.0-193.1.2.el8_2.x86_64 (rhel.test)     06/17/2020     _x86_64_    (4 CPU)
        
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               2.17    0.05    4.09    0.65    0.00   83.03
        
    Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
    sda             206.70      8014.01      1411.92    1224862     215798
    sdc               0.69        20.39         0.00       3116          0
    sdb               0.69        20.39         0.00       3116          0
    dm-0            215.54      7917.78      1449.15    1210154     221488
    dm-1              0.64        14.52         0.00       2220          0
    

    If you run the base command without options, iostat displays CPU usage information, as well as I/O stats for each partition on the system. The output includes totals, as well as per-second values, for both read and write operations. Also, note that the tps field is the total number of transfers per second issued to a specific device.

    The practical application is this: if you know what hardware is used, then you know what parameters it should be operating within. Once you combine this knowledge with the output of iostat , you can make changes to your system accordingly.

    Interval runs

    It can be useful in troubleshooting or data gathering phases to have a report run at a given interval. To do this, run the command with the interval (in seconds) at the end:

    [root@rhel ~]# iostat -m 10
    Linux 4.18.0-193.1.2.el8_2.x86_64 (rhel.test)     06/17/2020     _x86_64_    (4 CPU)
        
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               0.94    0.05    0.35    0.04    0.00   98.62
        
    Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
    sda              12.18         0.44         0.12       1212        323
    sdc               0.04         0.00         0.00          3          0
    sdb               0.04         0.00         0.00          3          0
    dm-0             12.79         0.43         0.12       1197        329
    dm-1              0.04         0.00         0.00          2          0
        
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               0.24    0.00    0.15    0.00    0.00   99.61
        
    Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
    sda               0.00         0.00         0.00          0          0
    sdc               0.00         0.00         0.00          0          0
    sdb               0.00         0.00         0.00          0          0
    dm-0              0.00         0.00         0.00          0          0
    dm-1              0.00         0.00         0.00          0          0
        
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               0.20    0.00    0.18    0.00    0.00   99.62
        
    Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
    sda               0.50         0.00         0.00          0          0
    sdc               0.00         0.00         0.00          0          0
    sdb               0.00         0.00         0.00          0          0
    dm-0              0.50         0.00         0.00          0          0
    dm-1              0.00         0.00         0.00          0          0
    

    The above output is from a 30-second run.

    You must use Ctrl + C to exit the run.

    Easy reading

    To clean up the output and make it easier to digest, use the following options:

    -m changes the output to megabytes, which is a bit easier to read and is usually better understood by customers or managers.

    [root@rhel ~]# iostat -m
    Linux 4.18.0-193.1.2.el8_2.x86_64 (rhel.test)     06/17/2020     _x86_64_    (4 CPU)
        
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               1.51    0.09    0.55    0.07    0.00   97.77
        
    Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
    sda              22.23         0.81         0.21       1211        322
    sdc               0.07         0.00         0.00          3          0
    sdb               0.07         0.00         0.00          3          0
    dm-0             23.34         0.80         0.22       1197        328
    dm-1              0.07         0.00         0.00          2          0
    

    -p allows you to specify a particular device to focus on. You can combine this option with -m for a nice and tidy look at a particularly concerning device and its partitions.

    [root@rhel ~]# iostat -m -p sda
    Linux 4.18.0-193.1.2.el8_2.x86_64 (rhel.test)     06/17/2020     _x86_64_    (4 CPU)
        
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               1.19    0.07    0.45    0.06    0.00   98.24
        
    Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
    sda              17.27         0.63         0.17       1211        322
    sda2             16.83         0.62         0.17       1202        320
    sda1              0.10         0.00         0.00          7          2
    
    Advanced stats

    If the default values just aren't getting you the information you need, you can use the -x flag to view extended statistics:

    [root@rhel ~]# iostat -m -p sda -x 
    Linux 4.18.0-193.1.2.el8_2.x86_64 (rhel.test)     06/17/2020     _x86_64_    (4 CPU)
        
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               1.06    0.06    0.40    0.05    0.00   98.43
        
    Device            r/s     w/s     rMB/s     wMB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
    sda             12.20    2.83      0.54      0.14     0.02     0.92   0.16  24.64    0.55    0.50   0.00    45.58    52.37   0.46   0.69
    sda2            12.10    2.54      0.54      0.14     0.02     0.92   0.16  26.64    0.55    0.47   0.00    45.60    57.88   0.47   0.68
    sda1             0.08    0.01      0.00      0.00     0.00     0.00   0.00  23.53    0.44    1.00   0.00    43.00   161.08   0.57   0.00
    

    Some of the options to pay attention to here are:

    r/s and w/s - the number of read and write requests completed per second
    r_await and w_await - the average time (in milliseconds) for read and write requests to be served, including time spent waiting in the queue
    aqu-sz - the average queue length of requests issued to the device
    %util - the percentage of elapsed time the device was busy servicing requests; values approaching 100% suggest saturation

    There are other values present, but these are the ones to look out for.

    Shutting down

    This article covers just about everything you need to get started with iostat. If you have other questions or need further explanations of options, be sure to check out the man page or your preferred search engine. For other Linux tips and tricks, keep an eye on Enable Sysadmin!

    [Jul 09, 2020] Bash Shortcuts Gem by Ian Miell

    Jul 09, 2020 | zwischenzugs.com

    TL;DR

    These commands can tell you what key bindings you have in your bash shell by default.

    bind -P | grep 'can be'
    stty -a | grep ' = ..;'
    
    Background

    I'd always wondered what key strokes did what in bash – I'd picked up some well-known ones (CTRL-r, CTRL-v, CTRL-d etc) from bugging people when I saw them being used, but always wondered whether there was a list of these I could easily get and comprehend. I found some, but always forgot where it was when I needed them, and couldn't remember many of them anyway.

    Then, while debugging a problem with tab completion in 'here' documents, I stumbled across bind.

    bind and stty

    'bind' is a bash builtin, which means it's not a program like awk or grep, but is picked up and handled by the bash program itself.

    It manages the various key bindings in the bash shell, covering everything from autocomplete to transposing two characters on the command line. You can read all about it in the bash man page (in the builtins section, near the end).

    Bind is not responsible for all the key bindings in your shell – running stty will show the ones that apply to the terminal:

    stty -a | grep ' = ..;'
    

    These take precedence and can be confusing if you've tried to bind the same thing in your shell! A further source of confusion is that in stty output '^D' means 'CTRL and d pressed together', whereas in bind output it would be written 'C-d'.

    edit: am indebted to joepvd from hackernews for this beauty

        $ stty -a | awk 'BEGIN{RS="[;\n]+ ?"}; /= ..$/'
        intr = ^C
        quit = ^\
        erase = ^?
        kill = ^U
        eof = ^D
        swtch = ^Z
        susp = ^Z
        rprnt = ^R
        werase = ^W
        lnext = ^V
        flush = ^O
    
    Breaking Down the Command
    bind -P | grep can
    

    Can be considered (almost) equivalent to a more instructive command:

    bind -l | sed 's/.*/bind -q &/' | /bin/bash 2>&1 | grep -v warning: | grep can
    

    'bind -l' lists all the available keystroke functions. For example, 'complete' is the auto-complete function normally triggered by hitting 'tab' twice. The output of this is passed to a sed command which passes each function name to 'bind -q', which queries the bindings.
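    The transformation sed performs (using the '&' back-reference to re-insert the matched function name into the replacement) can be tried in isolation with a couple of sample function names:

```shell
# '&' in the replacement stands for the whole matched line,
# so each keystroke function name becomes a 'bind -q' query.
printf 'complete\nbackward-char\n' | sed 's/.*/bind -q &/'
# bind -q complete
# bind -q backward-char
```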

    sed 's/.*/bind -q &/'
    

    The output of this is then piped into /bin/bash to be run.

    /bin/bash 2>&1 | grep -v warning: | grep 'can be'
    

    Note that this invocation of bash means that locally-set bindings will revert to the default bash ones for the output.

    The '2>&1' puts the error output (the warnings) to the same output channel, filtering out warnings with a 'grep -v' and then filtering on output that describes how to trigger the function.
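    This redirect-then-filter pattern can be seen in isolation (a small made-up example, not from the original post):

```shell
# The second echo writes to stderr; 2>&1 merges stderr into stdout,
# so the pipe carries both streams and grep -v can drop the warnings.
{ echo 'complete can be invoked via "C-i".'; echo 'warning: noise' >&2; } 2>&1 | grep -v warning:
# prints only: complete can be invoked via "C-i".
```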

    In the output of bind -q, 'C-' means 'the ctrl key and'. So 'C-c' is the familiar CTRL-c. Similarly, '\e' means 'escape', so '\e\e' means 'press escape twice':

    $ bind -q complete
    complete can be invoked via "C-i", "\e\e".
    

    and is also bound to 'C-i' (though on my machine I appear to need to press it twice – not sure why).

    Add to bashrc

    I added this alias as 'binds' in my bashrc so I could easily get hold of this list in the future.

    alias binds="bind -P | grep 'can be'"
    

    Now whenever I forget a binding, I type 'binds', and have a read :)


    The Zinger

    Browsing through the bash manual, I noticed that an option to bind enables binding to

    -x keyseq:shell-command
    

    So now all I need to remember is one shortcut to get my list (CTRL-x, then CTRL-o):

    bind -x '"\C-x\C-o": bind -P | grep can'
    

    Of course, you can bind to a single key if you want, and any command you want. You could also use this for practical jokes on your colleagues.

    Now I'm going to sort through my history to see what I type most often :)

    This post is based on material from Docker in Practice , available on Manning's Early Access Program. Get 39% off with the code: 39miell

    [Jul 09, 2020] My Favourite Secret Weapon strace

    Jul 09, 2020 | zwischenzugs.com

    Why strace ?

    I'm often asked in my technical troubleshooting job to solve problems that development teams can't solve. Usually these do not involve knowledge of API calls or syntax, rather some kind of insight into what the right tool to use is, and why and how to use it. Probably because they're not taught in college, developers are often unaware that these tools exist, which is a shame, as playing with them can give a much deeper understanding of what's going on and ultimately lead to better code.

    My favourite secret weapon in this path to understanding is strace.

    strace (or its equivalents on other systems, such as truss on Solaris) is a tool that tells you which operating system (OS) calls your program is making.

    An OS call (or just "system call") is your program asking the OS to provide some service for it. Since this covers a lot of the things that cause problems not directly to do with the domain of your application development (I/O, finding files, permissions etc) its use has a very high hit rate in resolving problems out of developers' normal problem space.

    Usage Patterns

    strace is useful in all sorts of contexts. Here's a couple of examples garnered from my experience.

    My Netcat Server Won't Start!

    Imagine you're trying to start an executable, but it's failing silently (no log file, no output at all). You don't have the source, and even if you did, the source code is neither readily available, nor ready to compile, nor readily comprehensible.

    Simply running through strace will likely give you clues as to what's gone on.

    $  nc -l localhost 80
    nc: Permission denied

    Let's say someone's trying to run this and doesn't understand why it's not working (let's assume manuals are unavailable).

    Simply put strace at the front of your command. Note that the following output has been heavily edited for space reasons (deep breath):

     $ strace nc -l localhost 80
     execve("/bin/nc", ["nc", "-l", "localhost", "80"], [/* 54 vars */]) = 0
     brk(0)                                  = 0x1e7a000
     access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
     mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f751c9c0000
     access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
     open("/usr/local/lib/tls/x86_64/libglib-2.0.so.0", O_RDONLY) = -1 ENOENT (No such file or directory)
     stat("/usr/local/lib/tls/x86_64", 0x7fff5686c240) = -1 ENOENT (No such file or directory)
     [...]
     open("libglib-2.0.so.0", O_RDONLY)      = -1 ENOENT (No such file or directory)
     open("/etc/ld.so.cache", O_RDONLY)      = 3
     fstat(3, {st_mode=S_IFREG|0644, st_size=179820, ...}) = 0
     mmap(NULL, 179820, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f751c994000
     close(3)                                = 0
     access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
     open("/lib/x86_64-linux-gnu/libglib-2.0.so.0", O_RDONLY) = 3
     read(3, "\177ELF\2\1\1\3>\1\320k\1"..., 832) = 832
     fstat(3, {st_mode=S_IFREG|0644, st_size=975080, ...}) = 0
     mmap(NULL, 3072520, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f751c4b3000
     mprotect(0x7f751c5a0000, 2093056, PROT_NONE) = 0
     mmap(0x7f751c79f000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xec000) = 0x7f751c79f000
     mmap(0x7f751c7a1000, 520, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f751c7a1000
     close(3)                                = 0
     open("/usr/local/lib/libc.so.6", O_RDONLY) = -1 ENOENT (No such file or directory)
    [...]
     mmap(NULL, 179820, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f751c994000
     close(3)                                = 0
     access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
     open("/lib/x86_64-linux-gnu/libnss_files.so.2", O_RDONLY) = 3
     read(3, "\177ELF\2\1\1\3>\1\20\""..., 832) = 832
     fstat(3, {st_mode=S_IFREG|0644, st_size=51728, ...}) = 0
     mmap(NULL, 2148104, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f751b8b0000
     mprotect(0x7f751b8bc000, 2093056, PROT_NONE) = 0
     mmap(0x7f751babb000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xb000) = 0x7f751babb000
     close(3)                                = 0
     mprotect(0x7f751babb000, 4096, PROT_READ) = 0
     munmap(0x7f751c994000, 179820)          = 0
     open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 3
     fcntl(3, F_GETFD)                       = 0x1 (flags FD_CLOEXEC)
     fstat(3, {st_mode=S_IFREG|0644, st_size=315, ...}) = 0
     mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f751c9bf000
     read(3, "127.0.0.1\tlocalhost\n127.0.1.1\tal"..., 4096) = 315
     read(3, "", 4096)                       = 0
     close(3)                                = 0
     munmap(0x7f751c9bf000, 4096)            = 0
     open("/etc/gai.conf", O_RDONLY)         = 3
     fstat(3, {st_mode=S_IFREG|0644, st_size=3343, ...}) = 0
     fstat(3, {st_mode=S_IFREG|0644, st_size=3343, ...}) = 0
     mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f751c9bf000
     read(3, "# Configuration for getaddrinfo("..., 4096) = 3343
     read(3, "", 4096)                       = 0
     close(3)                                = 0
     munmap(0x7f751c9bf000, 4096)            = 0
     futex(0x7f751c4af460, FUTEX_WAKE_PRIVATE, 2147483647) = 0
     socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 3
     connect(3, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
     getsockname(3, {sa_family=AF_INET, sin_port=htons(58567), sin_addr=inet_addr("127.0.0.1")}, [16]) = 0
     close(3)                                = 0
     socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 3
     connect(3, {sa_family=AF_INET6, sin6_port=htons(80), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = 0
     getsockname(3, {sa_family=AF_INET6, sin6_port=htons(42803), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, [28]) = 0
     close(3)                                = 0
     socket(PF_INET6, SOCK_STREAM, IPPROTO_TCP) = 3
     setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
     bind(3, {sa_family=AF_INET6, sin6_port=htons(80), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 EACCES (Permission denied)
     close(3)                                = 0
     socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 3
     setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
     bind(3, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EACCES (Permission denied)
     close(3)                                = 0
     write(2, "nc: ", 4nc: )                     = 4
     write(2, "Permission denied\n", 18Permission denied
     )     = 18
     exit_group(1)                           = ?
    

    To most people that see this flying up their terminal this initially looks like gobbledygook, but it's really quite easy to parse when a few things are explained.

    Each line follows the same pattern: first the name of the system call, then its arguments in parentheses, and finally its return value after the = sign. For example:

    open("/etc/gai.conf", O_RDONLY)         = 3

    Therefore for this particular line, the system call is open , the arguments are the string /etc/gai.conf and the constant O_RDONLY , and the return value was 3 .

    How to make sense of this?

    Some of these system calls can be guessed or enough can be inferred from context. Most readers will figure out that the above line is the attempt to open a file with read-only permission.

    In the case of the above failure, we can see that before the program calls exit_group, there are a couple of calls to bind that return "Permission denied":

     bind(3, {sa_family=AF_INET6, sin6_port=htons(80), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 EACCES (Permission denied)
     close(3)                                = 0
     socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 3
     setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
     bind(3, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EACCES (Permission denied)
     close(3)                                = 0
     write(2, "nc: ", 4nc: )                     = 4
     write(2, "Permission denied\n", 18Permission denied
     )     = 18
     exit_group(1)                           = ?

    We might therefore want to understand what "bind" is and why it might be failing.

    You need to get a copy of the system call's documentation. On Ubuntu and related distributions of Linux, the documentation is in the manpages-dev package, and can be invoked with e.g. man 2 bind (I just used strace to determine which file man 2 bind opened and then did a dpkg -S to determine from which package it came!). You can also look it up online, but if you can auto-install via a package manager you're more likely to get docs that match your installation.

    Right there in my man 2 bind page it says:

    ERRORS
    EACCES The address is protected, and the user is not the superuser.

    So there is the answer – we're trying to bind to a port that can only be bound to if you are the super-user.

    My Library Is Not Loading!

    Imagine a situation where developer A's perl script is working fine, but the identical script on developer B's machine is not (again, the output has been edited).
    In this case, we strace it on developer A's computer to see how it's working:

    $ strace perl a.pl
    execve("/usr/bin/perl", ["perl", "a.pl"], [/* 57 vars */]) = 0
    brk(0)                                  = 0xa8f000
    [...]fcntl(3, F_SETFD, FD_CLOEXEC)           = 0
    fstat(3, {st_mode=S_IFREG|0664, st_size=14, ...}) = 0
    rt_sigaction(SIGCHLD, NULL, {SIG_DFL, [], 0}, 8) = 0
    brk(0xad1000)                           = 0xad1000
    read(3, "use blahlib;\n\n", 4096)       = 14
    stat("/space/myperllib/blahlib.pmc", 0x7fffbaf7f3d0) = -1 ENOENT (No such file or directory)
    stat("/space/myperllib/blahlib.pm", {st_mode=S_IFREG|0644, st_size=7692, ...}) = 0
    open("/space/myperllib/blahlib.pm", O_RDONLY) = 4
    ioctl(4, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7fffbaf7f090) = -1 ENOTTY (Inappropriate ioctl for device)
    [...]mmap(0x7f4c45ea8000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 5, 0x4000) = 0x7f4c45ea8000
    close(5)                                = 0
    mprotect(0x7f4c45ea8000, 4096, PROT_READ) = 0
    brk(0xb55000)                           = 0xb55000
    read(4, "swrite($_[0], $_[1], $_[2], $_[3"..., 4096) = 3596
    brk(0xb77000)                           = 0xb77000
    read(4, "", 4096)                       = 0
    close(4)                                = 0
    read(3, "", 4096)                       = 0
    close(3)                                = 0
    exit_group(0)                           = ?

    We observe that the file is found in what looks like an unusual place.

    open("/space/myperllib/blahlib.pm", O_RDONLY) = 4

    Inspecting the environment, we see that:

    $ env | grep myperl
    PERL5LIB=/space/myperllib

    So the solution is to set the same env variable before running:

    export PERL5LIB=/space/myperllib

    Get to know the internals bit by bit

    If you do this a lot, or idly run strace on various commands and peruse the output, you can learn all sorts of things about the internals of your OS. If you're like me, this is a great way to learn how things work. For example, just now I've had a look at the file /etc/gai.conf , which I'd never come across before writing this.

    Once your interest has been piqued, I recommend getting a copy of "Advanced Programming in the Unix Environment" by Stevens & Rago, and reading it cover to cover. Not all of it will go in, but as you use strace more and more, and (hopefully) browse C code more and more, your understanding will grow.

    Gotchas

    If you're running a program that calls other programs, it's important to run with the -f flag, which "follows" child processes and straces them. -ff creates a separate file with the pid suffixed to the name.

    If you're on solaris, this program doesn't exist – you need to use truss instead.

    Many production environments will not have this program installed for security reasons. strace doesn't have many library dependencies (on my machine it has the same dependencies as 'echo'), so if you have permission, (or are feeling sneaky) you can just copy the executable up.

    Other useful tidbits

    You can attach to running processes (can be handy if your program appears to hang or the issue is not readily reproducible) with -p .

    If you're looking at performance issues, then the time flags ( -t , -tt , -ttt , and -T ) can help significantly.

    vasudevram February 11, 2018 at 5:29 pm

    Interesting post. One point: the errors start earlier than what you said. There is a call to access() near the top of the strace output, which fails:

    access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)

    vasudevram February 11, 2018 at 5:29 pm

    I guess that could trigger the other errors.

    Benji Wiebe February 11, 2018 at 7:30 pm

    A failed access or open system call is not usually an error in the context of launching a program. Generally it is merely checking if a config file exists.

    vasudevram February 11, 2018 at 8:24 pm

    >A failed access or open system call is not usually an error in the context of launching a program.

    Yes, good point, that could be so, if the programmer meant to ignore the error, and if it was not an issue to do so.

    >Generally it is merely checking if a config file exists.

    The file name being access'ed is "/etc/ld.so.nohwcap" – not sure if it is a config file or not.

    [Jul 08, 2020] Exit Codes With Special Meanings

    Jul 08, 2020 | www.tldp.org

    Appendix E. Exit Codes With Special Meanings Table E-1. Reserved Exit Codes

    Exit Code   Meaning                                                  Example               Comments
    1           Catchall for general errors                              let "var1 = 1/0"      Miscellaneous errors, such as "divide by zero" and other impermissible operations
    2           Misuse of shell builtins (per Bash documentation)        empty_function() {}   Missing keyword or command, or permission problem (and diff return code on a failed binary file comparison)
    126         Command invoked cannot execute                           /dev/null             Permission problem or command is not an executable
    127         "command not found"                                      illegal_command       Possible problem with $PATH or a typo
    128         Invalid argument to exit                                 exit 3.14159          exit takes only integer args in the range 0 - 255 (see first footnote)
    128+n       Fatal error signal "n"                                   kill -9 $PPID         $? returns 137 (128 + 9) when the script is killed with signal 9
    130         Script terminated by Control-C                           Ctl-C                 Control-C is fatal error signal 2 (130 = 128 + 2, see above)
    255*        Exit status out of range                                 exit -1               exit takes only integer args in the range 0 - 255

    According to the above table, exit codes 1 - 2, 126 - 165, and 255 [1] have special meanings, and should therefore be avoided for user-specified exit parameters. Ending a script with exit 127 would certainly cause confusion when troubleshooting (is the error code a "command not found" or a user-defined one?). However, many scripts use an exit 1 as a general bailout-upon-error. Since exit code 1 signifies so many possible errors, it is not particularly useful in debugging.

    There has been an attempt to systematize exit status numbers (see /usr/include/sysexits.h ), but this is intended for C and C++ programmers. A similar standard for scripting might be appropriate. The author of this document proposes restricting user-defined exit codes to the range 64 - 113 (in addition to 0 , for success), to conform with the C/C++ standard. This would allot 50 valid codes, and make troubleshooting scripts more straightforward. [2] All user-defined exit codes in the accompanying examples to this document conform to this standard, except where overriding circumstances exist, as in Example 9-2 .

    Issuing a $? from the command-line after a shell script exits gives results consistent with the table above only from the Bash or sh prompt. Running the C-shell or tcsh may give different values in some cases.
    Notes
    [1] Out of range exit values can result in unexpected exit codes. An exit value greater than 255 returns an exit code modulo 256 . For example, exit 3809 gives an exit code of 225 (3809 % 256 = 225).
    [2] An update of /usr/include/sysexits.h allocates previously unused exit codes from 64 - 78 . It may be anticipated that the range of unallotted exit codes will be further restricted in the future. The author of this document will not do fixups on the scripting examples to conform to the changing standard. This should not cause any problems, since there is no overlap or conflict in usage of exit codes between compiled C/C++ binaries and shell scripts.

    [Jul 08, 2020] Exit Codes

    From bash manual: The exit status of an executed command is the value returned by the waitpid system call or equivalent function. Exit statuses fall between 0 and 255, though, as explained below, the shell may use values above 125 specially. Exit statuses from shell builtins and compound commands are also limited to this range. Under certain circumstances, the shell will use special values to indicate specific failure modes.
    For the shell’s purposes, a command which exits with a zero exit status has succeeded. A non-zero exit status indicates failure. This seemingly counter-intuitive scheme is used so there is one well-defined way to indicate success and a variety of ways to indicate various failure modes. When a command terminates on a fatal signal whose number is N, Bash uses the value 128+N as the exit status.
    If a command is not found, the child process created to execute it returns a status of 127. If a command is found but is not executable, the return status is 126.
    If a command fails because of an error during expansion or redirection, the exit status is greater than zero.
    The exit status is used by the Bash conditional commands (see Conditional Constructs) and some of the list constructs (see Lists).
    All of the Bash builtins return an exit status of zero if they succeed and a non-zero status on failure, so they may be used by the conditional and list constructs. All builtins return an exit status of 2 to indicate incorrect usage, generally invalid options or missing arguments.
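    That last rule is easy to confirm by passing an invalid option to a builtin:

```shell
# An invalid option to a builtin is "incorrect usage", so the status is 2.
bash -c 'cd -z' 2>/dev/null; echo $?   # 2
```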
    Jul 08, 2020 | zwischenzugs.com

    Not everyone knows that every time you run a shell command in bash, an 'exit code' is returned to bash.

    Generally, if a command 'succeeds' you get an exit code of 0 . If it doesn't succeed, you get a non-zero code.

    1 is a 'general error', and other codes can give you more information (which signal killed the process, for example). 255 is the upper limit; an exit status outside the 0 - 255 range wraps around modulo 256.

    grep joeuser /etc/passwd # in case of success returns 0, otherwise 1

    or

    grep not_there /dev/null
    echo $?

    $? is a special bash variable that's set to the exit code of each command after it runs.

    Grep uses exit codes to indicate whether it matched or not. I have to look up every time which way round it goes: does finding a match or not return 0 ?
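    For the record: finding a match returns 0. grep documents three cases, and they are easy to check:

```shell
grep -q root /etc/passwd;              echo $?   # 0: a match was found
grep -q zz_no_such_user /etc/passwd;   echo $?   # 1: no match
grep -q x /no/such/file 2>/dev/null;   echo $?   # 2: an error occurred (file missing)
```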

    [Jul 08, 2020] Returning Values from Bash Functions by Mitch Frazier

    Sep 11, 2009 | www.linuxjournal.com

    Bash functions, unlike functions in most programming languages, do not allow you to return a value to the caller. When a bash function ends, its return value is its status: zero for success, non-zero for failure. To return values, you can set a global variable with the result, or use command substitution, or you can pass in the name of a variable to use as the result variable. The examples below describe these different mechanisms.

    Although bash has a return statement, the only thing you can specify with it is the function's status, which is a numeric value like the value specified in an exit statement. The status value is stored in the $? variable. If a function does not contain a return statement, its status is set based on the status of the last statement executed in the function. To actually return arbitrary values to the caller you must use other mechanisms.
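    A small illustration of the status mechanism (a sketch, not from the article): without an explicit return, the function's status is simply the status of its last command.

```shell
# The status of the last command ([ ... ] here) becomes the function's status.
function is_positive()
{
    [ "$1" -gt 0 ]
}

is_positive 5;  echo $?   # 0 (success)
is_positive -3; echo $?   # 1 (failure)
```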

    The simplest way to return a value from a bash function is to just set a global variable to the result. Since all variables in bash are global by default this is easy:

    function myfunc()
    {
        myresult='some value'
    }
    
    myfunc
    echo $myresult
    

    The code above sets the global variable myresult to the function result. Reasonably simple, but as we all know, using global variables, particularly in large programs, can lead to difficult-to-find bugs.

    A better approach is to use local variables in your functions. The problem then becomes how do you get the result to the caller. One mechanism is to use command substitution:

    function myfunc()
    {
        local  myresult='some value'
        echo "$myresult"
    }
    
    result=$(myfunc)   # or result=`myfunc`
    echo $result
    

    Here the result is output to the stdout and the caller uses command substitution to capture the value in a variable. The variable can then be used as needed.

    The other way to return a value is to write your function so that it accepts a variable name as part of its command line and then set that variable to the result of the function:

    function myfunc()
    {
        local  __resultvar=$1
        local  myresult='some value'
        eval $__resultvar="'$myresult'"
    }
    
    myfunc result
    echo $result
    

    Since we have the name of the variable to set stored in a variable, we can't set the variable directly, we have to use eval to actually do the setting. The eval statement basically tells bash to interpret the line twice, the first interpretation above results in the string result='some value' which is then interpreted once more and ends up setting the caller's variable.

    When you store the name of the variable passed on the command line, make sure you store it in a local variable with a name that won't be (unlikely to be) used by the caller (which is why I used __resultvar rather than just resultvar ). If you don't, and the caller happens to choose the same name for their result variable as you use for storing the name, the result variable will not get set. For example, the following does not work:

    function myfunc()
    {
        local  result=$1
        local  myresult='some value'
        eval $result="'$myresult'"
    }
    
    myfunc result
    echo $result
    

    The reason it doesn't work is because when eval does the second interpretation and evaluates result='some value' , result is now a local variable in the function, and so it gets set rather than setting the caller's result variable.

    For more flexibility, you may want to write your functions so that they combine both result variables and command substitution:

    function myfunc()
    {
        local  __resultvar=$1
        local  myresult='some value'
        if [[ "$__resultvar" ]]; then
            eval $__resultvar="'$myresult'"
        else
            echo "$myresult"
        fi
    }
    
    myfunc result
    echo $result
    result2=$(myfunc)
    echo $result2

    Here, if no variable name is passed to the function, the value is output to the standard output.
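    As an aside not covered in the article: on bash 4.3 and later, a nameref (local -n) gives the same pass-by-name behaviour without eval. The name-collision caveat still applies to the nameref's own name, which is why an unusual name is used here:

```shell
function myfunc()
{
    local -n __ref=$1      # nameref (bash 4.3+): __ref aliases the caller's variable
    __ref='some value'
}

myfunc result
echo $result
```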

    Mitch Frazier is an embedded systems programmer at Emerson Electric Co. Mitch has been a contributor to and a friend of Linux Journal since the early 2000s.


    David Krmpotic6 years ago • edited ,

    This is the best way: http://stackoverflow.com/a/... return by reference:

    function pass_back_a_string() {
    eval "$1='foo bar rab oof'"
    }

    return_var=''
    pass_back_a_string return_var
    echo $return_var

    lxw David Krmpotic6 years ago ,

    I agree. After reading this passage, the same idea with yours occurred to me.

    phil • 6 years ago ,

    Since this page is a top hit on google:

    The only real issue I see with returning via echo is that forking the process means no longer allowing it access to set 'global' variables. They are still global in the sense that you can retrieve them and set them within the new forked process, but as soon as that process is done, you will not see any of those changes.

    e.g.
    #!/bin/bash

    myGlobal="very global"

    call1() {
    myGlobal="not so global"
    echo "${myGlobal}"
    }

    tmp=$(call1) # keep in mind '$()' starts a new process

    echo "${tmp}" # prints "not so global"
    echo "${myGlobal}" # prints "very global"

    lxw • 6 years ago ,

    Hello everyone,

    In the 3rd method, I don't think the local variable __resultvar is necessary to use. Any problems with the following code?

    function myfunc()
    {
    local myresult='some value'
    eval "$1"="'$myresult'"
    }

    myfunc result
    echo $result

    code_monk6 years ago • edited ,

    i would caution against returning integers with "return $int". My code was working fine until it came across a -2 (negative two), and treated it as if it were 254, which tells me that bash functions return 8-bit unsigned ints that are not protected from overflow
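    The truncation is easy to demonstrate by running the function in a child shell, whose exit status is reported modulo 256:

```shell
# A function's return status, like a process exit status, fits in 8 bits.
bash -c 'f() { return 300; }; f'
echo $?    # 44 (300 % 256)
```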

    Emil Vikström code_monk5 years ago ,

    A function behaves as any other Bash command, and indeed POSIX processes. That is, they can write to stdout, read from stdin and have a return code. The return code is, as you have already noticed, a value between 0 and 255. By convention 0 means success while any other return code means failure.

    This is also why Bash "if" statements treat 0 as success and non-zero as failure (most other programming languages do the opposite).

    [Jul 07, 2020] The Missing Readline Primer by Ian Miell

    Highly recommended!
    This is from the book Learn Bash the Hard Way, available for $6.99.
    Jul 07, 2020 | zwischenzugs.com

    The Missing Readline Primer (originally published April 23, 2019)

    Readline is one of those technologies that is so commonly used that many users don't realise it's there.

    I went looking for a good primer on it so I could understand it better, but failed to find one. This is an attempt to write a primer that may help users get to grips with it, based on what I've managed to glean as I've tried to research and experiment with it over the years.

    Bash Without Readline

    First you're going to see what bash looks like without readline.

    In your 'normal' bash shell, hit the TAB key twice. You should see something like this:

        Display all 2335 possibilities? (y or n)
    

    That's because bash normally has an 'autocomplete' function that allows you to see what commands are available to you if you tap tab twice.

    Hit n to get out of that autocomplete.

    Another useful function that's commonly used is that if you hit the up arrow key a few times, then the previously-run commands should be brought back to the command line.

    Now type:

    $ bash --noediting
    

    The --noediting flag starts up bash without the readline library enabled.

    If you hit TAB twice now you will see something different: the shell no longer 'sees' your tab and just sends a tab direct to the screen, moving your cursor along. Autocomplete has gone.

    Autocomplete is just one of the things that the readline library gives you in the terminal. You might want to try hitting the up or down arrows as you did above to confirm that they no longer work either.

    Hit return to get a fresh command line, and exit your non-readline-enabled bash shell:

    $ exit
    
    Other Shortcuts

    There are a great many shortcuts like autocomplete available to you if readline is enabled. I'll quickly outline four of the most commonly-used of these before explaining how you can find out more.

    $ echo 'some command'
    

    There should not be many surprises there. Now if you hit the 'up' arrow, you will see you can get the last command back on your line. If you like, you can re-run the command, but there are other things you can do with readline before you hit return.

    If you hold down the ctrl key and then hit a at the same time, your cursor will return to the start of the line. A conventional way to write this 'multi-key' input is \C-a : the \C represents the control key, and the -a indicates that the a key is pressed at the same time.

    Now if you hit \C-e ( ctrl and e ) your cursor moves to the end of the line. I use these two shortcuts dozens of times a day.

    Another frequently useful one is \C-l , which clears the screen, but leaves your command line intact.

    The last one I'll show you allows you to search your history to find matching commands while you type. Hit \C-r , and then type ec . You should see the echo command you just ran like this:

        (reverse-i-search)`ec': echo echo
    

    Then do it again, but keep hitting \C-r over and over. You should see all the commands that have `ec` in them that you've input before (if you've only got one echo command in your history then you will only see one). As you see them you are placed at that point in your history and you can move up and down from there or just hit return to re-run if you want.

    There are many more shortcuts that readline gives you. Next I'll show you how to view these.

    Using `bind` to Show Readline Shortcuts

    If you type:

    $ bind -p
    

    You will see a list of bindings that readline is capable of. There's a lot of them!

    Have a read through if you're interested, but don't worry about understanding them all yet.

    If you type:

    $ bind -p | grep C-a
    

    you'll pick out the 'beginning-of-line' binding you used before, and see the \C-a notation I showed you before.

    As an exercise at this point, you might want to look for the \C-e and \C-r bindings we used previously.

    If you want to look through the entirety of the bind -p output, then you will want to know that \M refers to the Meta key (which you might also know as the Alt key), and \e refers to the Esc key on your keyboard. The 'escape' key bindings are different in that you don't hit it and another key at the same time, rather you hit it, and then hit another key afterwards. So, for example, typing the Esc key, and then the ? key also tries to auto-complete the command you are typing. This is documented as:

        "\e?": possible-completions
    

    in the bind -p output.

    Readline and Terminal Options

    If you've looked over the possibilities that readline offers you, you might have seen the \C-r binding we looked at earlier:

        "\C-r": reverse-search-history
    

    You might also have seen that there is another binding that allows you to search forward through your history too:

        "\C-s": forward-search-history
    

    What often happens to me is that I hit \C-r over and over again, and then go too fast through the history and fly past the command I was looking for. In these cases I might try to hit \C-s to search forward and get to the one I missed.

    Watch out though! Hitting \C-s to search forward through the history might well not work for you.

    Why is this, if the binding is there and readline is switched on?

    It's because something picked up the \C-s before it got to the readline library: the terminal settings.

    The terminal program you are running in may have standard settings that do other things on hitting some of these shortcuts before readline gets to see it.

    If you type:

    $ stty -e
    

    you should get output similar to this:

    speed 9600 baud; 47 rows; 202 columns;
    lflags: icanon isig iexten echo echoe -echok echoke -echonl echoctl -echoprt -altwerase -noflsh -tostop -flusho pendin -nokerninfo -extproc
    iflags: -istrip icrnl -inlcr -igncr ixon -ixoff ixany imaxbel -iutf8 -ignbrk brkint -inpck -ignpar -parmrk
    oflags: opost onlcr -oxtabs -onocr -onlret
    cflags: cread cs8 -parenb -parodd hupcl -clocal -cstopb -crtscts -dsrflow -dtrflow -mdmbuf
    discard dsusp   eof     eol     eol2    erase   intr    kill    lnext
    ^O      ^Y      ^D      <undef> <undef> ^?      ^C      ^U      ^V
    min     quit    reprint start   status  stop    susp    time    werase
    1       ^\      ^R      ^Q      ^T      ^S      ^Z      0       ^W
    

    You can see on the last four lines ( discard dsusp [...] ) there is a table of key bindings that your terminal will pick up before readline sees them. The ^ character (known as the 'caret') here represents the ctrl key that we previously represented with a \C .

    If you think this is confusing, I won't disagree. Unfortunately, in the history of Unix and Linux, documenters did not stick to one way of describing these key combinations.

    If you encounter a problem where the terminal options seem to catch a shortcut key binding before it gets to readline, then you can use the stty program to unset that binding. In this case, we want to unset the 'stop' binding.

    If you are in the same situation, type:

    $ stty stop undef
    

    Now, if you re-run stty -e , the last two lines might look like this:

    [...]
    min     quit    reprint start   status  stop    susp    time    werase
    1       ^\      ^R      ^Q      ^T      <undef> ^Z      0       ^W
    

    where the stop entry now has <undef> underneath it.

    Strangely, for me C-r is also bound to 'reprint' above ( ^R ).

    But (on my terminals at least) that gets to readline without issue as I search up the history. Why this is the case I haven't been able to figure out. I suspect that reprint is ignored by modern terminals that don't need to 'reprint' the current line.

    While we are looking at this table:

    discard dsusp   eof     eol     eol2    erase   intr    kill    lnext
    ^O      ^Y      ^D      <undef> <undef> ^?      ^C      ^U      ^V
    min     quit    reprint start   status  stop    susp    time    werase
    1       ^\      ^R      ^Q      ^T      <undef> ^Z      0       ^W
    

    it's worth noting a few other key bindings that are used regularly.

    First, one you may well already be familiar with is \C-c , which interrupts a program, terminating it:

    $ sleep 99
    [[Hit \C-c]]
    ^C
    $
    

    Similarly, \C-z suspends a program, allowing you to 'foreground' it again and continue with the fg builtin.

    $ sleep 10
    [[ Hit \C-z]]
    ^Z
    [1]+  Stopped                 sleep 10
    $ fg
    sleep 10
    

    \C-d sends an 'end of file' character. It's often used to indicate to a program that input is over. If you type it on a bash shell, the bash shell you are in will close.

    Finally, \C-w deletes the word before the cursor.

    These are the most commonly-used shortcuts that are picked up by the terminal before they get to the readline library.

    Daz April 29, 2019 at 11:15 pm

    Hi Ian,

    What OS are you running because stty -e gives the following on Centos 6.x and Ubuntu 18.04.2

    stty -e
    stty: invalid argument '-e'
    Try 'stty --help' for more information.

    Leon May 14, 2019 at 5:12 am

    `stty -a` works for me (Ubuntu 14)

    yachris May 16, 2019 at 4:40 pm

    You might want to check out the 'rlwrap' program. It allows you to have readline behavior on programs that don't natively support readline, but which have a 'type in a command' type interface. For instance, we use Oracle here (alas :-) ) and the 'sqlplus' program, that lets you type SQL commands to an Oracle instance does not have anything like readline built into it, so you can't go back to edit previous commands. But running 'rlwrap sqlplus' gives me readline behavior in sqlplus! It's fantastic to have.

    AriSweedler May 17, 2019 at 4:50 am

    I was told to use this in a class, and I didn't understand what I did. One rabbit hole later, I was shocked and amazed at how advanced the readline library is. One thing I'd like to add is that you can write a '~/.inputrc' file and have those readline commands sourced at startup!

    I do not know exactly when or how the inputrc is read.

    Most of what I learned about inputrc stuff is from https://www.topbug.net/blog/2017/07/31/inputrc-for-humans/ .

    Here is my inputrc, if anyone wants: https://github.com/AriSweedler/dotfiles/blob/master/.inputrc .
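
    For readers who have not seen one, a minimal ~/.inputrc sketch might look like this (the settings shown are common examples chosen by this editor, not taken from Ari's file):

```
# ~/.inputrc is read by the readline library at startup.
# Case-insensitive tab completion:
set completion-ignore-case on
# List all matches immediately instead of asking first:
set show-all-if-ambiguous on
# Make the up/down arrows search history for the typed prefix:
"\e[A": history-search-backward
"\e[B": history-search-forward
```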

    [Jul 07, 2020] More stupid Bash tricks- Variables, find, file descriptors, and remote operations - Enable Sysadmin by Valentin Bajrami

    The first part is at Stupid Bash tricks- History, reusing arguments, files and directories, functions, and more - Enable Sysadmin
    Jul 02, 2020 | www.redhat.com
    These tips and tricks will make your Linux command line experience easier and more efficient.

    This blog post is the second of two covering some practical tips and tricks to get the most out of the Bash shell. In part one , I covered history, last argument, working with files and directories, reading files, and Bash functions. In this segment, I cover shell variables, find, file descriptors, and remote operations.

    Use shell variables

    Bash sets a number of variables when the shell is invoked. Why would I run hostname when I can use $HOSTNAME, or whoami when I can use $USER? Shell variables are very fast to expand and do not require forking external programs.

    These are a few frequently-used variables:

    $PATH
    $HOME
    $USER
    $HOSTNAME
    $PS1
    ..
    $PS4
    

    Use the echo command to expand variables. For example, the $PATH shell variable can be expanded by running:

    $> echo $PATH
    

    [ Download now: A sysadmin's guide to Bash scripting . ]

    Use the find command

    The find command is probably one of the most used tools within the Linux operating system. It is extremely useful in interactive shells. It is also used in scripts. With find I can list files older or newer than a specific date, delete them based on that date, change permissions of files or directories, and so on.

    Let's get more familiar with this command.

    To list files older than 30 days, I simply run:

    $> find /tmp -type f -mtime +30
    

    To delete files older than 30 days, run:

    $> find /tmp -type f -mtime +30 -exec rm -rf {} \;
    

    or

    $> find /tmp -type f -mtime +30 -exec rm -rf {} +
    

    Both commands above delete files older than 30 days, but note the difference: the \; form forks a new rm process for every file found, while the + form batches many file names into a single rm invocation. The same batching effect can be achieved with xargs :

    $> find /tmp -type f -mtime +30 -print0 | xargs -0 rm
    

    I can use find to list sha256sum files only by running:

    $> find . -type f -exec sha256sum {} +
    

    And now to detect duplicate .jpg files ( sort -u keeps only the first file seen for each checksum, so duplicates are dropped from the listing):

    $> find . -type f -name '*.jpg' -exec sha256sum {} + | sort -uk1,1
    
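
    Note that the command above only lists one file per unique checksum; the duplicates themselves are silently dropped. A sketch (using hypothetical demo files) that prints the duplicates instead is to sort by checksum and emit every file after the first with the same hash:

```shell
# Demo setup (hypothetical files): two identical .jpg files, one distinct.
cd "$(mktemp -d)"
echo same > a.jpg
echo same > b.jpg
echo other > c.jpg

# Print every file whose checksum has already been seen, i.e. the duplicates.
find . -type f -name '*.jpg' -exec sha256sum {} + |
  sort -k1,1 | awk 'seen[$1]++ { print $2 }'
```

    Here only ./b.jpg is printed, since ./a.jpg sorts first among the two identical files.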
    Reference file descriptors

    In the Bash shell, file descriptors (FDs) are important in managing the input and output of commands. Many people have issues understanding file descriptors correctly. Each process has three default file descriptors, namely:

    Code  Meaning          Location      Description
    0     Standard input   /dev/stdin    Keyboard, file, or some stream
    1     Standard output  /dev/stdout   Monitor, terminal, display
    2     Standard error   /dev/stderr   Non-zero exit codes are usually >FD2, display

    Now that you know what the default FDs do, let's see them in action. I start by creating a directory named foo , which contains file1 .

    $> ls foo/ bar/
    ls: cannot access 'bar/': No such file or directory
    foo/:
    file1
    

    The output No such file or directory goes to Standard Error (stderr) and is also displayed on the screen. I will run the same command, but this time use 2>/dev/null to discard stderr:

    $> ls foo/ bar/ 2>/dev/null
    foo/:
    file1
    

    It is possible to send the output of foo to Standard Output (stdout) and to a file simultaneously, and ignore stderr. For example:

    $> { ls foo bar | tee -a ls_out_file ;} 2>/dev/null
    foo:
    file1
    

    Then:

    $> cat ls_out_file
    foo:
    file1
    

    The following command sends stdout to a file and stderr to /dev/null so that the error won't display on the screen:

    $> ls foo/ bar/ >to_stdout 2>/dev/null
    $> cat to_stdout
    foo/:
    file1
    

    The following command sends stdout and stderr to the same file:

    $> ls foo/ bar/ >mixed_output 2>&1
    $> cat mixed_output
    ls: cannot access 'bar/': No such file or directory
    foo/:
    file1
    

    This is what happened in the last example, where stdout and stderr were redirected to the same file:

        ls foo/ bar/ >mixed_output 2>&1
                 |          |
                 |          Redirect stderr to where stdout is sent
                 |                                                        
                 stdout is sent to mixed_output
    

    Another short trick (Bash 4.4 and later) to send both stdout and stderr to the same file uses the ampersand sign. For example:

    $> ls foo/ bar/ &>mixed_output
    

    Here is a more complex redirection:

    exec 3>&1 >write_to_file; echo "Hello World"; exec 1>&3 3>&-
    

    This is what occurs: exec 3>&1 duplicates the current stdout onto file descriptor 3, and >write_to_file then points stdout at the file, so the echo output lands in write_to_file rather than on the screen. Finally, exec 1>&3 restores stdout from the saved copy, and 3>&- closes the now-unneeded descriptor 3.
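
    The whole sequence can be replayed as a small script (here write_to_file is created in a scratch directory):

```shell
cd "$(mktemp -d)"          # work in a scratch directory

exec 3>&1 >write_to_file   # FD 3 := a copy of stdout; stdout := the file
echo "Hello World"         # lands in write_to_file, not on the screen
exec 1>&3 3>&-             # stdout restored from FD 3; FD 3 closed

cat write_to_file          # prints: Hello World
```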

    Often it is handy to group commands, and then send their Standard Error to a single file. For example:

    $> { ls non_existing_dir; non_existing_command; echo "Hello world"; } 2> to_stderr
    Hello world
    

    As you can see, only "Hello world" is printed on the screen, but the output of the failed commands is written to the to_stderr file.

    Execute remote operations

    I use Telnet, netcat, Nmap, and other tools to test whether a remote service is up and whether I can connect to it. These tools are handy, but they aren't installed by default on all systems.

    Fortunately, there is a simple way to test a connection without using external tools. To see if a remote server is running a web, database, SSH, or any other service, run:

    $> timeout 3 bash -c '</dev/tcp/remote_server/remote_port' || echo "Failed to connect"
    

    For example, to see if serverA is running the MariaDB service:

    $> timeout 3 bash -c '</dev/tcp/serverA/3306' || echo "Failed to connect"
    

    If the connection fails, the Failed to connect message is displayed on your screen.
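
    No remote server is needed to see the failure path; a quick local sketch probes a port that is almost certainly closed (port 1 on localhost):

```shell
# The connection is refused, bash exits non-zero, and the fallback
# message after || is printed.
timeout 3 bash -c '</dev/tcp/127.0.0.1/1' 2>/dev/null || echo "Failed to connect"
```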

    Assume serverA is behind a firewall/NAT. I want to see if the firewall is configured to allow a database connection to serverA , but I haven't installed a database server yet. To emulate a database port (or any other port), I can use the following:

    [serverA ~]# nc -l 3306
    

    On clientA , run:

    [clientA ~]# timeout 3 bash -c '</dev/tcp/serverA/3306' || echo "Failed"
    

    While I am discussing remote connections, what about running commands on a remote server over SSH? I can use the following command:

    $> ssh remotehost <<EOF  # Press the Enter key here
    > ls /etc
    EOF
    

    This command runs ls /etc on the remote host.

    I can also execute a local script on the remote host without having to copy the script over to the remote server. One way is to enter:

    $> ssh remote_host 'bash -s' < local_script
    

    Another example is to pass environment variables locally to the remote server and terminate the session after execution.

    $> exec ssh remote_host ARG1=FOO ARG2=BAR 'bash -s' <<'EOF'
    > printf %s\\n "$ARG1" "$ARG2"
    > EOF
    Password:
    FOO
    BAR
    Connection to remote_host closed.
    

    There are many other complex actions I can perform on the remote host.

    Wrap up

    There is certainly more to Bash than I was able to cover in this two-part blog post. I am sharing what I know and what I deal with daily. The idea is to familiarize you with a few techniques that could make your work less error-prone and more fun.

    Valentin Bajrami

    Valentin is a system engineer with more than six years of experience in networking, storage, high-performing clusters, and automation. He is involved in different open source projects like bash, Fedora, Ceph, FreeBSD and is a member of Red Hat Accelerators. More about me


    [Jul 06, 2020] Take Responsibility for Training

    Jul 06, 2020 | zwischenzugs.com

    There's a great quote from Andy Grove, founder of Intel about training:

      Training is the manager's job. Training is the highest leverage activity a manager can do to increase the output of an organization. If a manager spends 12 hours preparing training for 10 team members that increases their output by 1% on average, the result is 200 hours of increased output from the 10 employees (each works about 2000 hours a year). Don't leave training to outsiders, do it yourself.

    And training isn't just about being in a room and explaining things to people – it's about getting in the field and showing people how to respond to problems, how to think about things, and where they need to go next. The point is: take ownership of it.

    I personally trained people in things like Git and Docker and basic programming whenever I got the chance to. This can demystify these skills and empower your staff to go further. It also sends a message about what's important – if the boss spends time on triage, training and hiring, then they must be important.

    [Jul 06, 2020] The Runbooks Project

    Jul 06, 2020 | zwischenzugs.com

    I already talked about this in the previous post , but every subsequent attempt I made to get a practice of writing runbooks going was hard going. No-one ever argues with the logic of efficiency and saved time, but when it comes to putting the barn up, pretty much everyone is too busy with something else to help.

    Looking at the history of these kind of efforts , it seems that people need to be forced – against their own natures – into following these best practices that invest current effort for future operational benefit.

    Examples from The Checklist Manifesto included:

    In the case of my previous post, it was frustration for me at being on-call that led me to spend months writing up runbooks. The main motivation that kept me going was that it would be (as a minimal positive outcome) for my own benefit . This intrinsic motivation got the ball rolling, and the effort was then sustained and developed by the other three more process-oriented factors.

    There's a commonly-seen pattern here:

    If you crack how to do that reliably, then you're going to be pretty good at building businesses.

    It Doesn't Always Help

    That wasn't the only experience I had trying to spread what I thought was good practice. In other contexts, I learned, the application of these methods was unhelpful.

    In my next job, I worked on a new and centralised fast-changing system in a large org, and tried to write helpful docs to avoid repeating solving the same issues over and over. Aside from the authority and 'critical mass' problems outlined above, I hit a further one: the system was changing too fast for the learnings to be that useful. Bugs were being fixed quickly (putting my docs out of date similarly quickly) and new functionality was being added, leading to substantial wasted effort and reduced benefit.

    Discussing this with a friend, I was pointed at a framework that already existed called Cynefin that had already thought about classifying these differences of context, and what was an appropriate response to them. Through that lens, my mistake had been to try and impose what might be best practice in a 'Complicated'/'Clear' context to a context that was 'Chaotic'/'Complex'. 'Chaotic' situations are too novel or under-explored to be susceptible to standard processes. Fast action and equally fast evaluation of system response is required to build up practical experience and prepare the way for later stabilisation.

    'Why Don't You Just Automate It?'

    I get this a lot. It's an argument that gets my goat, for several reasons.

    Runbooks are a useful first step to an automated solution

    If a runbook is mature and covers its ground well, it serves as an almost perfect design document for any subsequent automation solution. So it's in itself a useful precursor to automation for any non-trivial problem.

    Automation is difficult and expensive

    It is never free. It requires maintenance. There are always corner cases that you may not have considered. It's much easier to write: 'go upstairs' than build a robot that climbs stairs .

    Automation tends to be context-specific

    If you have a wide-ranging set of contexts for your problem space, then a runbook paired with a human mind provides the flexibility to be applied in any of these contexts. For example: your shell script solution will need to reliably cater for all these contexts to be useful; not every org can use your Ansible recipe; not every network can access the internet.

    Automation is not always practicable

    In many situations, changing or releasing software to automate a solution is outside your control or influence .

    A Public Runbooks Project

    All my thoughts on this subject so far have been predicated on writing proprietary runbooks that are consumed and maintained within an organisation.

    What I never considered was gaining the critical mass needed by open sourcing runbooks, and asking others to donate theirs so we can all benefit from each others' experiences.

    So we at Container Solutions have decided to open source the runbooks we have built up that are generally applicable to the community. They are growing all the time, and we will continue to add to them.

    Call for Runbooks

    We can't do this alone, so are asking for your help!

    However you want to help, you can either raise a PR or an issue , or contact me directly .

    [Jul 06, 2020] Things I Learned Managing Site Reliability (2017) - Hacker News

    Jul 06, 2020 | news.ycombinator.com
    Things I Learned Managing Site Reliability (2017) ( zwischenzugs.com )
    109 points by bshanks on Feb 26, 2018 | 13 comments
    foo101 on Feb 26, 2018
    I really like the point about runbooks/playbooks.

    > We ended up embedding these dashboard within Confluence runbooks/playbooks followed by diagnosing/triaging, resolving, and escalation information. We also ended up associating these runbooks/playbooks with the alerts and had the links outputted into the operational chat along with the alert in question so people could easily follow it back.

    When I used to work for Amazon, as a developer, I was required to write a playbook for every microservice I developed. The playbook had to be so detailed that, in theory, any site reliability engineer, who has no knowledge of the service should be able to read the playbook and perform the following activities:

    - Understand what the service does.

    - Learn all the curl commands to run to test each service component in isolation and see which ones are not behaving as expected.

    - Learn how to connect to the actual physical/virtual/cloud systems that keep the service running.

    - Learn which log files to check for evidence of problems.

    - Learn which configuration files to edit.

    - Learn how to restart the service.

    - Learn how to rollback the service to an earlier known good version.

    - Learn resolution to common issues seen earlier.

    - Perform a checklist of activities to be performed to ensure all components are in good health.

    - Find out which development team of ours to page if the issue remains unresolved.

    It took a lot of documentation and excellent organization of such documentation to keep the services up and running.

    twic on Feb 26, 2018
    A far-out old employer of mine decided that their standard format for alerts, sent by applications to the central monitoring system, would include a field for a URL pointing to some relevant documentation.

    I think this was mostly pushed through by sysadmins annoyed at getting alerts from new applications that didn't mean anything to them.

    peterwwillis on Feb 26, 2018
    When you get an alert, you have to first understand the alert, and then you have to figure out what to do about it. The majority of alerts, when people don't craft them according to a standard/policy, look like this:
      Subject: Disk usage high
      Priority: High
      Message: 
        There is a problem in cluster ABC.
        Disk utilization above 90%.
        Host 1.2.3.4.
    
    It's a pain in the ass to go figure out what is actually affected, why it's happening, and track down some kind of runbook that describes how to fix this specific case (because it may vary from customer to customer, not to mention project to project). This is usually the state of alerts until a single person (who isn't a manager; managers hate cleaning up inefficiencies) gets so sick and tired of it that they take the weekend to overhaul one alert at a time to provide better insight as to what is going on and how to fix it. Any attempt to improve docs for those alerts are never updated by anyone but this lone individual.

    Providing a link to a runbook makes resolving issues a lot faster. It's even better if the link is to a Wiki page, so you can edit it if the runbook isn't up to date.

    adrianratnapala on Feb 26, 2018
    So did the system work, and how did it work?

    Basically you are saying you were required to be really diligent about the playbooks and put effort in to get them right.

    Did people really put that effort in? Was it worth it? If so, what elments of the culture/organisation/process made people do the right thing when it is so much easier for busy people to get sloppy?

    foo101 on Feb 27, 2018
    The answer is "Yes" to all of your questions.

    Regarding the question about culture, yes, busy people often get sloppy. But when a P1 alert comes because a site reliability engineer could not resolve the issue by following the playbook, it looks bad on the team and a lot of questions are asked by all affected stakeholders (when a service goes down in Amazon it may affect multiple other teams) about why the playbook was deficient. Nobody wants to be in a situation like this. In fact, no developer wants to be woken up at 2 a.m. because a service went down and the issue could not be fixed by the on-call SRE. So it is in their interest to write good and detailed playbooks.

    zwischenzug on Feb 26, 2018
    That sounds like a great process there. It staggers me how much people a) underestimate the investment required to maintain that kind of documentation, and b) underestimate how much value it brings. It's like brushing your teeth.
    peterwwillis on Feb 26, 2018
    > It's far more important to have a ticketing system that functions reliably and supports your processes than the other way round.

    The most efficient ticketing systems I have ever seen were heavily customized in-house. When they moved to a completely different product, productivity in addressing tickets plummeted. They stopped generating tickets to deal with it.

    > After process, documentation is the most important thing, and the two are intimately related.

    If you have two people who are constantly on call to address issues because nobody else knows how to deal with it, you are a victim of a lack of documentation. Even a monkey can repair a space shuttle if they have a good manual.

    I partly rely on incident reports and issues as part of my documentation. Sometimes you will get an issue like "disk filling up", and maybe someone will troubleshoot it and resolve it with a summary comment of "cleaned up free space in X process". Instead of making that the end of it, create a new issue which describes the problem and steps to resolve in detail. Update the issue over time as necessary. Add a tag to the issue called 'runbook'. Then mark related issues as duplicates of this one issue. It's kind of horrible, but it seamlessly integrates runbooks with your issue tracking.

    mdaniel on Feb 26, 2018
    Even a monkey can repair a space shuttle if they have a good manual

    I would like to point out that the dependency chain for repairing the space shuttle (or worse: microservices) can turn the need for understanding (or authoring) one document into understanding 12+ documents, or run the risk of making a document into a "wall of text," copy-paste hell, and/or out-of-date.

    Capturing the contextual knowledge required to make an administration task straight-forward can easily turn the forest into the trees.

    I would almost rather automate the troubleshooting steps than have to write sufficiently specific English to express what one should do in a given situation, with the caveat that such automation takes longer to write than the documentation it replaces.

    zwischenzug on Feb 26, 2018 [–]
    Yeah, that's exactly what we found - we created a JIRA project called 'DOCS', which made search trivial:

    'docs disk filling up'

    tabtab on Feb 26, 2018 [–]
    It's pretty much organizing 101: study situation, plan, track well, document well but in a practical sense (write docs that people will actually read), get feedback from everybody, learn from your mistakes, admit your mistakes, and make the system and process better going forward.
    willejs on Feb 26, 2018 [–]
    This was posted a while back, and you can see the original thread with more comments here https://news.ycombinator.com/item?id=14031180
    stareatgoats on Feb 26, 2018 [–]
    I may be out of touch with current affairs, but I don't think I've encountered a single workplace where documentation has worked. Sometimes because people were only hired to put out fires, sometimes because there was no sufficiently customized ticketing system, sometimes because they simply didn't know how to abstract their tasks into well-written documents.

    And in many cases because people thought they might be out of a job if they put their solutions in print. I'm guessing managers still need to counter those tendencies actively if they want documentation to happen. Plenty of good pointers in this article, I found.

    commandlinefan on Feb 26, 2018 [–]
    > I was trusted (culture, again!)

    This is the kicker - and the rarity. I don't think it's all trust, though. When your boss already knows going into Q1 that he's going to be fired in Q2 if he doesn't get 10 specific (and myopically short-term) agenda items addressed, it doesn't matter how much he trusts you, you're going to be focusing on only the things that have the appearance of ROI after a few hours of work, no matter how inefficient they are in the long term.


    [Jul 06, 2020] BASH Shell Redirect stderr To stdout ( redirect stderr to a File ) by Vivek Gite

    Jun 06, 2020 | www.cyberciti.biz

    ... ... ...

    Redirecting the standard error stream to a file

    The following will redirect a program's error messages to a file called error.log:
    $ program-name 2> error.log
    $ command1 2> error.log

    For example, use the grep command for a recursive search in the $HOME directory and redirect all errors (stderr) to a file named /tmp/grep-errors.txt as follows:
    $ grep -R 'MASTER' $HOME 2> /tmp/grep-errors.txt
    $ cat /tmp/grep-errors.txt

    Sample outputs:

    grep: /home/vivek/.config/google-chrome/SingletonSocket: No such device or address
    grep: /home/vivek/.config/google-chrome/SingletonCookie: No such file or directory
    grep: /home/vivek/.config/google-chrome/SingletonLock: No such file or directory
    grep: /home/vivek/.byobu/.ssh-agent: No such device or address
    
    Redirecting the standard error (stderr) and stdout to file

    Use the following syntax:
    $ command-name &>file
    We can also use the following syntax:
    $ command > file-name 2>&1
    We can write both stderr and stdout to two different files too. Let us try out our previous grep command example:
    $ grep -R 'MASTER' $HOME 2> /tmp/grep-errors.txt 1> /tmp/grep-outputs.txt
    $ cat /tmp/grep-outputs.txt

    Redirecting stderr to stdout to a file or another command

    Here is another useful example where both stderr and stdout are sent to the more command instead of a file:
    # find /usr/home -name .profile 2>&1 | more

    Redirect stderr to stdout

    Use the command as follows:
    $ command-name 2>&1
    $ command-name > file.txt 2>&1
    ## bash only ##
    $ command2 &> filename
    $ sudo find / -type f -iname ".env" &> /tmp/search.txt

    Redirections are processed from left to right. Hence, order matters. For example:
    command-name 2>&1 > file.txt ## wrong ##
    command-name > file.txt 2>&1 ## correct ##
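    To see why the order matters, here is a minimal sketch (the paths are illustrative): each redirection is applied against the descriptors as they stand at that moment.

```shell
# Wrong: when 2>&1 is processed, stdout still points at the terminal,
# so stderr is duplicated to the terminal; only stdout goes to the file.
ls /nosuchdir 2>&1 > /tmp/out.txt    # error still shows on screen

# Correct: stdout is moved to the file first, then stderr follows it.
ls /nosuchdir > /tmp/out.txt 2>&1    # error captured in /tmp/out.txt
```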

    How to redirect stderr to stdout in Bash script

    A sample shell script used to update VM when created in the AWS/Linode server:

    #!/usr/bin/env bash
    # Author - nixCraft under GPL v2.x+
    # Debian/Ubuntu Linux script for EC2 automation on first boot
    # ------------------------------------------------------------
    # My log file - Save stdout to $LOGFILE
    LOGFILE="/root/logs.txt"
     
    # My error file - Save stderr to $ERRFILE
    ERRFILE="/root/errors.txt"
     
    # Start it 
    printf "Starting update process ... \n" 1>"${LOGFILE}"
     
    # All errors should go to error file 
    apt-get -y update 2>"${ERRFILE}"
    apt-get -y upgrade 2>>"${ERRFILE}"
    printf "Rebooting cloudserver ... \n" 1>>"${LOGFILE}"
    shutdown -r now 2>>"${ERRFILE}"
    

    Our last example uses the exec command and FDs along with trap and custom bash functions:

    #!/bin/bash
    # Send both stdout/stderr to a /root/aws-ec2-debian.log file
    # Works with Ubuntu Linux too.
    # Use exec for FD and trap it using the trap
    # See bash man page for more info
    # Author:  nixCraft under GPL v2.x+
    # ---------------------------------------------
    exec 3>&1 4>&2
    trap 'exec 2>&4 1>&3' 0 1 2 3
    exec 1>/root/aws-ec2-debian.log 2>&1
     
    # log message
    log(){
            local m="$@"
            echo ""
            echo "*** ${m} ***"
            echo ""
    }
     
    log "$(date) @ $(hostname)"
    ## Install stuff ##
    log "Updating up all packages"
    export DEBIAN_FRONTEND=noninteractive
    apt-get -y clean
    apt-get -y update
    apt-get -y upgrade
    apt-get -y --purge autoremove
     
    ## Update sshd config ##
    log "Configuring sshd_config"
    sed -i'.BAK' -e 's/PermitRootLogin yes/PermitRootLogin no/g' -e 's/#PasswordAuthentication yes/PasswordAuthentication no/g'  /etc/ssh/sshd_config
     
    ## Hide process from other users ##
    log "Update /etc/fstab to hide processes from each other"
    echo 'proc    /proc    proc    defaults,nosuid,nodev,noexec,relatime,hidepid=2     0     0' >> /etc/fstab
     
    ## Install LXD and stuff ##
    log "Installing LXD/wireguard/vnstat and other packages on this box"
    apt-get -y install lxd wireguard vnstat expect mariadb-server 
     
    log "Configuring mysql with mysql_secure_installation"
    SECURE_MYSQL_EXEC=$(expect -c "
    set timeout 10
    spawn mysql_secure_installation
    expect \"Enter current password for root (enter for none):\"
    send \"$MYSQL\r\"
    expect \"Change the root password?\"
    send \"n\r\"
    expect \"Remove anonymous users?\"
    send \"y\r\"
    expect \"Disallow root login remotely?\"
    send \"y\r\"
    expect \"Remove test database and access to it?\"
    send \"y\r\"
    expect \"Reload privilege tables now?\"
    send \"y\r\"
    expect eof
    ")
     
    # log to file #
    echo "   $SECURE_MYSQL_EXEC   "
    # We no longer need expect 
    apt-get -y remove expect
     
    # Reboot the EC2 VM
    log "END: Rebooting requested @ $(date) by $(hostname)"
    reboot
    
    WANT BOTH STDERR AND STDOUT TO THE TERMINAL AND A LOG FILE TOO?

    Try the tee command as follows:
    command1 2>&1 | tee filename
    Here is how to use it inside a shell script too:

    #!/usr/bin/env bash
    {
       command1
       command2 | do_something
    } 2>&1 | tee /tmp/outputs.log
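    A related pattern worth knowing, sketched here with an illustrative log path: using exec with process substitution (bash-only) mirrors an entire script's stdout and stderr to both the terminal and a file, without wrapping the body in braces.

```shell
#!/usr/bin/env bash
# Mirror the whole script's stdout+stderr to a log file and the terminal.
LOG=/tmp/script.log                 # illustrative path
exec > >(tee -a "$LOG") 2>&1
echo "this line reaches both the terminal and $LOG"
```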
    
    Conclusion

    In this quick tutorial, you learned about the three file descriptors: stdin, stdout, and stderr. We can use these descriptors to redirect stdout and stderr to a file. See the bash man page for more details:

    Operator                    Description                                          Example
    command > filename          Redirect stdout to file "filename"                   date > output.txt
    command >> filename         Redirect and append stdout to file "filename"        ls -l >> dirs.txt
    command 2> filename         Redirect stderr to file "filename"                   du -ch /snaps/ 2> space.txt
    command 2>> filename        Redirect and append stderr to file "filename"        awk '{ print $4}' input.txt 2>> data.txt
    command &> filename
    command > filename 2>&1     Redirect both stdout and stderr to file "filename"   grep -R foo /etc/ &> out.txt
    command &>> filename
    command >> filename 2>&1    Append both stdout and stderr to file "filename"     whois domain &>> log.txt

    Vivek Gite is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting.

    1. Matt Kukowski says: January 29, 2014 at 6:33 pm

      In pre-bash4 days you HAD to do it this way:

      cat file > file.txt 2>&1

      now with bash 4 and greater versions you can still do it the old way but

      cat file &> file.txt

    The above is bash 4+. Some OLD distros may use pre-bash-4, but I think they are all long gone by now. Just something to keep in mind.

    2. iamfrankenstein says: June 12, 2014 at 8:35 pm

    I really love: " command 2>&1 | tee logfile.txt "

    because tee logs everything and prints to stdout. So you still get to see everything! You can even combine sudo to downgrade to a log user account and add a date subject and store it in a default log directory :)

    [Jul 05, 2020] Learn Bash the Hard Way by Ian Miell [Leanpub PDF-iPad-Kindle]

    Highly recommended!
    Jul 05, 2020 | leanpub.com


    skeptic
    5.0 out of 5 stars Reviewed in the United States on July 2, 2020

    A short (160 pages) book that covers some difficult aspects of bash needed to customize your bash environment.

    Whether we want it or not, bash is the shell you face in Linux, and unfortunately, it is often misunderstood and misused. Issues related to creating your bash environment are not well addressed in existing books. This book fills the gap.

    Few authors understand that bash is a complex, non-orthogonal language operating in a complex Linux environment. To make things worse, bash is an evolution of Unix shell and is a rather old language with warts and all. Using it properly as a programming language requires a serious study, not just an introduction to the basic concepts. Even issues related to customization of dotfiles are far from trivial, and you need to know quite a bit to do it properly.

    At the same time, proper customization of bash environment does increase your productivity (or at least lessens the frustration of using Linux on the command line ;-)

    The author covered the most important concepts related to this task, such as bash history, functions, variables, environment inheritance, etc. It is really sad to watch how the majority of Linux users do not use these opportunities and forever remain on "level zero," using default dotfiles with bare minimum customization.

    This book contains some valuable tips even for a seasoned sysadmin (for example, the use of |& in pipes), and as such, is worth at least double the suggested price. It allows you to intelligently customize your bash environment after reading just 160 pages and doing the suggested exercises.

    Contents:

    [Jul 04, 2020] Eleven bash Tips You Might Want to Know by Ian Miell

    Highly recommended!
    Notable quotes:
    "... Material here based on material from my book Learn Bash the Hard Way . Free preview available here . ..."
    "... natively in bash ..."
    Jul 04, 2020 | zwischenzugs.com

    Here are some tips that might help you be more productive with bash.

    1) ^x^y^

    A gem I use all the time.

    Ever typed anything like this?

    $ grp somestring somefile
    -bash: grp: command not found
    

    Sigh. Hit 'up', 'left' until at the 'p' and type 'e' and return.

    Or do this:

    $ ^rp^rep^
    grep 'somestring' somefile
    $
    

    One subtlety you may want to note though is:

    $ grp rp somefile
    $ ^rp^rep^
    $ grep rp somefile
    

    If you wanted rep to be searched for, then you'll need to dig into the man page and use a more powerful history command:

    $ grp rp somefile
    $ !!:gs/rp/rep
    grep rep somefile
    $
    

    ... ... ...


    Material here based on material from my book
    Learn Bash the Hard Way .
    Free preview available here .


    3) shopt vs set

    This one bothered me for a while.

    What's the difference between set and shopt ?

    set we saw before, but shopt looks very similar. Just inputting shopt shows a bunch of options:

    $ shopt
    cdable_vars    off
    cdspell        on
    checkhash      off
    checkwinsize   on
    cmdhist        on
    compat31       off
    dotglob        off
    

    I found a set of answers here. Essentially, it looks like it's a consequence of bash (and other shells) being built on sh, with shopt added as another way to set extra shell options. But I'm still unsure; if you know the answer, let me know.
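    As a rough rule of thumb (my reading, not gospel): set -o controls options inherited from the POSIX sh lineage, while shopt controls bash-specific behavior. A quick sketch:

```shell
set -o noclobber      # sh-lineage option: > refuses to overwrite files
set +o noclobber      # turn it back off (+o disables)

shopt -s nullglob     # bash-specific: unmatched globs expand to nothing
shopt -u nullglob     # -u unsets it, back to the default

# Query without changing anything:
set -o | grep noclobber
shopt nullglob
```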

    4) Here Docs and Here Strings

    'Here docs' are files created inline in the shell.

    The 'trick' is simple. Define a closing word, and the lines between that word and when it appears alone on a line become a file.

    Type this:

    $ cat > afile << SOMEENDSTRING
    > here is a doc
    > it has three lines
    > SOMEENDSTRING alone on a line will save the doc
    > SOMEENDSTRING
    $ cat afile
    here is a doc
    it has three lines
    SOMEENDSTRING alone on a line will save the doc

    Notice that the final SOMEENDSTRING line terminates the doc and is not itself saved in the file.

    Lesser known is the 'here string':

    $ cat > asd <<< 'This file has one line'
    
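    Here strings pair nicely with read, since they feed a string to stdin without the subshell a pipeline would create; a small sketch with illustrative data:

```shell
line="alice 42"                 # illustrative data
read -r name age <<< "$line"    # split on whitespace into two variables
echo "$name is $age"            # alice is 42
```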
    5) String Variable Manipulation

    You may have written code like this before, where you use tools like sed to manipulate strings:

    $ VAR='HEADERMy voice is my passwordFOOTER'
    $ PASS="$(echo $VAR | sed 's/^HEADER\(.*\)FOOTER/\1/')"
    $ echo $PASS
    

    But you may not be aware that this is possible natively in bash .

    This means that you can dispense with lots of sed and awk shenanigans.

    One way to rewrite the above is:

    $ VAR='HEADERMy voice is my passwordFOOTER'
    $ PASS="${VAR#HEADER}"
    $ PASS="${PASS%FOOTER}"
    $ echo $PASS

    The second method is twice as fast as the first on my machine. And (to my surprise), it was roughly the same speed as a similar python script .

    If you want to use glob patterns that are greedy (see globbing here ) then you double up:

    $ VAR='HEADERMy voice is my passwordFOOTER'
    $ echo ${VAR##HEADER*}
    $ echo ${VAR%%*FOOTER}
    
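    The same family of expansions also covers lengths, substrings, and substitutions natively; a sketch on the same illustrative string:

```shell
VAR='HEADERMy voice is my passwordFOOTER'
echo "${#VAR}"               # length: 35
echo "${VAR:6:8}"            # substring from offset 6, length 8: My voice
echo "${VAR/voice/VOICE}"    # replace the first match
echo "${VAR//s/S}"           # // replaces every match
```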
    6) ​Variable Defaults

    These are very handy when you're knocking up scripts quickly.

    If you have a variable that's not set, you can 'default' them by using this. Create a file called default.sh with these contents

    #!/bin/bash
    FIRST_ARG="${1:-no_first_arg}"
    SECOND_ARG="${2:-no_second_arg}"
    THIRD_ARG="${3:-no_third_arg}"
    echo ${FIRST_ARG}
    echo ${SECOND_ARG}
    echo ${THIRD_ARG}

    Now run chmod +x default.sh and run the script with ./default.sh first second .

    Observe how the third argument's default has been assigned, but not the first two.

    You can also assign directly with ${VAR:=defaultval} (equals sign, not dash), but note that this won't work with positional variables in scripts or functions. Try changing the above script to see how it fails.
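    The distinction between :- (substitute only) and := (substitute and assign) fits in a few lines; NAME here is illustrative:

```shell
unset NAME
echo "${NAME:-guest}"   # prints guest, but NAME stays unset
echo "${NAME:=guest}"   # prints guest AND assigns it to NAME
echo "$NAME"            # guest
# ${NAME:?message} is the strict cousin: it aborts if NAME is unset or empty
```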

    7) Traps

    The trap built-in can be used to 'catch' when a signal is sent to your script.

    Here's an example I use in my own cheapci script:

    function cleanup() {
        rm -rf "${BUILD_DIR}"
        rm -f "${LOCK_FILE}"
        # get rid of /tmp detritus, leaving anything accessed 2 days ago+
        find "${BUILD_DIR_BASE}"/* -type d -atime +1 -exec rm -rf {} +
        echo "cleanup done"                                                                                                                          
    } 
    trap cleanup TERM INT QUIT

    Any attempt to CTRL-C, CTRL-\, or terminate the program using the TERM signal will result in cleanup being called first.

    Be aware:

    • Trap logic can get very tricky (eg handling signal race conditions)
    • The KILL signal can't be trapped in this way

    But mostly I've used this for 'cleanups' like the above, which serve their purpose.
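    A related pattern: trapping EXIT fires the handler on any exit, normal or not, which covers most cleanup needs with a single line (TMPFILE is illustrative):

```shell
#!/bin/bash
TMPFILE=$(mktemp)
trap 'rm -f "$TMPFILE"' EXIT   # runs however the script ends
echo "working in $TMPFILE"
# ... script body; no explicit cleanup call needed at each exit point
```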

    8) Shell Variables

    It's well worth getting to know the standard shell variables available to you . Here are some of my favourites:

    RANDOM

    Don't rely on this for your cryptography stack, but you can generate random numbers eg to create temporary files in scripts:

    $ echo ${RANDOM}
    16313
    $ # Not enough digits?
    $ echo ${RANDOM}${RANDOM}
    113610703
    $ NEWFILE=/tmp/newfile_${RANDOM}
    $ touch $NEWFILE
    
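    As the author notes, RANDOM is not cryptographic; for temporary files specifically, mktemp avoids both collisions and predictable names (a sketch):

```shell
NEWFILE=$(mktemp /tmp/newfile_XXXXXX)   # atomically creates a unique file
echo "created $NEWFILE"
rm -f "$NEWFILE"
```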
    REPLY

    No need to give a variable name for read

    $ read
    my input
    $ echo ${REPLY}
    LINENO and SECONDS

    Handy for debugging

    $ echo ${LINENO}
    115
    $ echo ${SECONDS}; sleep 1; echo ${SECONDS}; echo $LINENO
    174380
    174381
    116

    Note that there are two 'lines' above, even though you used ; to separate the commands.

    TMOUT

    You can timeout reads, which can be really handy in some scripts

    #!/bin/bash
    TMOUT=5
    echo You have 5 seconds to respond...
    read
    echo ${REPLY:-noreply}

    ... ... ...

    10) Associative Arrays

    Talking of moving to other languages, a rule of thumb I use is that if I need arrays then I drop bash to go to python (I even created a Docker container for a tool to help with this here ).

    What I didn't know until I read up on it was that you can have associative arrays in bash.

    Type this out for a demo:

    $ declare -A MYAA=([one]=1 [two]=2 [three]=3)
    $ MYAA[one]="1"
    $ MYAA[two]="2"
    $ echo $MYAA
    $ echo ${MYAA[one]}
    $ MYAA[one]="1"
    $ WANT=two
    $ echo ${MYAA[$WANT]}
    

    Note that this is only available in bashes 4.x+.
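    To list keys and values (the part the demo above leaves out), the ${!arr[@]} expansion gives the keys:

```shell
declare -A MYAA=([one]=1 [two]=2 [three]=3)
for k in "${!MYAA[@]}"; do        # ${!arr[@]} expands to the keys
    echo "$k -> ${MYAA[$k]}"
done
echo "size: ${#MYAA[@]}"          # size: 3
```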

    ... ... ...

    [Jul 02, 2020] 7 Bash history shortcuts you will actually use by Ian Miell

    Highly recommended!
    Notable quotes:
    "... The "last argument" one: !$ ..."
    "... The " n th argument" one: !:2 ..."
    "... The "all the arguments": !* ..."
    "... The "last but n " : !-2:$ ..."
    "... The "get me the folder" one: !$:h ..."
    "... I use "!*" for "all arguments". It doesn't have the flexibility of your approach but it's faster for my most common need. ..."
    "... Provided that your shell is readline-enabled, I find it much easier to use the arrow keys and modifiers to navigate through history than type !:1 (or having to remeber what it means). ..."
    Oct 02, 2019 | opensource.com

    7 Bash history shortcuts you will actually use: Save time on the command line with these essential Bash shortcuts. 02 Oct 2019

    Most guides to Bash history shortcuts exhaustively list every single one available. The problem with that is I would use a shortcut once, then glaze over as I tried out all the possibilities. Then I'd move onto my working day and completely forget them, retaining only the well-known !! trick I learned when I first started using Bash.

    So most of them were never committed to memory.

    This article outlines the shortcuts I actually use every day. It is based on some of the contents of my book, Learn Bash the Hard Way (you can read a preview of it to learn more).

    When people see me use these shortcuts, they often ask me, "What did you do there!?" There's minimal effort or intelligence required, but to really learn them, I recommend using one each day for a week, then moving to the next one. It's worth taking your time to get them under your fingers, as the time you save will be significant in the long run.

    1. The "last argument" one: !$

    If you only take one shortcut from this article, make it this one. It substitutes in the last argument of the last command into your line.

    Consider this scenario:

    $ mv /path/to/wrongfile /some/other/place
    mv: cannot stat '/path/to/wrongfile': No such file or directory

    Ach, I put the wrongfile filename in my command. I should have put rightfile instead.

    You might decide to retype the last command and replace wrongfile with rightfile completely. Instead, you can type:

    $ mv /path/to/rightfile !$
    mv /path/to/rightfile /some/other/place

    and the command will work.

    There are other ways to achieve the same thing in Bash with shortcuts, but this trick of reusing the last argument of the last command is one I use the most.

    2. The " n th argument" one: !:2

    Ever done anything like this?

    $ tar -cvf afolder afolder.tar
    tar: failed to open

    Like many others, I get the arguments to tar (and ln ) wrong more often than I would like to admit.


    When you mix up arguments like that, you can run:

    $ !:0 !:1 !:3 !:2
    tar -cvf afolder.tar afolder

    and your reputation will be saved.

    The last command's items are zero-indexed and can be substituted in with the number after the !: .

    Obviously, you can also use this to reuse specific arguments from the last command rather than all of them.

    3. The "all the arguments": !*

    Imagine I run a command like:

    $ grep '(ping|pong)' afile
    

    The arguments are correct; however, I want to match ping or pong in a file, but I used grep rather than egrep .

    I start typing egrep, but I don't want to retype the other arguments. So I can use the !:1-$ shortcut to ask for all the arguments to the previous command from the second item (remember they're zero-indexed) to the last one (represented by the $ sign).

    $ egrep !:1-$
    egrep '(ping|pong)' afile
    ping

    You don't need to pick 1-$ ; you can pick a subset like 1-2 or 3-9 (if you had that many arguments in the previous command).

    4. The "last but n " : !-2:$

    The shortcuts above are great when I know immediately how to correct my last command, but often I run commands after the original one, which means that the last command is no longer the one I want to reference.

    For example, using the mv example from before, if I follow up my mistake with an ls check of the folder's contents:

    $ mv /path/to/wrongfile /some/other/place
    mv: cannot stat '/path/to/wrongfile': No such file or directory
    $ ls /path/to/
    rightfile

    I can no longer use the !$ shortcut.

    In these cases, I can insert -n: (where n is the number of commands to go back in the history) after the ! to grab the last argument from an older command:

    $ mv /path/to/rightfile !-2:$
    mv /path/to/rightfile /some/other/place

    Again, once you learn it, you may be surprised at how often you need it.

    5. The "get me the folder" one: !$:h

    This one looks less promising on the face of it, but I use it dozens of times daily.

    Imagine I run a command like this:

    $ tar -cvf system.tar /etc/system
    tar: /etc/system: Cannot stat: No such file or directory
    tar: Error exit delayed from previous errors.

    The first thing I might want to do is go to the /etc folder to see what's in there and work out what I've done wrong.

    I can do this at a stroke with:

    $ cd !$:h
    cd /etc

    This one says: "Get the last argument to the last command ( /etc/system ) and take off its last filename component, leaving only the /etc ."

    6. The "the current line" one: !#:1

    For years, I occasionally wondered if I could reference an argument on the current line before finally looking it up and learning it. I wish I'd done so a long time ago. I most commonly use it to make backup files:

    $ cp /path/to/some/file !#:1.bak
    cp /path/to/some/file /path/to/some/file.bak

    but once under the fingers, it can be a very quick alternative to retyping the full path.

    7. The "search and replace" one: !!:gs

    This one searches across the referenced command and replaces what's in the first two / characters with what's in the second two.

    Say I want to tell the world that my s key does not work and outputs f instead:

    $ echo my f key doef not work
    my f key doef not work

    Then I realize that I was just hitting the f key by accident. To replace all the f s with s es, I can type:

    $ !!:gs/f/s/
    echo my s key does not work
    my s key does not work

    It doesn't work only on single characters; I can replace words or sentences, too:

    $ !!:gs/does/did/
    echo my s key did not work
    my s key did not work

    Test them out

    Just to show you how these shortcuts can be combined, can you work out what these toenail clippings will output?

    $ ping !#:0:gs/i/o
    $ vi /tmp/!:0.txt
    $ ls !$:h
    $ cd !-2:h
    $ touch !$!-3:$ !! !$.txt
    $ cat !:1-$

    Conclusion

    Bash can be an elegant source of shortcuts for the day-to-day command-line user. While there are thousands of tips and tricks to learn, these are my favorites that I frequently put to use.

    If you want to dive even deeper into all that Bash can teach you, pick up my book, Learn Bash the hard way or check out my online course, Master the Bash shell .


    This article was originally posted on Ian's blog, Zwischenzugs.com , and is reused with permission.

    Orr, August 25, 2019 at 10:39 pm

    BTW you inspired me to try and understand how to repeat the nth command entered on command line. For example I type 'ls' and then accidentally type 'clear'. !! will retype clear again but I wanted to retype ls instead using a shortcut.
    Bash doesn't accept ':' so !:2 didn't work. !-2 did however, thank you!

    Dima August 26, 2019 at 7:40 am

    Nice article! Just one more cool and often-used command: !vi opens the last vi command with its arguments.

    cbarrick on 03 Oct 2019

    Your "current line" example is too contrived. Your example is copying to a backup like this:

    $ cp /path/to/some/file !#:1.bak

    But a better way to write that is with filename generation:

    $ cp /path/to/some/file{,.bak}

    That's not a history expansion though... I'm not sure I can come up with a good reason to use `!#:1`.

    Darryl Martin August 26, 2019 at 4:41 pm

    I seldom get anything out of these "bash commands you didn't know" articles, but you've got some great tips here. I'm writing several down and sticking them on my terminal for reference.

    A couple additions I'm sure you know.

    1. I use "!*" for "all arguments". It doesn't have the flexibility of your approach but it's faster for my most common need.
    2. I recently started using Alt-. as a substitute for "!$" to get the last argument. It expands the argument on the line, allowing me to modify it if necessary.

    Ricardo J. Barberis on 06 Oct 2019

    The problem with bash's history shorcuts for me is... that I never had the need to learn them.

    Provided that your shell is readline-enabled, I find it much easier to use the arrow keys and modifiers to navigate through history than to type !:1 (or having to remember what it means).

    Examples:

    Ctrl+R for a Reverse search
    Ctrl+A to move to the begnining of the line (Home key also)
    Ctrl+E to move to the End of the line (End key also)
    Ctrl+K to Kill (delete) text from the cursor to the end of the line
    Ctrl+U to kill text from the cursor to the beginning of the line
    Alt+F to move Forward one word (Ctrl+Right arrow also)
    Alt+B to move Backward one word (Ctrl+Left arrow also)
    etc.

    YMMV of course.

    [Jul 02, 2020] Some Relatively Obscure Bash Tips zwischenzugs

    Jul 02, 2020 | zwischenzugs.com

    2) |&

    You may already be familiar with 2>&1 , which redirects standard error to standard output, but until I stumbled on it in the manual, I had no idea that you can pipe both standard output and standard error into the next stage of the pipeline like this:

    if doesnotexist |& grep 'command not found' >/dev/null
    then
      echo oops
    fi
    
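    To see the equivalence concretely: |& is just shorthand (bash 4+) for 2>&1 | (the path below is an illustrative failing command):

```shell
# Both pipelines feed ls's error message into grep:
LC_ALL=C ls /nosuchdir |& grep -c 'No such'
LC_ALL=C ls /nosuchdir 2>&1 | grep -c 'No such'
# Each prints 1: the single error line matched
```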
    3) $''

    This construct allows you to specify specific bytes in scripts without fear of triggering some kind of encoding problem. Here's a command that will grep through files looking for UK currency ('£') signs in hexadecimal recursively:

    grep -r $'\xc2\xa3' *
    

    You can also use octal:

    grep -r $'\302\243' *
    
    4) HISTIGNORE

    If you are concerned about security, and ever type in commands that might have sensitive data in them, then this one may be of use.

    This environment variable does not put the commands specified in your history file if you type them in. The commands are separated by colons:

    HISTIGNORE="ls *:man *:history:clear:AWS_KEY*"
    

    You have to specify the whole line, so a glob character may be needed if you want to exclude commands and their arguments or flags.

    5) fc

    If readline key bindings aren't under your fingers, then this one may come in handy.

    It calls up the last command you ran, and places it into your preferred editor (specified by the EDITOR variable). Once edited, it re-runs the command.

    6) ((i++))

    If you can't be bothered with faffing around with variables in bash with the $[] construct, you can use the C-style compound command.

    So, instead of:

    A=1
    A=$[$A+1]
    echo $A
    

    you can do:

    A=1
    ((A++))
    echo $A
    

    which, especially with more complex calculations, might be easier on the eye.
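    The same machinery also covers arithmetic expansion and arithmetic tests; a short sketch:

```shell
A=1
((A += 5))               # compound arithmetic command: A is now 6
echo $((A * 2))          # arithmetic expansion prints 12
((A > 3)) && echo big    # nonzero result = success exit status, prints big
```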

    7) caller

    Another builtin bash command, caller gives context about the calling frames of your shell script.

    SHLVL is a related shell variable which gives the level of depth of the calling stack.

    This can be used to create stack traces for more complex bash scripts.

    Here's a die function, adapted from the bash hackers' wiki that gives a stack trace up through the calling frames:

    #!/bin/bash
    die() {
      local frame=0
      ((FRAMELEVEL=SHLVL - frame))
      echo -n "${FRAMELEVEL}: "
      while caller $frame; do
        ((frame++));
        ((FRAMELEVEL=SHLVL - frame))
        if [[ ${FRAMELEVEL} -gt -1 ]]
        then
          echo -n "${FRAMELEVEL}: "
        fi
      done
      echo "$*"
      exit 1
    }
    

    which outputs:

    3: 17 f1 ./caller.sh
    2: 18 f2 ./caller.sh
    1: 19 f3 ./caller.sh
    0: 20 main ./caller.sh
    *** an error occurred ***
    
    8) /dev/tcp/host/port

    This one can be particularly handy if you find yourself on a container running within a Kubernetes cluster service mesh without any network tools (a frustratingly common experience).

    Bash provides you with some virtual files which, when referenced, can create socket connections to other servers.

    This snippet, for example, makes a web request to a site and returns the output.

    exec 9<>/dev/tcp/brvtsdflnxhkzcmw.neverssl.com/80
    echo -e "GET /online HTTP/1.1\r\nHost: brvtsdflnxhkzcmw.neverssl.com\r\n\r\n" >&9
    cat <&9
    

    The first line opens up file descriptor 9 to the host brvtsdflnxhkzcmw.neverssl.com on port 80 for reading and writing. Line two sends the raw HTTP request to that socket connection's file descriptor. The final line retrieves the response.

    Obviously, this doesn't handle SSL for you, so its use is limited now that pretty much everyone is running on https, but when running from application containers within a service mesh can still prove invaluable, as requests there are initiated using HTTP.
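    One handy corollary (host and port here are illustrative): a quick TCP port probe with no netcat installed, guarded by coreutils timeout so filtered ports don't hang the shell:

```shell
host=example.com
port=80
if timeout 3 bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port is open"
else
    echo "$host:$port is closed or filtered"
fi
```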

    9) Co-processes

    Since version 4 of bash it has offered the capability to run named coprocesses.

    It seems to be particularly well-suited to managing the inputs and outputs to other processes in a fine-grained way. Here's an annotated and trivial example:

    coproc testproc (
      i=1
      while true
      do
        echo "iteration:${i}"
        ((i++))
        read -r aline
        echo "${aline}"
      done
    )
    

    This sets up the coprocess as a subshell with the name testproc.

    Within the subshell, there's a never-ending while loop that counts its own iterations with the i variable. It outputs two lines: the iteration number, and a line read in from standard input.

    After creating the coprocess, bash sets up an array with that name with the file descriptor numbers for the standard input and standard output. So this:

    echo "${testproc[@]}"
    

    in my terminal outputs:

    63 60
    

    Bash also sets up a variable with the process identifier for the coprocess, which you can see by echoing it:

    echo "${testproc_PID}"
    

    You can now input data to the standard input of this coprocess at will like this:

    echo input1 >&"${testproc[1]}"
    

    In this case, the command resolves to echo input1 >&60, and the >&[INTEGER] construct ensures the redirection goes to the coprocess's standard input.

    Now you can read the output of the coprocess's two lines in a similar way, like this:

    read -r output1a <&"${testproc[0]}"
    read -r output1b <&"${testproc[0]}"
    

    You might use this to create an expect -like script if you were so inclined, but it could be generally useful if you want to manage inputs and outputs. Named pipes are another way to achieve a similar result.
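    For comparison, here is a minimal sketch of the named-pipe alternative just mentioned, using mkfifo (the temp-file names are arbitrary):

```shell
#!/bin/bash
# Create a throwaway FIFO (named pipe) in a private temp directory.
tmpdir=$(mktemp -d)
fifo="$tmpdir/pipe"
mkfifo "$fifo"

# Background "co-process": read one line from the pipe, write back a reply.
( read -r line < "$fifo"; echo "got: $line" > "$tmpdir/reply" ) &

# Writing to the FIFO blocks until the reader opens its end, which
# synchronizes the two processes much like the coproc file descriptors.
echo "hello" > "$fifo"
wait

reply=$(cat "$tmpdir/reply")
echo "$reply"
rm -rf "$tmpdir"
```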

    Here's a complete listing for those who want to cut and paste:

    #!/bin/bash
    coproc testproc (
      i=1
      while true
      do
        echo "iteration:${i}"
        ((i++))
        read -r aline
        echo "${aline}"
      done
    )
    echo "${testproc[@]}"
    echo "${testproc_PID}"
    echo input1 >&"${testproc[1]}"
    read -r output1a <&"${testproc[0]}"
    read -r output1b <&"${testproc[0]}"
    echo "${output1a}"
    echo "${output1b}"
    echo input2 >&"${testproc[1]}"
    read -r output2a <&"${testproc[0]}"
    read -r output2b <&"${testproc[0]}"
    echo "${output2a}"
    echo "${output2b}"
    

    [Jul 02, 2020] Associative arrays in Bash by Seth Kenlon

    Apr 02, 2020 | opensource.com

    Originally from: Get started with Bash scripting for sysadmins - Opensource.com

    Most shells offer the ability to create, manipulate, and query indexed arrays. In plain English, an indexed array is a list of things prefixed with a number. This list of things, along with their assigned number, is conveniently wrapped up in a single variable, which makes it easy to "carry" it around in your code.

    Bash, however, includes the ability to create associative arrays and treats these arrays the same as any other array. An associative array lets you create lists of key and value pairs, instead of just numbered values.

    The nice thing about associative arrays is that keys can be arbitrary:

    $ declare -A userdata
    $ userdata[name]=seth
    $ userdata[pass]=8eab07eb620533b083f241ec4e6b9724
    $ userdata[login]=`date --utc +%s`

    Query any key:

    $ echo "${userdata[name]}"
    seth
    $ echo "${userdata[login]}"
    1583362192

    Most of the usual array operations you'd expect from an array are available.
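    A few of those operations, sketched against the userdata array from the example above:

```shell
#!/bin/bash
declare -A userdata
userdata[name]=seth
userdata[pass]=8eab07eb620533b083f241ec4e6b9724

# All keys, all values, and the number of entries:
echo "keys:   ${!userdata[@]}"
echo "values: ${userdata[@]}"
echo "count:  ${#userdata[@]}"

# Iterate over key/value pairs (iteration order is not guaranteed):
for key in "${!userdata[@]}"; do
  echo "${key} -> ${userdata[$key]}"
done

# Remove a single entry:
unset 'userdata[pass]'
echo "count after unset: ${#userdata[@]}"
```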


    [Jul 02, 2020] DevOps is a Myth Effective Software Delivery Enablement

    Jul 02, 2020 | otomato.link

    DevOps is a Myth

    Tags : Agile Books DevOps IT management software delivery

    Category : Tools (Practitioner's Reflections on The DevOps Handbook)

    The Holy Wars of DevOps

    Yet another argument explodes online around the 'true nature of DevOps', around 'what DevOps really means' or around 'what DevOps is not'. At each conference I attend we talk about DevOps culture, DevOps mindset and DevOps ways. All confirming one single truth: DevOps is a myth.

    Now don't get me wrong – in no way is this a negation of its validity or importance. As Y.N.Harrari shows so eloquently in his book 'Sapiens' – myths were the forming power in the development of humankind. It is in fact our ability to collectively believe in these non-objective, imagined realities that allows us to collaborate at large scale, to coordinate our actions, to build pyramids, temples, cities and roads.

    There's a Handbook!

    I am writing this while finishing the exceptionally well written "DevOps Handbook" . If you really want to know what stands behind the all-too-often misinterpreted buzzword – you better read this cover-to-cover. It presents an almost-no-bullshit deep dive into why, how and what in DevOps. And it comes from the folks who invented the term and have been busy developing its main concepts over the last 7 years.


    Now notice – I'm only saying you should read the "DevOps Handbook" if you want to understand what DevOps is about. After finishing it I'm pretty sure you won't have any interest in participating in petty arguments along the lines of 'is DevOps about automation or not?'. But I'm not saying you should read the handbook if you want to know how to improve and speed up your software manufacturing and delivery processes. And neither if you want to optimize your IT organization for innovation and continuous improvement.

    Because the main realization that you, as a smart reader, will arrive at – is just that there is no such thing as DevOps. DevOps is a myth .

    So What's The Story?

    It all basically comes down to this: some IT companies achieve better results than others . Better revenues, higher customer and employee satisfaction, faster value delivery, higher quality. There's no one-size-fits-all formula, there is no magic bullet – but we can learn from these high performers and try to apply certain tools and practices in order to improve the way we work and achieve similar or better results. These tools and processes come from a myriad of management theories and practices. Moreover – they are constantly evolving, so we need to always be learning. But at least we have the promise of better life. That is if we get it all right: the people, the architecture, the processes, the mindset, the org structure, etc.

    So it's not about certain tools, because the tools will change. And it's not about certain practices – because we're creative and frameworks come and go. I don't see too many folks using Kanban boards 10 years from now (in the same way only the laggards use Gantt charts today). And then the speakers at the next fancy conference will tell you it's mainly about culture. And you know what culture is? It's just a story, or rather a collection of stories that a group of people share. Stories that tell us something about the world and about ourselves. Stories that have only a very relative connection to the material world. Stories that can easily be proven as myths by another group of folks who believe them to be wrong.

    But Isn't It True?

    Anybody who's studied management theories knows how the approaches have changed since the beginning of the last century. From Taylor's scientific management down to McGregor's X&Y theory, they've all had their followers – managers who applied them and swore they were getting great results thanks to them. And yet most of these theories have been proven wrong by their successors.

    In the same way we see this happening with DevOps and Agile. Agile has been all the buzz since its inception in 2001. Teams were moving to Scrum, then Kanban, now SAFe and LeSS. But Agile didn't deliver on its promise of better life. Or rather – it became so commonplace that it lost its edge. Without the hype, we now realize it has its downsides. And we now hope that maybe this new DevOps thing will make us happy.

    You may say that the world is changing fast – that's why we now need new approaches! And I agree – the technology, the globalization, the flow of information – they all change the stories we live in. But this also means that whatever is working for someone else today won't probably work for you tomorrow – because the world will change yet again.

    Which means that the DevOps Handbook – while a great overview and historical document and a source of inspiration – should not be taken as a guide to action. It's just another step towards establishing the DevOps myth.

    And that takes us back to where we started – myths and stories aren't bad in themselves. They help us collaborate by providing a common semantic system and shared goals. But they only work while we believe in them and until a new myth comes around – one powerful enough to grab our attention.

    Your Own DevOps Story

    So if we agree that DevOps is just another myth, what are we left with? What do we at Otomato and other DevOps consultants and vendors have to sell? Well, it's the same thing we've been building even before the DevOps buzz: effective software delivery and IT management. Based on tools and processes, automation and effective communication. Relying on common sense and on being experts in whatever myth is currently believed to be true.

    As I keep saying – culture is a story you tell. And we make sure to be experts in both the storytelling and the actual tooling and architecture. If you're currently looking at creating a DevOps transformation or simply want to optimize your software delivery – give us a call. We'll help to build your authentic DevOps story, to train your staff and to architect your pipeline based on practice, skills and your organization's actual needs. Not based on myths that other people tell.

    [Jul 02, 2020] Import functions and variables into Bash with the source command by Seth Kenlon

    Jun 12, 2020 | opensource.com
    Source is like a Python import or a Java include. Learn it to expand your Bash prowess.

    When you log into a Linux shell, you inherit a specific working environment. An environment , in the context of a shell, means that there are certain variables already set for you, which ensures your commands work as intended. For instance, the PATH environment variable defines where your shell looks for commands. Without it, nearly everything you try to do in Bash would fail with a command not found error. Your environment, while mostly invisible to you as you go about your everyday tasks, is vitally important.

    There are many ways to affect your shell environment. You can make modifications in configuration files, such as ~/.bashrc and ~/.profile , you can run services at startup, and you can create your own custom commands or script your own Bash functions .

    Add to your environment with source

    Bash (along with some other shells) has a built-in command called source . And here's where it can get confusing: source performs the same function as the command . (yes, that's but a single dot), and it's not the same source as the Tcl command (which may come up on your screen if you type man source ). The built-in source command isn't in your PATH at all, in fact. It's a command that comes included as a part of Bash, and to get further information about it, you can type help source .

    The . command is POSIX -compliant. The source command is not defined by POSIX but is interchangeable with the . command.

    According to Bash's help, the source command executes a file in your current shell. The clause "in your current shell" is significant, because it means it doesn't launch a sub-shell; therefore, whatever you execute with source happens within and affects your current environment.
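    To see the difference, compare running a file in a sub-shell with sourcing it into the current one (the temp file here is an arbitrary illustration):

```shell
#!/bin/bash
# A small file that does nothing but set a variable.
tmpfile=$(mktemp)
echo 'GREETING="hello from the sourced file"' > "$tmpfile"

bash "$tmpfile"                  # runs in a sub-shell; GREETING stays unset here
echo "after bash:   ${GREETING:-unset}"

source "$tmpfile"                # runs in the current shell
echo "after source: ${GREETING}"

rm -f "$tmpfile"
```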

    Before exploring how source can affect your environment, try source on a test file to ensure that it executes code as expected. First, create a simple Bash script and save it as a file called hello.sh :

    #!/usr/bin/env bash
    echo "hello world"

    Using source , you can run this script even without setting the executable bit:

    $ source hello.sh
    hello world

    You can also use the built-in . command for the same results:

    $ . hello.sh
    hello world

    The source and . commands successfully execute the contents of the test file.

    Set variables and import functions

    You can use source to "import" a file into your shell environment, just as you might use the include keyword in C or C++ to reference a library or the import keyword in Python to bring in a module. This is one of the most common uses for source , and it's a common default inclusion in .bashrc files to source a file called .bash_aliases so that any custom aliases you define get imported into your environment when you log in.

    Here's an example of importing a Bash function. First, create a function in a file called myfunctions . This prints your public IP address and your local IP address:

    function myip() {
      curl http://icanhazip.com

      ip addr | grep inet | \
        cut -d "/" -f 1 | \
        grep -v '127\.0' | \
        grep -v '::1' | \
        awk '{$1=$1};1'
    }

    Import the function into your shell:

    $ source myfunctions

    Test your new function:

    $ myip
    93.184.216.34
    inet 192.168.0.23
    inet6 fbd4:e85f:49c:2121:ce12:ef79:0e77:59d1
    inet 10.8.42.38

    Search for source

    When you use source in Bash with a filename that contains no slash, Bash searches the directories in your PATH for that file. This doesn't happen in all shells, so check your documentation if you're not using Bash.

    If Bash can't find the file in your PATH, it then falls back to searching your current directory (unless Bash is running in POSIX mode). Again, this isn't the default for all shells, so check your documentation if you're not using Bash.

    These are both nice convenience features in Bash. This behavior is surprisingly powerful because it allows you to store common functions in a centralized location on your drive and then treat your environment like an integrated development environment (IDE). You don't have to worry about where your functions are stored, because you know they're in your local equivalent of /usr/include , so no matter where you are when you source them, Bash finds them.

    For instance, you could create a directory called ~/.local/include as a storage area for common functions and then put this block of code into your .bashrc file:

    for i in "$HOME"/.local/include/*; do
      [ -f "$i" ] && source "$i"
    done

    This "imports" any file containing custom functions in ~/.local/include into your shell environment.

    Bash is unusual among shells in searching both your PATH and the current directory when you use either the source or the . command.

    Using source for open source

    Using source or . to execute files can be a convenient way to affect your environment while keeping your alterations modular. The next time you're thinking of copying and pasting big blocks of code into your .bashrc file, consider placing related functions or groups of aliases into dedicated files, and then use source to ingest them.

    Get started with Bash scripting for sysadmins Learn the commands and features that make Bash one of the most powerful shells available.

    Seth Kenlon (Red Hat) Introduction to automation with Bash scripts In the first article in this four-part series, learn how to create a simple shell script and why they are the best way to automate tasks.

    David Both (Correspondent) Bash cheat sheet: Key combos and special syntax Download our new cheat sheet for Bash commands and shortcuts you need to talk to your computer.

    [Jul 01, 2020] Use curl to test an application's endpoint or connectivity to an upstream service endpoint

    Notable quotes:
    "... The -I option shows the header information and the -s option silences the response body. Checking the endpoint of your database from your local desktop: ..."
    Jul 01, 2020 | opensource.com

    curl

    curl transfers data to or from a URL. Use this command to test an application's endpoint or connectivity to an upstream service endpoint. curl can be useful for determining if your application can reach another service, such as a database, or for checking if your service is healthy.

    As an example, imagine your application throws an HTTP 500 error indicating it can't reach a MongoDB database:

    $ curl -I -s myapplication:5000
    HTTP/1.0 500 INTERNAL SERVER ERROR

    The -I option shows the header information and the -s option silences the response body. Checking the endpoint of your database from your local desktop:

    $ curl -I -s database:27017
    HTTP/1.0 200 OK

    So what could be the problem? Check if your application can get to other places besides the database from the application host:

    $ curl -I -s https://opensource.com
    HTTP/1.1 200 OK

    That seems to be okay. Now try to reach the database from the application host. Your application is using the database's hostname, so try that first:

    $ curl database:27017
    curl: (6) Couldn't resolve host 'database'

    This indicates that your application cannot resolve the database because the URL of the database is unavailable or the host (container or VM) does not have a nameserver it can use to resolve the hostname.

    [Jul 01, 2020] Stupid Bash tricks- History, reusing arguments, files and directories, functions, and more by Valentin Bajrami

    A moderately interesting example here is changing sudo systemctl status into sudo systemctl start via !!:s/status/start/
    It can probably be shortened: the quick-substitution form ^status^start performs the same replacement with fewer keystrokes.
    Jul 01, 2020 | www.redhat.com

    See also Bash bang commands- A must-know trick for the Linux command line - Enable Sysadmin

    Let's say I run the following command:

    $> sudo systemctl status sshd

    The output tells me the sshd service is not running, so the next thing I want to do is start the service. I had checked its status with my previous command. That command was saved in history , so I can reference it. I simply run:

    $> !!:s/status/start/
    sudo systemctl start sshd

    The above expression works as follows: !! recalls the previous command from history, and the :s/status/start/ modifier substitutes the first occurrence of status with start before re-executing it.

    The result is that the sshd service is started.

    Next, I increase the default HISTSIZE value from 500 to 5000 by using the following command:

    $> echo "HISTSIZE=5000" >> ~/.bashrc && source ~/.bashrc

    What if I want to display the last three commands in my history? I enter:

    $> history 3
     1002  ls
     1003  tail audit.log
     1004  history 3

    I run tail on audit.log by referring to the history line number. In this case, I use line 1003:

    $> !1003
    tail audit.log
    Reference the last argument of the previous command

    When I want to list directory contents for different directories, I may change between directories quite often. There is a nice trick you can use to refer to the last argument of the previous command. For example:

    $> pwd
    /home/username/
    $> ls some/very/long/path/to/some/directory
    foo-file bar-file baz-file

    In the above example, some/very/long/path/to/some/directory is the last argument of the previous command.

    If I want to cd (change directory) to that location, I enter something like this:

    $> cd $_
    
    $> pwd
    /home/username/some/very/long/path/to/some/directory

    Now I simply use a dash character to go back to where I was:

    $> cd -
    $> pwd
    /home/username/

    [Jun 28, 2020] Top 10 Resources to Learn Shell Scripting for Free

    Jun 28, 2020 | itsfoss.com


    Top Free Resources to Learn Shell Scripting

    Don't have Linux installed on your system? No worries. There are various ways of using the Linux terminal on Windows . You may also use online Linux terminals in some cases to practice shell scripting.

    1. Learn Shell [Interactive web portal]

    If you're looking for an interactive web portal to learn shell scripting and also try it online, Learn Shell is a great place to start.

    It covers the basics and offers some advanced exercises as well. The content is usually brief and to the point; hence, I'd recommend you check it out.

    Learn Shell

    2. Shell Scripting Tutorial [Web portal]

    Shell Scripting Tutorial is a web resource that's completely dedicated to shell scripting. You can choose to read the resource for free or opt to purchase the PDF, book, or e-book to support it.

    Of course, paying for the paperback edition or the e-book is optional. But, the resource should come in handy for free.

    Shell Scripting Tutorial

    3. Shell Scripting Udemy (Free video course)

    Udemy is unquestionably one of the most popular platforms for online courses. And, in addition to the paid certified courses, it also offers some free stuff that does not include certifications.

    Shell Scripting is one of the most recommended free courses available on Udemy. You can enroll in it without spending anything.

    Shell Scripting Udemy

    4. Bash Shell Scripting Udemy (Free video course)

    Yet another interesting free course focused on bash shell scripting on Udemy. Compared to the previous one, this resource seems to be more popular. So, you can enroll in it and see what it has to offer.

    Not to forget that the free Udemy course does not offer any certifications. But, it's indeed an impressive free shell scripting learning resource.

    5. Bash Academy [online portal with interactive game]

    As the name suggests, the Bash Academy is completely focused on educating users about the bash shell.

    It's suitable for both beginners and experienced users, even though it does not offer a lot of content. It is not limited to the guide -- it also used to offer an interactive practice game, which no longer works.

    Hence, if this is interesting enough, you can also check out its GitHub page and fork it to improve the existing resources if you want.

    Bash Academy

    6. Bash Scripting LinkedIn Learning (Free video course)

    LinkedIn offers a number of free courses to help you improve your skills and get ready for more job opportunities. You will also find a couple of courses focused on shell scripting to brush up some basic skills or gain some advanced knowledge in the process.

    Here, I've linked a course for bash scripting; you can find other similar courses for free as well.

    Bash Scripting (LinkedIn Learning)

    7. Advanced Bash Scripting Guide [Free PDF book]

    An impressive advanced Bash scripting guide, available as a free PDF. The resource is in the public domain and enforces no copyright.

    Even though the resource focuses on advanced insights, it's also suitable for beginners who want to start learning shell scripting.

    Advanced Bash Scripting Guide [PDF]

    8. Bash Notes for Professionals [Free PDF book]

    This is a good reference guide if you are already familiar with Bash shell scripting or if you just want a quick summary.

    This free downloadable book runs over 100 pages and covers a wide variety of scripting topics with brief descriptions and quick examples.

    Download Bash Notes for Professional

    9. Tutorialspoint [Web portal]

    Tutorialspoint is a popular web portal for learning a variety of programming languages. I would say it's quite good for starters to learn the fundamentals and the basics.

    It may not be suitable as a detailed resource -- but it's a useful free one.

    Tutorialspoint

    10. City College of San Francisco Online Notes [Web portal]

    This may not be the best free resource there is -- but if you're ready to explore every type of resource to learn shell scripting, why not refer to the online notes of City College of San Francisco?

    I came across this with a random search on the Internet about shell scripting resources.

    Again, it's important to note that the online notes could be a bit dated. But, it should be an interesting resource to explore.

    City College of San Francisco Notes

    Honorable mention: Linux Man Page

    Not to forget, the man page for bash is also a fantastic free resource to explore more about Bash commands and how the shell works.

    Even if it's not tailored as something that lets you master shell scripting, it is still an important web resource that you can use for free. You can either choose to visit the man page online or just head to the terminal and type the following command to get help:

    man bash
    
    Wrapping Up

    There are also a lot of popular paid resources just like some of the best Linux books available out there. It's easy to start learning about shell scripting using some free resources available across the web.

    In addition to the ones I've mentioned, I'm sure there must be numerous other resources available online to help you learn shell scripting.

    Do you like the resources mentioned above? Also, if you're aware of a fantastic free resource that I possibly missed, feel free to tell me about it in the comments below.




    About Ankush Das: A passionate technophile who also happens to be a Computer Science graduate. You will usually see cats dancing to the beautiful tunes sung by him.

    [Jun 26, 2020] Vim show line numbers by default on Linux

    Notable quotes:
    "... Apart from regular absolute line numbers, Vim supports relative and hybrid line numbers too to help navigate around text files. The 'relativenumber' vim option displays the line number relative to the line with the cursor in front of each line. Relative line numbers help you use the count you can precede some vertical motion commands with, without having to calculate it yourself. ..."
    "... We can enable both absolute and relative line numbers at the same time to get "Hybrid" line numbers. ..."
    Feb 29, 2020 | www.cyberciti.biz

    How do I show line numbers in Vim by default on Linux? Vim (Vi IMproved) is not just a free text editor; it is the number one editor for Linux sysadmin and software development work.

    By default, Vim doesn't show line numbers on Linux and Unix-like systems, but we can turn them on using the following instructions. My experience shows that line numbers are useful for debugging shell scripts, program code, and configuration files. Let us see how to display line numbers in vim permanently.

    Vim show line numbers by default

    Turn on absolute line numbering by default in vim:

    1. Open vim configuration file ~/.vimrc by typing the following command:
      vim ~/.vimrc
    2. Append the line set number to the end of the file
    3. Press the Esc key
    4. To save the config file, type :w and hit Enter key
    5. To temporarily disable absolute line numbers within a vim session, type:
      :set nonumber
    6. Want to re-enable absolute line numbers within a vim session? Try:
      :set number
    7. We can see vim line numbers on the left side.
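    The steps above can be condensed into a single shell command (a sketch; it appends to your existing ~/.vimrc, so review the file afterwards):

    ```shell
    # Append absolute line numbering to the Vim config (creates ~/.vimrc if absent)
    printf 'set number\n' >> ~/.vimrc

    # Verify the directive was added
    grep -n 'set number' ~/.vimrc
    ```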
    Relative line numbers

    Apart from regular absolute line numbers, Vim supports relative and hybrid line numbers too to help navigate around text files. The 'relativenumber' vim option displays the line number relative to the line with the cursor in front of each line. Relative line numbers help you use the count you can precede some vertical motion commands with, without having to calculate it yourself. Once again edit the ~/.vimrc, run:
    vim ~/.vimrc
    Finally, turn relative line numbers on:
    set relativenumber
    Save and close the file in vim text editor.
    VIM relative line numbers

    How to show "Hybrid" line numbers in Vim by default

    What happens when you put the following two config directives in ~/.vimrc ?
    set number
    set relativenumber

    That is right. We can enable both absolute and relative line numbers at the same time to get "Hybrid" line numbers.
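    As a sketch, both directives can be appended in one go (this appends to any existing ~/.vimrc rather than replacing it):

    ```shell
    # Enable hybrid line numbers: absolute number on the cursor line,
    # relative numbers on all other lines
    cat >> ~/.vimrc <<'EOF'
    set number
    set relativenumber
    EOF
    ```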

    Conclusion

    Today we learned about permanent line number settings for the vim text editor. By adding the "set number" config directive to the Vim configuration file ~/.vimrc, we forced vim to show line numbers each time it starts. See the vim docs for more info.

    [Jun 26, 2020] Taking a deeper dive into Linux chroot jails by Glen Newell

    Notable quotes:
    "... New to Linux containers? Download the Containers Primer and learn the basics. ..."
    Mar 02, 2020 | www.redhat.com

    Dive deeper into the chroot command and learn how to isolate specific services and specific users.

    More Linux resources

    In part one, How to setup Linux chroot jails, I covered the chroot command and you learned to use the chroot wrapper in sshd to isolate the sftpusers group. When you edit sshd_config to invoke the chroot wrapper and give it matching characteristics, sshd executes certain commands within the chroot jail or wrapper. You saw how this technique could potentially be useful to implement contained, rather than secure, access for remote users.

    Expanded example

    I'll start by expanding on what I did before, partly as a review. Start by setting up a custom directory for remote users. I'll use the sftpusers group again.

    Start by creating the custom directory that you want to use, and setting the ownership:

    # mkdir -p /sftpusers/chroot
    # chown root:root /sftpusers/chroot

    This time, make root the owner, rather than the sftpusers group. This way, when you add users, they don't start out with permission to see the whole directory.

    Next, create the user you want to restrict (you need to do this for each user in this case), add the new user to the sftpusers group, and deny a login shell because these are sftp users:

    # useradd sanjay -g sftpusers -s /sbin/nologin
    # passwd sanjay
    

    Then, create the directory for sanjay and set the ownership and permissions:

    # mkdir /sftpusers/chroot/sanjay
    # chown sanjay:sftpusers /sftpusers/chroot/sanjay
    # chmod 700 /sftpusers/chroot/sanjay

    Next, edit the sshd_config file. First, comment out the existing subsystem invocation and add the internal one:

    #Subsystem sftp /usr/libexec/openssh/sftp-server
    Subsystem sftp internal-sftp
    

    Then add our match case entry:

    Match Group sftpusers
    ChrootDirectory /sftpusers/chroot/
    ForceCommand internal-sftp
    X11Forwarding no
    AllowTCPForwarding no
    

    Note that you're back to specifying a directory, but this time, you have already set the ownership to prevent sanjay from seeing anyone else's stuff. That trailing / is also important.

    Then, restart sshd and test:

    [skipworthy@milo ~]$ sftp sanjay@showme
    sanjay@showme's password:
    Connected to sanjay@showme.
    sftp> ls
    sanjay
    sftp> pwd
    Remote working directory: /
    sftp> cd ..
    sftp> ls
    sanjay
    sftp> touch test
    Invalid command.
    

    So. Sanjay can only see his own folder and needs to cd into it to do anything useful.

    Isolating a service or specific user

    Now, what if you want to provide a usable shell environment for a remote user, or create a chroot jail environment for a specific service? To do this, create the jailed directory and the root filesystem, and then create links to the tools and libraries that you need. Doing all of this is a bit involved, but Red Hat provides a script and basic instructions that make the process easier.

    Note: I've tested the following in Red Hat Enterprise Linux 7 and 8, though my understanding is that this capability was available in Red Hat Enterprise Linux 6. I have no reason to think that this script would not work in Fedora, CentOS or any other Red Hat distro, but your mileage (as always) may vary.

    First, make your chroot directory:

    # mkdir /chroot
    

    Then run the script from yum that installs the necessary bits:

    # yum --releasever=/ --installroot=/chroot install iputils vim python
    

    The --releasever=/ flag passes the current local release info to initialize a repo in the new --installroot, which defines the new install location. In theory, you could make a chroot jail based on any version of the yum or dnf repos (the script will, however, still start with the current system repos).

    With this tool, you install basic networking utilities (such as iputils) along with the VIM editor and Python. You could add other things initially if you want to, including whatever service you want to run inside this jail. This is also one of the cool things about yum and dependencies: as part of the dependency resolution, yum makes the necessary additions to the filesystem tree along with the libraries. It does, however, leave out a couple of things that you need to add next. I'll get to that in a moment.

    By now, the packages and the dependencies have been installed, and a new GPG key was created for this new repository in relation to this new root filesystem. Next, mount your ephemeral filesystems:

    # mount -t proc proc /chroot/proc/
    # mount -t sysfs sys /chroot/sys/
    

    And set up your dev bindings:

    # mount -o bind /dev/pts /chroot/dev/pts
    

    Note that these mounts will not survive a reboot this way, but this setup will let you test and play with a chroot jail environment.
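    If you do want these mounts to come back after a reboot, entries along these lines could go in /etc/fstab (a sketch mirroring the mount commands above; adjust paths and options to your setup):

    ```
    # /etc/fstab entries for the chroot's ephemeral filesystems (sketch)
    proc      /chroot/proc     proc    defaults  0 0
    sysfs     /chroot/sys      sysfs   defaults  0 0
    /dev/pts  /chroot/dev/pts  none    bind      0 0
    ```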

    Now, test to check that everything is working as you expect:

    # chroot /chroot
    bash-4.2# ls
    bin dev home lib64 mnt proc run srv tmp var boot etc lib media opt root sbin sys usr
    

    You can see that the filesystem and libraries were successfully added:

    bash-4.2# pwd
    /
    bash-4.2# cd ..
    

    From here, you see the correct root and can't navigate up:

    bash-4.2# exit
    exit
    #
    

    Now you've exited the chroot wrapper, which is expected because you entered it from a local login shell as root. Normally, a remote user should not be able to do this, as you saw in the sftp example:

    [skipworthy@milo ~]$ ssh root@showme
    root@showme's password:
    [root@showme1 ~]# chroot /chroot
    bash-4.2#
    

    Note that these directories were all created by root, so that's who owns them. Now, add this chroot to the sshd_config , because this time you will match just this user:

    Match User leo
    ChrootDirectory /chroot
    

    Then, restart sshd .

    You also need to copy the /etc/passwd and /etc/group files from the host system to the /chroot directory:

    [root@showme1 ~]# cp -vf /etc/{passwd,group} /chroot/etc/
    

    Note: If you skip the step above, you can log in, but the result will be unreliable and you'll be prone to errors related to conflicting logins.

    Now for the test:

    [skipworthy@milo ~]$ ssh leo@showme
    leo@showme's password:
    Last login: Thu Jan 30 19:35:36 2020 from 192.168.0.20
    -bash-4.2$ ls
    -bash-4.2$ pwd
    /home/leo
    

    It looks good. Now, can you find something useful to do? Let's have some fun:

    [root@showme1 ~]# yum --releasever=/ --installroot=/chroot install httpd
    

    You could drop the --releasever=/, but I like to leave it in because it leaves fewer chances for unexpected results.

    [root@showme1 ~]# chroot /chroot
    bash-4.2# ls /etc/httpd
    conf conf.d conf.modules.d logs modules run
    bash-4.2# python
    Python 2.7.5 (default, Aug 7 2019, 00:51:29)
    

    So, httpd is there if you want it, but just to demonstrate, you can run a quick one-liner with Python, which you also installed:

    bash-4.2# python -m SimpleHTTPServer 8000
    Serving HTTP on 0.0.0.0 port 8000 ...
    

    And now you have a simple webserver running in a chroot jail. In theory, you can run any number of services from inside the chroot jail and keep them 'contained' and away from other services, allowing you to expose only a part of a larger resource environment without compromising your user's experience.

    New to Linux containers? Download the Containers Primer and learn the basics.

    [Jun 10, 2020] A Beginners Guide to Snaps in Linux - Part 1 by Aaron Kili

    Jun 05, 2020 | www.tecmint.com
    In the past few years, the Linux community has been blessed with some remarkable advancements in the area of package management on Linux systems, especially when it comes to universal or cross-distribution software packaging and distribution. One such advancement is the Snap package format developed by Canonical, the makers of the popular Ubuntu Linux.

    What are Snap Packages?

    Snaps are cross-distribution, dependency-free, and easy to install applications packaged with all their dependencies to run on all major Linux distributions. From a single build, a snap (application) will run on all supported Linux distributions on desktop, in the cloud, and IoT. Supported distributions include Ubuntu, Debian, Fedora, Arch Linux, Manjaro, and CentOS/RHEL.

    Snaps are secure – they are confined and sandboxed so that they do not compromise the entire system. They run under different confinement levels (which is the degree of isolation from the base system and each other). More notably, every snap has an interface carefully selected by the snap's creator, based on the snap's requirements, to provide access to specific system resources outside of their confinement such as network access, desktop access, and more.

    Another important concept in the snap ecosystem is Channels. A channel determines which release of a snap is installed and tracked for updates; it consists of, and is subdivided by, tracks, risk levels, and branches.

    The main components of the snap package management system are: snapd (the background service that manages and maintains snaps), snap (both the application package format and the command-line tool), snapcraft (the framework and command for building snaps), and the snap store (the central repository where snaps are published and downloaded).

    Besides, snaps also update automatically. You can configure when and how updates occur. By default, the snapd daemon checks for updates up to four times a day: each update check is called a refresh . You can also manually initiate a refresh.

    How to Install Snapd in Linux

    As described above, the snapd daemon is the background service that manages and maintains your snap environment on a Linux system, by implementing the confinement policies and controlling the interfaces that allow snaps to access specific system resources. It also provides the snap command and serves many other purposes.

    To install the snapd package on your system, run the appropriate command for your Linux distribution.

    ------------ [On Debian and Ubuntu] ------------ 
    $ sudo apt update 
    $ sudo apt install snapd
    
    ------------ [On Fedora Linux] ------------
    # dnf install snapd                     
    
    ------------ [On CentOS and RHEL] ------------
    # yum install epel-release 
    # yum install snapd             
    
    ------------ [On openSUSE - replace openSUSE_Leap_15.0 with the version] ------------
    $ sudo zypper addrepo --refresh https://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_15.0 snappy
    $ sudo zypper --gpg-auto-import-keys refresh
    $ sudo zypper dup --from snappy
    $ sudo zypper install snapd
    
    ------------ [On Manjaro Linux] ------------
    # pacman -S snapd
    
    ------------ [On Arch Linux] ------------
    # git clone https://aur.archlinux.org/snapd.git
    # cd snapd
    # makepkg -si
    

    After installing snapd on your system, enable the systemd unit that manages the main snap communication socket, using the systemctl commands as follows.

    On Ubuntu and its derivatives, this should be triggered automatically by the package installer.

    $ sudo systemctl enable --now snapd.socket
    

    Note that you can't run the snap command if the snapd.socket is not running. Run the following commands to check if it is active and is enabled to automatically start at system boot.

    $ sudo systemctl is-active snapd.socket
    $ sudo systemctl status snapd.socket
    $ sudo systemctl is-enabled snapd.socket
    

    Check Snapd Service Status

    Next, enable classic snap support by creating a symbolic link between /var/lib/snapd/snap and /snap as follows.

    $ sudo ln -s /var/lib/snapd/snap /snap
    

    To check the version of snapd and snap command-line tool installed on your system, run the following command.

    $ snap version
    

    Check Snapd and Snap Version

    How to Install Snaps in Linux

    The snap command allows you to install, configure, refresh and remove snaps, and interact with the larger snap ecosystem.

    Before installing a snap, you can check if it exists in the snap store. For example, if the application belongs in the category of "chat servers" or "media players", you can run these commands to search for it, which will query the store for available packages in the stable channel.

    $ snap find "chat servers"
    $ snap find "media players"
    

    Find Applications in Snap Store

    To show detailed information about a snap, for example, rocketchat-server, you can specify its name or path. Note that names are looked for both in the snap store and in the installed snaps.

    $ snap info rocketchat-server
    

    Get Info About Application in Snap

    To install a snap on your system, for example, rocketchat-server, run the following command. If no options are provided, a snap is installed tracking the "stable" channel, with strict security confinement.

    $ sudo snap install rocketchat-server
    

    Install Application from Snap Store

    You can opt to install from a different channel: edge, beta, or candidate, for one reason or the other, using the --edge, --beta, or --candidate options respectively. Or use the --channel option and specify the channel you wish to install from.

    $ sudo snap install --edge rocketchat-server        
    $ sudo snap install --beta rocketchat-server
    $ sudo snap install --candidate rocketchat-server
    
    Manage Snaps in Linux

    In this section, we will learn how to manage snaps on a Linux system.

    Viewing Installed Snaps

    To display a summary of snaps installed on your system, use the following command.

    $ snap list
    

    List Installed Snaps

    To list the current revision of a snap being used, specify its name. You can also list all its available revisions by adding the --all option.

    $ snap list mailspring
    OR
    $ snap list --all mailspring
    

    List All Installation Versions of Snap

    Updating and Reverting Snaps

    You can update a specified snap, or all snaps in the system if none are specified, as follows. The refresh command checks the channel being tracked by the snap, then downloads and installs a newer version of the snap if one is available.

    $ sudo snap refresh mailspring
    OR
    $ sudo snap refresh             #update all snaps on the local system
    

    Refresh a Snap

    After updating an app to a new version, you can revert to a previously used version using the revert command. Note that the data associated with the software will also be reverted.

    $ sudo snap revert mailspring
    
    Revert a Snap to Older Version

    Now when you check all revisions of mailspring , the latest revision is disabled and a previously used revision is active.

    $ snap list --all mailspring
    
    Check Revision of Snap

    Disabling/Enabling and Removing Snaps

    You can disable a snap if you do not want to use it. When disabled, a snap's binaries and services will no longer be available; however, all the data will still be there.

    $ sudo snap disable mailspring
    

    If you need to use the snap again, you can enable it back.

    $ sudo snap enable mailspring
    

    To completely remove a snap from your system, use the remove command. By default, all of a snap's revisions are removed.

    $ sudo snap remove mailspring
    

    To remove a specific revision, use the --revision option as follows.

    $ sudo snap remove  --revision=482 mailspring
    

    Note that when you remove a snap , its data (such as internal user, system, and configuration data) is saved by snapd (version 2.39 and higher) as a snapshot and stored on the system for 31 days. If you reinstall the snap within that period, you can restore the data.

    Conclusion

    Snaps are becoming more popular within the Linux community as they provide an easy way to install software on any Linux distribution. In this guide, we have shown how to install and work with snaps in Linux. We covered how to install snapd, install snaps, view installed snaps, update and revert snaps, and disable/enable and remove snaps.

    You can ask questions or reach us via the feedback form below. In the next part of this guide, we will cover managing snaps (commands, aliases, services, and snapshots) in Linux.

    [May 21, 2020] Watchman - A File and Directory Watching Tool for Changes

    May 21, 2020 | www.tecmint.com

    Watchman – A File and Directory Watching Tool for Changes

    by Aaron Kili | Published: March 14, 2019 | Last Updated: April 7, 2020

    Watchman is an open source and cross-platform file watching service that watches files and records changes or performs actions when they occur. It is developed by Facebook and runs on Linux, OS X, FreeBSD, and Solaris. It runs in a client-server model and employs the inotify facility of the Linux kernel to provide powerful notifications.

    In this article, we will explain how to install and use watchman to watch (monitor) files and record when they change in Linux. We will also briefly demonstrate how to watch a directory and invoke a script when it changes.

    Installing Watchman File Watching Service in Linux

    We will install the watchman service from source, so first install these required dependencies: libssl-dev , autoconf , automake , libtool , setuptools , python-devel and libfolly , using the following commands on your Linux distribution.

    ----------- On Debian/Ubuntu ----------- 
    $ sudo apt install autoconf automake build-essential python-setuptools python-dev libssl-dev libtool 
    
    ----------- On RHEL/CentOS -----------
    # yum install autoconf automake python-setuptools python-devel openssl-devel libtool 
    # yum groupinstall 'Development Tools' 
    
    ----------- On Fedora -----------
    $ sudo dnf install autoconf automake python-setuptools python-devel openssl-devel libtool 
    $ sudo dnf groupinstall 'Development Tools'
    

    Once the required dependencies are installed, you can start building watchman by cloning its GitHub repository, moving into the local repository, and then configuring, building, and installing it with the following commands.

    $ git clone https://github.com/facebook/watchman.git
    $ cd watchman
    $ git checkout v4.9.0  
    $ ./autogen.sh
    $ ./configure
    $ make
    $ sudo make install
    
    Watching Files and Directories with Watchman in Linux

    Watchman can be configured in two ways: (1) via the command-line while the daemon is running in background or (2) via a configuration file written in JSON format.
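For the file-based route, Watchman also reads a .watchmanconfig file placed at the root of the watched tree. A minimal sketch (ignore_dirs is one of Watchman's supported settings; the directory names are just examples):

```json
{
  "ignore_dirs": [".git", "node_modules"]
}
```

With this in place, changes under the listed directories are ignored, which cuts noise when watching source trees.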

    To watch a directory (e.g ~/bin ) for changes, run the following command.

    $ watchman watch ~/bin/
    
    Watch a Directory in Linux

    The above command writes a configuration file called state under /usr/local/var/run/watchman/<username>-state/ , in JSON format, as well as a log file called log in the same location.

    You can view the two files using the cat command as shown.

    $ cat /usr/local/var/run/watchman/aaronkilik-state/state
    $ cat /usr/local/var/run/watchman/aaronkilik-state/log
    

    You can also define what action to trigger when a directory being watched changes. For example, in the following command, ' test-trigger ' is the name of the trigger and ~/bin/pav.sh is the script that will be invoked when changes are detected in the directory being monitored.

    For test purposes, the pav.sh script simply creates a file with a timestamp (i.e file.$time.txt ) within the same directory where the script is stored.

    #!/usr/bin/env bash
    # Create an empty file named with the current timestamp
    time=$(date +%Y-%m-%d.%H:%M:%S)
    touch "file.$time.txt"
    

    Save the file and make the script executable as shown.

    $ chmod +x ~/bin/pav.sh
    

    To launch the trigger, run the following command.

    $ watchman -- trigger ~/bin 'test-trigger' -- ~/bin/pav.sh
    
    Create a Trigger on Directory

    When you execute watchman to keep an eye on a directory, it's added to the watch list; to view the list, run the following command.

    $ watchman watch-list
    
    View Watch List

    To view the trigger list for a root , run the following command (replace ~/bin with the root name).

    $ watchman trigger-list ~/bin
    
    Show Trigger List for a Root

    Based on the above configuration, each time the ~/bin directory changes, a file such as file.2019-03-13.23:14:17.txt is created inside it, and you can view the files using the ls command .

    $ ls
    
    Test Watchman Configuration

    Uninstalling Watchman Service in Linux

    If you want to uninstall watchman , move into the source directory and run the following commands:

    $ sudo make uninstall
    $ cd '/usr/local/bin' && rm -f watchman 
    $ cd '/usr/local/share/doc/watchman-4.9.0' && rm -f README.markdown
    

    For more information, visit the Watchman Github repository: https://github.com/facebook/watchman .

    You might also like to read these following related articles.

    1. Swatchdog – Simple Log File Watcher in Real-Time in Linux
    2. 4 Ways to Watch or Monitor Log Files in Real Time
    3. fswatch – Monitors Files and Directory Changes in Linux
    4. Pyintify – Monitor Filesystem Changes in Real Time in Linux
    5. Inav – Watch Apache Logs in Real Time in Linux

    Watchman is an open source file watching service that watches files and records, or triggers actions, when they change. Use the feedback form below to ask questions or share your thoughts with us.


    [May 20, 2020] The mktemp Command Tutorial With Examples For Beginners

    May 20, 2020 | www.ostechnix.com

    Mktemp is part of the GNU coreutils package, so there is nothing to install. We will now look at some practical examples.

    To create a new temporary file, simply run:

    $ mktemp
    

    You will see an output like below:

    /tmp/tmp.U0C3cgGFpk
    

    How To Create temporary file using mktemp command in Linux

    As you can see in the output, a new temporary file with the random name "tmp.U0C3cgGFpk" is created in the /tmp directory. The file is empty.

    You can also create a temporary file with a specified suffix. The following command will create a temporary file with ".txt" extension:

    $ mktemp --suffix ".txt"
    /tmp/tmp.sux7uKNgIA.txt
    

    How about a temporary directory? Yes, it is also possible! To create a temporary directory, use -d option.

    $ mktemp -d
    

    This will create a random empty directory in /tmp folder.

    Sample output:

    /tmp/tmp.PE7tDnm4uN
    

    Create temporary directory using mktemp command in Linux
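A common pattern is to do all the work inside the private temporary directory and then remove the whole tree when finished. A short sketch (report.txt is a hypothetical work product):

```shell
# Work inside a private temporary directory, then remove the whole tree.
workdir=$(mktemp -d)
touch "$workdir/report.txt"   # hypothetical work product
ls "$workdir"
rm -rf "$workdir"
```

This keeps intermediate files away from other users and from the current directory.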

    All files will be created with u+rw permission, and directories with u+rwx , minus umask restrictions. In other words, the resulting file will have read and write permissions for the current user, but no permissions for the group or others. And the resulting directory will have read, write and executable permissions for the current user, but no permissions for groups or others.

    You can verify the file permissions using "ls" command:

    $ ls -al /tmp/tmp.U0C3cgGFpk
    -rw------- 1 sk sk 0 May 14 13:20 /tmp/tmp.U0C3cgGFpk
    

    Verify the directory permissions using "ls" command:

    $ ls -ld /tmp/tmp.PE7tDnm4uN
    drwx------ 2 sk sk 4096 May 14 13:25 /tmp/tmp.PE7tDnm4uN
    

    Check file and directory permissions in Linux




    Create temporary files or directories with custom names using mktemp command

    As I already said, all files and directories are created with random names. We can also create a temporary file or directory with a custom name. To do so, simply add at least 3 consecutive 'X's at the end of the file name like below.

    $ mktemp ostechnixXXX
    ostechnixq70
    

    Similarly, to create directory, just run:

    $ mktemp -d ostechnixXXX
    ostechnixcBO
    

    Please note that if you choose a custom name, the files/directories will be created in the current working directory, not in /tmp . In this case, you need to clean them up manually.
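If you want a custom name but still want the file created under /tmp, the template may include a directory part. A quick sketch:

```shell
# The template can carry a full path, so the file lands in /tmp
# instead of the current working directory.
f=$(mktemp /tmp/ostechnixXXXXXX)
echo "$f"
rm -f "$f"
```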

    Also, as you may have noticed, the X's in the file name are replaced with random characters. You can, however, add any suffix of your choice.

    For instance, I want to add "blog" at the end of the filename. Hence, my command would be:

    $ mktemp ostechnixXXX --suffix=blog
    ostechnixZuZblog
    

    Now we do have the suffix "blog" at the end of the filename.

    If you don't want to create any file or directory, you can perform a dry run like below. The -u option merely prints a name without creating anything.

    $ mktemp -u
    /tmp/tmp.oK4N4U6rDG
    

    For help, run:

    $ mktemp --help
    
    Why do we actually need mktemp?

    You might wonder why we need "mktemp" when we can easily create empty files using the "touch filename" command. The mktemp command is mainly used for creating temporary files/directories with random names, so we don't need to bother figuring out the names. Since mktemp randomizes the names, there won't be any name collisions. Also, mktemp creates files safely with permission 600 (rw) and directories with permission 700 (rwx), so other users can't access them. For more details, check the man pages.
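Putting this together, the classic pattern in shell scripts pairs mktemp with a trap so the temporary file is removed even if the script exits early. A minimal sketch (the scratch data is hypothetical):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Create a private scratch file and guarantee its removal on exit.
tmpfile=$(mktemp)
trap 'rm -f "$tmpfile"' EXIT

printf 'scratch data\n' > "$tmpfile"   # hypothetical intermediate data
wc -l < "$tmpfile"
```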

    $ man mktemp
    

    [May 08, 2020] Configuring Unbound as a simple forwarding DNS server (Enable Sysadmin)

    May 08, 2020 | www.redhat.com

    In part 1 of this article, I introduced you to Unbound , a great name resolution option for home labs and small network environments. We looked at what Unbound is, and we discussed how to install it. In this section, we'll work on the basic configuration of Unbound.

    Basic configuration

    First find and uncomment these two entries in unbound.conf :

    interface: 0.0.0.0
    interface: ::0
    

    Here, the 0.0.0.0 and ::0 entries indicate that we'll be accepting DNS queries on all interfaces. If you have more than one interface in your server and need to manage where DNS is available, you would put the address of that interface here.

    Next, we may want to control who is allowed to use our DNS server. We're going to limit access to the local subnets we're using. It's a good basic practice to be specific when we can:

    access-control: 127.0.0.0/8 allow  # (allow queries from the local host)
    access-control: 192.168.0.0/24 allow
    access-control: 192.168.1.0/24 allow
    

    We also want to add an exception for local, unsecured domains that aren't using DNSSEC validation:

    domain-insecure: "forest.local"
    

    Now I'm going to add my local authoritative BIND server as a stub-zone:

    stub-zone:
            name: "forest"
            stub-addr: 192.168.0.220
            stub-first: yes
    

    If you want or need to use your Unbound server as an authoritative server, you can add a set of local-zone entries that look like this:

    local-zone:  "forest.local." static
    
    local-data: "jupiter.forest"         IN       A        192.168.0.200
    local-data: "callisto.forest"        IN       A        192.168.0.222
    

    These can be any type of record you need locally but note again that since these are all in the main configuration file, you might want to configure them as stub zones if you need authoritative records for more than a few hosts (see above).

    If you were going to use this Unbound server as an authoritative DNS server, you would also want to make sure you have a root hints file, which is the zone file for the root DNS servers.

    Get the file from InterNIC . It is easiest to download it directly where you want it. My preference is usually to go ahead and put it where the other unbound related files are in /etc/unbound :

    wget https://www.internic.net/domain/named.root -O /etc/unbound/root.hints
    

    Then add an entry to your unbound.conf file to let Unbound know where the hints file goes:

    # file to read root hints from.
            root-hints: "/etc/unbound/root.hints"
    

    Finally, we want to add at least one entry that tells Unbound where to forward requests to for recursion. Note that we could forward specific domains to specific DNS servers. In this example, I'm just going to forward everything out to a couple of DNS servers on the Internet:

    forward-zone:
            name: "."
            forward-addr: 1.1.1.1
            forward-addr: 8.8.8.8
    

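Pulling the pieces above together, a minimal unbound.conf for this forwarding setup might look like the following sketch (the addresses and zone names are the examples used in this article):

```
server:
    interface: 0.0.0.0
    interface: ::0
    access-control: 127.0.0.0/8 allow
    access-control: 192.168.0.0/24 allow
    access-control: 192.168.1.0/24 allow
    domain-insecure: "forest.local"
    root-hints: "/etc/unbound/root.hints"

stub-zone:
    name: "forest"
    stub-addr: 192.168.0.220
    stub-first: yes

forward-zone:
    name: "."
    forward-addr: 1.1.1.1
    forward-addr: 8.8.8.8
```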
    Now, as a sanity check, we want to run the unbound-checkconf command, which checks the syntax of our configuration file. We then resolve any errors we find.

    [root@callisto ~]# unbound-checkconf
    /etc/unbound/unbound_server.key: No such file or directory
    [1584658345] unbound-checkconf[7553:0] fatal error: server-key-file: "/etc/unbound/unbound_server.key" does not exist
    

    This error indicates that a key file which is generated at startup does not exist yet, so let's start Unbound and see what happens:

    [root@callisto ~]# systemctl start unbound
    

    With no fatal errors found, we can go ahead and make it start by default at server startup:

    [root@callisto ~]# systemctl enable unbound
    Created symlink from /etc/systemd/system/multi-user.target.wants/unbound.service to /usr/lib/systemd/system/unbound.service.
    

    And you should be all set. Next, let's apply some of our DNS troubleshooting skills to see if it's working correctly.

    First, we need to set our DNS resolver to use the new server:

    [root@showme1 ~]# nmcli con mod ext ipv4.dns 192.168.0.222
    [root@showme1 ~]# systemctl restart NetworkManager
    [root@showme1 ~]# cat /etc/resolv.conf
    # Generated by NetworkManager
    nameserver 192.168.0.222
    [root@showme1 ~]#
    

    Let's run dig and see who we can see:

    [root@showme1 ~]# dig
    ; <<>> DiG 9.11.4-P2-RedHat-9.11.4-9.P2.el7 <<>>
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 36486
    ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 13, AUTHORITY: 0, ADDITIONAL: 1
    
    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 4096
    ;; QUESTION SECTION:
    ;.                              IN      NS
    
    ;; ANSWER SECTION:
    .                       508693  IN      NS      i.root-servers.net.
    <snip>
    

    Excellent! We are getting a response from the new server, and it's recursing us to the root domains. We don't see any errors so far. Now to check on a local host:

    ;; ANSWER SECTION:
    jupiter.forest.         5190    IN      A       192.168.0.200
    

    Great! We are getting the A record from the authoritative server back, and the IP address is correct. What about external domains?

    ;; ANSWER SECTION:
    redhat.com.             3600    IN      A       209.132.183.105
    

    Perfect! If we rerun it, will we get it from the cache?

    ;; ANSWER SECTION:
    redhat.com.             3531    IN      A       209.132.183.105
    
    ;; Query time: 0 msec
    ;; SERVER: 192.168.0.222#53(192.168.0.222)
    

    Note the query time of 0 msec; this indicates that the answer lives on the caching server, so it wasn't necessary to go ask elsewhere. This is the main benefit of a local caching server, as we discussed earlier.

    Wrapping up

    While we did not discuss some of the more advanced features that are available in Unbound, one thing that deserves mention is DNSSEC. DNSSEC is becoming a standard for DNS servers, as it provides an additional layer of protection for DNS transactions. DNSSEC establishes a trust relationship that helps prevent things like spoofing and injection attacks. Although it's beyond the scope of this article, it's worth looking into if you are running a DNS server that faces the public.

    [ Getting started with networking? Check out the Linux networking cheat sheet . ]

    [May 06, 2020] How to Synchronize Directories Using Lsyncd on Ubuntu 20.04

    May 06, 2020 | www.howtoforge.com

    Configure Lsyncd to Synchronize Local Directories

    In this section, we will configure Lsyncd to synchronize /etc/ directory to /mnt/ directory on local system.

    First, create a directory for Lsyncd with the following command:

    mkdir /etc/lsyncd
    

    Next, create a new Lsyncd configuration file and define the source and destination directory that you want to sync.

    nano /etc/lsyncd/lsyncd.conf.lua
    

    Add the following lines:

    settings {
            logfile = "/var/log/lsyncd/lsyncd.log",
            statusFile = "/var/log/lsyncd/lsyncd.status",
       statusInterval = 20,
       nodaemon   = false
    }
    
    sync {
            default.rsync,
            source = "/etc/",
            target = "/mnt"
    }
    

    Save and close the file when you are finished. Then, start the Lsyncd service and enable it to start at boot:

    systemctl start lsyncd
    systemctl enable lsyncd
    

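The sync block above can also be tuned: default.rsync accepts an rsync table controlling how rsync is invoked, plus exclude patterns. A sketch (the option values are illustrative):

```lua
sync {
    default.rsync,
    source  = "/etc/",
    target  = "/mnt",
    exclude = { "*.tmp" },                        -- illustrative pattern
    rsync   = { archive = true, compress = true }
}
```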
    You can also check the status of the Lsyncd service with the following command:

    systemctl status lsyncd
    

    You should see the following output:

    ● lsyncd.service - LSB: lsyncd daemon init script
         Loaded: loaded (/etc/init.d/lsyncd; generated)
         Active: active (running) since Fri 2020-05-01 03:31:20 UTC; 9s ago
           Docs: man:systemd-sysv-generator(8)
        Process: 36946 ExecStart=/etc/init.d/lsyncd start (code=exited, status=0/SUCCESS)
          Tasks: 2 (limit: 4620)
         Memory: 12.5M
         CGroup: /system.slice/lsyncd.service
             ├─36921 /usr/bin/lsyncd -pidfile /var/run/lsyncd.pid /etc/lsyncd/lsyncd.conf.lua
             └─36952 /usr/bin/lsyncd -pidfile /var/run/lsyncd.pid /etc/lsyncd/lsyncd.conf.lua
    
    May 01 03:31:20 ubuntu20 systemd[1]: lsyncd.service: Succeeded.
    May 01 03:31:20 ubuntu20 systemd[1]: Stopped LSB: lsyncd daemon init script.
    May 01 03:31:20 ubuntu20 systemd[1]: Starting LSB: lsyncd daemon init script...
    May 01 03:31:20 ubuntu20 lsyncd[36946]:  * Starting synchronization daemon lsyncd
    May 01 03:31:20 ubuntu20 lsyncd[36951]: 03:31:20 Normal: --- Startup, daemonizing ---
    May 01 03:31:20 ubuntu20 lsyncd[36946]:    ...done.
    May 01 03:31:20 ubuntu20 systemd[1]: Started LSB: lsyncd daemon init script.
    

    You can check the Lsyncd log file for more details as shown below:

    tail -f /var/log/lsyncd/lsyncd.log
    

    You should see the following output:

    /lsyncd/lsyncd.conf.lua
    Fri May  1 03:30:57 2020 Normal: Finished a list after exitcode: 0
    Fri May  1 03:31:20 2020 Normal: --- Startup, daemonizing ---
    Fri May  1 03:31:20 2020 Normal: recursive startup rsync: /etc/ -> /mnt/
    Fri May  1 03:31:20 2020 Normal: Startup of /etc/ -> /mnt/ finished.
    

    You can also check the syncing status with the following command:

    tail -f /var/log/lsyncd/lsyncd.status
    

    You should be able to see the changes in the /mnt directory with the following command:

    ls /mnt/
    

    You should see that all the files and directories from the /etc directory are added to the /mnt directory:

    acpi                    dconf           hosts            logrotate.conf       newt                     rc2.d          subuid-
    adduser.conf            debconf.conf    hosts.allow      logrotate.d          nginx                    rc3.d          sudoers
    alternatives            debian_version  hosts.deny       lsb-release          nsswitch.conf            rc4.d          sudoers.d
    apache2                 default         init             lsyncd               ntp.conf                 rc5.d          sysctl.conf
    apparmor                deluser.conf    init.d           ltrace.conf          openal                   rc6.d          sysctl.d
    apparmor.d              depmod.d        initramfs-tools  lvm                  opt                      rcS.d          systemd
    apport                  dhcp            inputrc          machine-id           os-release               resolv.conf    terminfo
    apt                     dnsmasq.d       insserv.conf.d   magic                overlayroot.conf         rmt            timezone
    at.deny                 docker          iproute2         magic.mime           PackageKit               rpc            tmpfiles.d
    bash.bashrc             dpkg            iscsi            mailcap              pam.conf                 rsyslog.conf   ubuntu-advantage
    bash_completion         e2scrub.conf    issue            mailcap.order        pam.d                    rsyslog.d      ucf.conf
    bash_completion.d       environment     issue.net        manpath.config       passwd                   screenrc       udev
    bindresvport.blacklist  ethertypes      kernel           mdadm                passwd-                  securetty      ufw
    binfmt.d                fonts           kernel-img.conf  mime.types           perl                     security       update-manager
    byobu                   fstab           landscape        mke2fs.conf          php                      selinux        update-motd.d
    ca-certificates         fuse.conf       ldap             modprobe.d           pki                      sensors3.conf  update-notifier
    ca-certificates.conf    fwupd           ld.so.cache      modules              pm                       sensors.d      vdpau_wrapper.cfg
    calendar                gai.conf        ld.so.conf       modules-load.d       polkit-1                 services       vim
    console-setup           groff           ld.so.conf.d     mtab                 pollinate                shadow         vmware-tools
    cron.d                  group           legal            multipath            popularity-contest.conf  shadow-        vtrgb
    cron.daily              group-          letsencrypt      multipath.conf       profile                  shells         vulkan
    cron.hourly             grub.d          libaudit.conf    mysql                profile.d                skel           wgetrc
    cron.monthly            gshadow         libnl-3          nanorc               protocols                sos.conf       X11
    crontab                 gshadow-        locale.alias     netplan              pulse                    ssh            xattr.conf
    cron.weekly             gss             locale.gen       network              python3                  ssl            xdg
    cryptsetup-initramfs    hdparm.conf     localtime        networkd-dispatcher  python3.8                subgid         zsh_command_not_found
    crypttab                host.conf       logcheck         NetworkManager       rc0.d                    subgid-
    dbus-1                  hostname        login.defs       networks             rc1.d                    subuid
    
    
    Configure Lsyncd to Synchronize Remote Directories

    In this section, we will configure Lsyncd to synchronize the /etc/ directory on the local system to the /opt/ directory on the remote system.

    Before starting, you will need to set up SSH key-based authentication between the local system and the remote server so that the local system can connect to the remote server without a password.

    On the local system, run the following command to generate a public and private key:

    ssh-keygen -t rsa
    

    You should see the following output:

    Generating public/private rsa key pair.
    Enter file in which to save the key (/root/.ssh/id_rsa): 
    Enter passphrase (empty for no passphrase): 
    Enter same passphrase again: 
    Your identification has been saved in /root/.ssh/id_rsa
    Your public key has been saved in /root/.ssh/id_rsa.pub
    The key fingerprint is:
    SHA256:c7fhjjhAamFjlk6OkKPhsphMnTZQFutWbr5FnQKSJjE root@ubuntu20
    The key's randomart image is:
    +---[RSA 3072]----+
    | E ..            |
    |  ooo            |
    | oo= +           |
    |=.+ % o . .      |
    |[email protected] oSo. o    |
    |ooo=B o .o o o   |
    |=o.... o    o    |
    |+.    o .. o     |
    |     .  ... .    |
    +----[SHA256]-----+
    

    The above command will generate a private and public key inside ~/.ssh directory.

    Next, you will need to copy the public key to the remote server. You can copy it with the following command:

    ssh-copy-id root@remote-server-ip
    

    You will be asked to provide the password of the remote root user as shown below:

    root@remote-server-ip's password: 
    
    Number of key(s) added: 1
    
    Now try logging into the machine, with:   "ssh 'root@remote-server-ip'"
    and check to make sure that only the key(s) you wanted were added.
    

    Once the user is authenticated, the public key will be appended to the remote user's authorized_keys file and the connection will be closed.

    Now, you should be able to log in to the remote server without entering a password.

    To test it, just try to log in to your remote server via SSH:

    ssh root@remote-server-ip
    

    If everything went well, you will be logged in immediately.

    Next, you will need to edit the Lsyncd configuration file and define the rsyncssh and target host variables:

    nano /etc/lsyncd/lsyncd.conf.lua
    

    Change the file as shown below:

    settings {
            logfile = "/var/log/lsyncd/lsyncd.log",
            statusFile = "/var/log/lsyncd/lsyncd.status",
       statusInterval = 20,
       nodaemon   = false
    }
    
    sync {
            default.rsyncssh,
            source = "/etc/",
            host = "remote-server-ip",
            targetdir = "/opt"
    }
    

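If the remote server listens on a non-standard SSH port, default.rsyncssh accepts an ssh table. A sketch (port 2222 is hypothetical):

```lua
sync {
    default.rsyncssh,
    source    = "/etc/",
    host      = "remote-server-ip",
    targetdir = "/opt",
    ssh       = { port = 2222 }   -- hypothetical non-default port
}
```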
    Save and close the file when you are finished. Then, restart the Lsyncd service to start the sync.

    systemctl restart lsyncd
    

    You can check the status of synchronization with the following command:

    tail -f /var/log/lsyncd/lsyncd.log
    

    You should see the following output:

    Fri May  1 04:32:05 2020 Normal: --- Startup, daemonizing ---
    Fri May  1 04:32:05 2020 Normal: recursive startup rsync: /etc/ -> 45.58.38.21:/opt/
    Fri May  1 04:32:06 2020 Normal: Startup of "/etc/" finished: 0
    

    You should be able to see the changes in the /opt directory on the remote server with the following command:

    ls /opt
    

    You should see that all the files and directories from the /etc directory are added to the remote server's /opt directory:

    acpi                    dconf           hosts            logrotate.conf       newt                     rc2.d          subuid-
    adduser.conf            debconf.conf    hosts.allow      logrotate.d          nginx                    rc3.d          sudoers
    alternatives            debian_version  hosts.deny       lsb-release          nsswitch.conf            rc4.d          sudoers.d
    apache2                 default         init             lsyncd               ntp.conf                 rc5.d          sysctl.conf
    apparmor                deluser.conf    init.d           ltrace.conf          openal                   rc6.d          sysctl.d
    apparmor.d              depmod.d        initramfs-tools  lvm                  opt                      rcS.d          systemd
    apport                  dhcp            inputrc          machine-id           os-release               resolv.conf    terminfo
    apt                     dnsmasq.d       insserv.conf.d   magic                overlayroot.conf         rmt            timezone
    at.deny                 docker          iproute2         magic.mime           PackageKit               rpc            tmpfiles.d
    bash.bashrc             dpkg            iscsi            mailcap              pam.conf                 rsyslog.conf   ubuntu-advantage
    bash_completion         e2scrub.conf    issue            mailcap.order        pam.d                    rsyslog.d      ucf.conf
    bash_completion.d       environment     issue.net        manpath.config       passwd                   screenrc       udev
    bindresvport.blacklist  ethertypes      kernel           mdadm                passwd-                  securetty      ufw
    binfmt.d                fonts           kernel-img.conf  mime.types           perl                     security       update-manager
    byobu                   fstab           landscape        mke2fs.conf          php                      selinux        update-motd.d
    ca-certificates         fuse.conf       ldap             modprobe.d           pki                      sensors3.conf  update-notifier
    ca-certificates.conf    fwupd           ld.so.cache      modules              pm                       sensors.d      vdpau_wrapper.cfg
    calendar                gai.conf        ld.so.conf       modules-load.d       polkit-1                 services       vim
    console-setup           groff           ld.so.conf.d     mtab                 pollinate                shadow         vmware-tools
    cron.d                  group           legal            multipath            popularity-contest.conf  shadow-        vtrgb
    cron.daily              group-          letsencrypt      multipath.conf       profile                  shells         vulkan
    cron.hourly             grub.d          libaudit.conf    mysql                profile.d                skel           wgetrc
    cron.monthly            gshadow         libnl-3          nanorc               protocols                sos.conf       X11
    crontab                 gshadow-        locale.alias     netplan              pulse                    ssh            xattr.conf
    cron.weekly             gss             locale.gen       network              python3                  ssl            xdg
    cryptsetup-initramfs    hdparm.conf     localtime        networkd-dispatcher  python3.8                subgid         zsh_command_not_found
    crypttab                host.conf       logcheck         NetworkManager       rc0.d                    subgid-
    dbus-1                  hostname        login.defs       networks             rc1.d                    subuid
    
    
    Conclusion

    In the above guide, we learned how to install and configure Lsyncd for local and remote synchronization. You can now use Lsyncd in a production environment for backup purposes. Feel free to ask me if you have any questions.

    [May 06, 2020] Lsyncd - Live Syncing (Mirror) Daemon

    May 06, 2020 | axkibe.github.io

    Description

    Lsyncd uses a filesystem event interface (inotify or fsevents) to watch for changes to local files and directories. Lsyncd collates these events for several seconds and then spawns one or more processes to synchronize the changes to a remote filesystem. The default synchronization method is rsync. Thus, Lsyncd is a lightweight live mirror solution. Lsyncd is comparatively easy to install, does not require new filesystems or block devices, and does not hamper local filesystem performance.

    As an alternative to plain rsync, Lsyncd can also push changes via rsync+ssh. Rsync+ssh allows for much more efficient synchronization when a file or directory is renamed or moved to a new location in the local tree. (In contrast, plain rsync performs a move by deleting the old file and then retransmitting the whole file.)

    Fine-grained customization can be achieved through the config file. Custom action configs can even be written from scratch in cascading layers ranging from shell scripts to code written in the Lua language . Thus, simple, powerful and flexible configurations are possible.
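    As an illustration of such a config, a minimal file for the rsync+ssh mode might look like the sketch below (the paths and host name are placeholders, not taken from this article):

```lua
-- Minimal Lsyncd config sketch: mirror /home to remotehost.org via rsync+ssh.
settings {
    logfile    = "/var/log/lsyncd.log",
    statusFile = "/var/log/lsyncd.status",
}

sync {
    default.rsyncssh,
    source    = "/home",
    host      = "remotehost.org",
    targetdir = "/backup/home",
    delay     = 5,   -- collate filesystem events for 5 seconds before syncing
}
```

    Lsyncd is then started with the config file as its argument, e.g. lsyncd /etc/lsyncd/lsyncd.conf.lua.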

    Lsyncd 2.2.1 requires rsync >= 3.1 on all source and target machines.

    License: GPLv2 or any later GPL version.

    When to use

    Lsyncd is designed to synchronize a slowly changing local directory tree to a remote mirror. Lsyncd is especially useful to sync data from a secure area to a not-so-secure area.

    Other synchronization tools

    DRBD operates at the block-device level. This makes it useful for synchronizing systems that are under heavy load. Lsyncd, on the other hand, does not require you to change block devices and/or mount points, allows you to change the uid/gid of the transferred files, and separates the receiver through the one-way nature of rsync. DRBD is likely the better option if you are syncing databases.

    GlusterFS and BindFS use a FUSE-Filesystem to interject kernel/userspace filesystem events.

    Mirror is an asynchronous synchronization tool that makes use of inotify notifications, much like Lsyncd. The main differences are: it is developed specifically for master-master use, and thus runs a daemon on both systems; it uses its own transport layer instead of rsync; and it is written in Java, in contrast to Lsyncd's C core with Lua scripting.

    Lsyncd usage examples
    lsyncd -rsync /home remotehost.org::share/
    

    This watches the local directory /home with all its sub-directories and rsyncs changes to 'remotehost' using the rsync share 'share'.

    lsyncd -rsyncssh /home remotehost.org backup-home/
    

    This will also watch and rsync '/home', but it uses an ssh connection to perform moves locally on the remote host instead of re-transmitting the moved file over the wire.

    Disclaimer

    Besides the usual disclaimer in the license, we want to specifically emphasize that neither the authors, nor any organization associated with the authors, can or will be held responsible for data-loss caused by possible malfunctions of Lsyncd.

    [May 06, 2020] Creating and managing partitions in Linux with parted Enable Sysadmin by Tyler Carrigan

    Apr 30, 2020 | www.redhat.com


    Listing partitions with parted

    The first thing you want to do any time you need to make changes to your disk is find out what partitions you already have. Displaying existing partitions allows you to make informed decisions moving forward and helps you nail down the partition names you will need for future commands. Run the parted command to start parted in interactive mode and list partitions. It defaults to your first listed drive. You will then use the print command to display disk information.

    [root@rhel ~]# parted /dev/sdc
        GNU Parted 3.2
        Using /dev/sdc
        Welcome to GNU Parted! Type 'help' to view a list of commands.
        (parted) print                                                            
        Error: /dev/sdc: unrecognised disk label
        Model: ATA VBOX HARDDISK (scsi)                                           
        Disk /dev/sdc: 1074MB
        Sector size (logical/physical): 512B/512B
        Partition Table: unknown
        Disk Flags:
        (parted)
    

    Creating new partitions with parted

    Now that you can see what partitions are active on the system, you are going to add a new partition to /dev/sdc . You can see in the output above that there is no partition table for this disk, so add one by using the mklabel command. Then use mkpart to add the new partition. You are creating a new primary partition with an ext4 file system. For demonstration purposes, I chose to create a 50 MB partition.

    (parted) mklabel msdos                                                    
        (parted) mkpart                                                           
        Partition type?  primary/extended? primary                                
        File system type?  [ext2]? ext4                                           
        Start? 1                                                                  
        End? 50                                                                   
        (parted)                                                                  
        (parted) print                                                            
        Model: ATA VBOX HARDDISK (scsi)
        Disk /dev/sdc: 1074MB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos
        Disk Flags:
        
        Number  Start   End     Size    Type     File system  Flags
         1      1049kB  50.3MB  49.3MB  primary  ext4         lba
    

    Modifying existing partitions with parted

    Now that you have created the new partition at 50 MB, you can resize it to 100 MB, and then shrink it back to the original 50 MB. First, note the partition number. You can find this information by using the print command. You are then going to use the resizepart command to make the modifications.

    (parted) resizepart                                                       
        Partition number? 1                                                       
        End?  [50.3MB]? 100                                                       
            
        (parted) print                                                            
        Model: ATA VBOX HARDDISK (scsi)
        Disk /dev/sdc: 1074MB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos
        Disk Flags:
        
        Number  Start   End    Size    Type     File system  Flags
         1      1049kB  100MB  99.0MB  primary
    

    You can see in the above output that I resized partition number one from 50 MB to 100 MB. You can then verify the changes with the print command. You can now resize it back down to 50 MB. Keep in mind that shrinking a partition can cause data loss.

        (parted) resizepart                                                       
        Partition number? 1                                                       
        End?  [100MB]? 50                                                         
        Warning: Shrinking a partition can cause data loss, are you sure you want to
        continue?
        Yes/No? yes                                                               
        
        (parted) print
        Model: ATA VBOX HARDDISK (scsi)
        Disk /dev/sdc: 1074MB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos
        Disk Flags:
        
        Number  Start   End     Size    Type     File system  Flags
         1      1049kB  50.0MB  49.0MB  primary
    

    Removing partitions with parted

    Now, let's look at how to remove the partition you created at /dev/sdc1 by using the rm command inside of the parted suite. Again, you will need the partition number, which is found in the print output.

    NOTE: Be sure that you have all of the information correct here; there are no safeguards or "Are you sure?" prompts. When you run the rm command, it deletes the partition number you give it.

        (parted) rm 1                                                             
        (parted) print                                                            
        Model: ATA VBOX HARDDISK (scsi)
        Disk /dev/sdc: 1074MB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos
        Disk Flags:
        
        Number  Start  End  Size  Type  File system  Flags
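
    The whole interactive session above -- label, partition, resize, remove -- can also be scripted with parted's --script mode. A sketch that practises on a disposable image file, which parted accepts in place of a real block device (the file name here is arbitrary):

```shell
# Create a 100 MB scratch "disk" and replay the steps non-interactively.
truncate -s 100M disk.img
parted --script disk.img mklabel msdos
parted --script disk.img mkpart primary ext4 1MiB 50MiB
parted --script disk.img resizepart 1 99MiB   # growing needs no confirmation;
                                              # shrinking prompts and needs ---pretend-input-tty
parted --script disk.img rm 1
parted --script disk.img print
```

    Because --script answers no prompts, double-check device names before pointing it at real hardware.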
    

    [May 06, 2020] How To Change Default Sudo Log File In Linux

    May 04, 2020 | www.ostechnix.com
    The sudo logs are kept in "/var/log/secure" file in RPM-based systems such as CentOS and Fedora.

    To set a dedicated sudo log file in CentOS 8, edit "/etc/sudoers" file using command:

    $ sudo visudo
    

    This command will open the /etc/sudoers file in the Vi editor. Press "i" to enter insert mode and add the following line at the end:

    [...]
    Defaults syslog=local1
    

    Press ESC and type :wq to save and close.

    Next, edit "/etc/rsyslog.conf" file:

    $ sudo nano /etc/rsyslog.conf
    

    Add/modify the following lines (line number 46 and 47):

    [...]
    *.info;mail.none;authpriv.none;cron.none;local1.none   /var/log/messages
    local1.*                /var/log/sudo.log
    [...]
    


    Press CTRL+X followed by Y to save and close the file.

    Restart the rsyslog service for the changes to take effect.

    $ sudo systemctl restart rsyslog
    

    From now on, all sudo attempts will be logged in /var/log/sudo.log file.

    $ sudo cat /var/log/sudo.log
    

    Sample output:

    May 3 17:13:26 centos8 sudo[20191]: ostechnix : TTY=pts/0 ; PWD=/home/ostechnix ; USER=root ; COMMAND=/bin/systemctl restart rsyslog
    May 3 17:13:35 centos8 sudo[20202]: ostechnix : TTY=pts/0 ; PWD=/home/ostechnix ; USER=root ; COMMAND=/bin/systemctl status rsyslog
    May 3 17:13:51 centos8 sudo[20206]: ostechnix : TTY=pts/0 ; PWD=/home/ostechnix ; USER=root ; COMMAND=/bin/yum update
    


    [Apr 21, 2020] Real sysadmins don't sudo by David Both

    Apr 17, 2020 | www.redhat.com
    Or do they? This opinion piece from contributor David Both takes a look at when sudo makes sense, and when it does not.

    A few months ago, I read a very interesting article that contained some good information about a Linux feature that I wanted to learn more about. I won't tell you the name of the article, what it was about, or even the web site on which I read it, but the article just made me shudder.

    The reason I found this article so cringe-worthy is that it prefaced every command with the sudo command. The issue I have with this is that the article is allegedly for sysadmins, and real sysadmins don't use sudo in front of every command they issue. To do so is a gross misuse of the sudo command. I have written about this type of misuse in my book, "The Linux Philosophy for SysAdmins." The following is an excerpt from Chapter 19 of that book.

    In this article, we explore why and how the sudo tool is being misused and how to bypass the configuration that forces one to use sudo instead of working directly as root.

    sudo or not sudo

    Part of being a system administrator is using the tools we have correctly and having them available without any restrictions. In this case, I find that the sudo command is used in a manner for which it was never intended. I have a particular dislike for how the sudo facility is used in some distributions, especially because it is employed to limit and restrict the access of people doing the work of system administration to the tools they need to perform their duties.

    "[SysAdmins] don't use sudo."
    – Paul Venezia

    Venezia explains in his InfoWorld article that sudo is used as a crutch for sysadmins. He does not spend a lot of time defending this position or explaining it. He just states this as a fact. And I agree with him – for sysadmins. We don't need the training wheels in order to do our jobs. In fact, they get in the way.

    Some distros, such as Ubuntu, use the sudo command in a manner that is intended to make the use of commands that require elevated (root) privileges a little more difficult. In these distros, it is not possible to login directly as the root user so the sudo command is used to allow non-root users temporary access to root privileges. This is supposed to make the user a little more careful about issuing commands that need elevated privileges such as adding and deleting users, deleting files that don't belong to them, installing new software, and generally all of the tasks that are required to administer a modern Linux host. Forcing sysadmins to use the sudo command as a preface to other commands is supposed to make working with Linux safer.

    Using sudo in the manner it is by these distros is, in my opinion, a horrible and ineffective attempt to provide novice sysadmins with a false sense of security. It is completely ineffective at providing any level of protection. I can issue commands that are just as incorrect or damaging using sudo as I can when not using it. The distros that use sudo to anesthetize the sense of fear that we might issue an incorrect command are doing sysadmins a great disservice. There is no limit or restriction imposed by these distros on the commands that one might use with the sudo facility. There is no attempt to actually limit the damage that might be done by actually protecting the system from the users and the possibility that they might do something harmful – nor should there be.

    So let's be clear about this -- these distributions expect the user to perform all of the tasks of system administration. They lull the users -- who are really System Administrators -- into thinking that they are somehow protected from the effects of doing anything bad because they must take this restrictive extra step to enter their own password in order to run the commands.

    Bypass sudo

    Distributions that work like this usually lock the password for the root user (Ubuntu is one of these distros). This way no one can login as root and start working unencumbered. Let's look at how this works and then how to bypass it.

    Let me stipulate the setup here so that you can reproduce it if you wish. As an example, I installed Ubuntu 16.04 LTS in a VM using VirtualBox. During the installation, I created a non-root user, student, with a simple password for this experiment.

    Login as the user student and open a terminal session. Let's look at the entry for root in the /etc/shadow file, which is where the encrypted passwords are stored.

    student@machine1:~$ cat /etc/shadow
    cat: /etc/shadow: Permission denied
    

    Permission is denied, so we cannot look at the /etc/shadow file. This is common to all distributions, so that non-privileged users cannot see and access the encrypted passwords. That access would make it possible to use common hacking tools to crack the passwords, so allowing it would be insecure.

    Now let's try to su – to root.

    student@machine1:~$ su -
    Password:
    su: Authentication failure
    

    This attempt to use the su command to elevate our user to root privilege fails because the root account has no password and is locked out. Let's use sudo to look at the /etc/shadow file.

    student@machine1:~$ sudo cat /etc/shadow
    [sudo] password for student: <enter the user password>
    root:!:17595:0:99999:7:::
    <snip>
    student:$6$tUB/y2dt$A5ML1UEdcL4tsGMiq3KOwfMkbtk3WecMroKN/:17597:0:99999:7:::
    <snip>
    

    I have truncated the results to only show the entry for the root and student users. I have also shortened the encrypted password so that the entry will fit on a single line. The fields are separated by colons ( : ) and the second field is the password. Notice that the password field for root is a "bang," known to the rest of the world as an exclamation point ( ! ). This indicates that the account is locked and that it cannot be used.
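
    The locked-account test described above is easy to automate with awk over shadow-format lines; here it is demonstrated against sample lines rather than the real file (on a live system you would pipe in sudo cat /etc/shadow instead):

```shell
# Print "locked" when field 2 of a shadow-format line starts with "!" or "*".
is_locked() { awk -F: '{ print (($2 ~ /^[!*]/) ? "locked" : "has a password") }'; }

echo 'root:!:17595:0:99999:7:::' | is_locked                            # locked
echo 'student:$6$tUB/y2dt$A5ML1UEdcL4t:17597:0:99999:7:::' | is_locked  # has a password
```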

    Now, all we need to do to use the root account as proper sysadmins is to set up a password for the root account.

    student@machine1:~$ sudo su -
    [sudo] password for student: <Enter password for student>
    root@machine1:~# passwd root
    Enter new UNIX password: <Enter new root password>
    Retype new UNIX password: <Re-enter new root password>
    passwd: password updated successfully
    root@machine1:~#
    

    Now we can login directly on a console as root or su – directly to root instead of having to use sudo for each command. Of course, we could just use sudo su – every time we want to login as root – but why bother?

    Please do not misunderstand me. Distributions like Ubuntu and their up- and down-stream relatives are perfectly fine and I have used several of them over the years. When using Ubuntu and related distros, one of the first things I do is set a root password so that I can login directly as root.

    Valid uses for sudo

    The sudo facility does have its uses. The real intent of sudo is to enable the root user to delegate to one or two non-root users access to one or two specific privileged commands that they need on a regular basis. The reasoning behind this is that of the lazy sysadmin: allowing the users access to a command or two that requires elevated privileges, and that they use constantly, many times per day, saves the SysAdmin a lot of requests from the users and eliminates the wait time that the users would otherwise experience. But most non-root users should never have full root access, only the few commands that they need.

    I sometimes need non-root users to run programs that require root privileges. In cases like this, I set up one or two non-root users and authorize them to run that single command. The sudo facility also keeps a log of the user ID of each user that uses it. This might enable me to track down who made an error. That's all it does; it is not a magical protector.
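
    The delegation described here boils down to a one-line sudoers entry. A sketch with a hypothetical user and command (always edit the file via visudo, which syntax-checks it before saving):

```
# /etc/sudoers.d/student -- let one non-root user run exactly one privileged command
student ALL=(root) /usr/bin/systemctl restart rsyslog
```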

    The sudo facility was never intended to be used as a gateway for commands issued by a sysadmin. It cannot check the validity of the command. It does not check to see if the user is doing something stupid. It does not make the system safe from users who have access to all of the commands on the system even if it is through a gateway that forces them to say "please" – That was never its intended purpose.

    "Unix never says please."
    – Rob Pike

    This quote about Unix is just as true about Linux as it is about Unix. We sysadmins login as root when we need to do work as root and we log out of our root sessions when we are done. Some days we stay logged in as root all day long but we always work as root when we need to. We never use sudo because it forces us to type more than necessary in order to run the commands we need to do our jobs. Neither Unix nor Linux asks us if we really want to do something, that is, it does not say "Please verify that you want to do this."

    Yes, I dislike the way some distros use the sudo command. Next time I will explore some valid use cases for sudo and how to configure it for these cases.

    [ Want to test your sysadmin skills? Take a skills assessment today. ]

    [Apr 12, 2020] logging - Change log file name of teraterm log - Stack Overflow

    Apr 12, 2020 | stackoverflow.com

    Change log file name of teraterm log

    pmverma ,

    I would like to change the default log file name of the Tera Term terminal log. What I would like is to automatically create/append the log to a file named like "loggedinhost-teraterm.log".

    I found following ini setting for log file. It also uses strftime to format log filename.

        ; Default Log file name. You can specify strftime format to here.
        LogDefaultName=teraterm "%d %b %Y" .log
        ; Default path to save the log file.
        LogDefaultPath=
        ; Auto start logging with default log file name.
        LogAutoStart=on
    

    I have modified it to include date.

    Is there any way to prefix the hostname to the log file name?

    For example:

    myserver01-teraterm.log
    myserver02-teraterm.logfile
    myserver03-teraterm.log
    

    Romme ,

    I had the same issue and was able to solve my problem by adding &h as below:

        ; Default Log file name. You can specify strftime format to here.
        LogDefaultName=teraterm &h %d %b %y.log
        ; Default path to save the log file.
        LogDefaultPath=C:\Users\Logs
        ; Auto start logging with default log file name.
        LogAutoStart=on


    https://ttssh2.osdn.jp/manual/en/menu/setup-additional.html

    "Log" tab

    View log editor

    Specify the editor that is used to display the log file.

    Default log file name (strftime format)

    Specify the default log file name. It can include strftime format codes.

    &h      Host name(or empty when not connecting)
    &p      TCP port number(or empty when not connecting, not TCP connection)
    &u      Logon user name
    %a      Abbreviated weekday name
    %A      Full weekday name
    %b      Abbreviated month name
    %B      Full month name
    %c      Date and time representation appropriate for locale
    %d      Day of month as decimal number (01 - 31)
    %H      Hour in 24-hour format (00 - 23)
    %I      Hour in 12-hour format (01 - 12)
    %j      Day of year as decimal number (001 - 366)
    %m      Month as decimal number (01 - 12)
    %M      Minute as decimal number (00 -  59)
    %p      Current locale's A.M./P.M. indicator for 12-hour clock
    %S      Second as decimal number (00 - 59)
    %U      Week of year as decimal number, with Sunday as first day of week (00 - 53)
    %w      Weekday as decimal number (0 - 6; Sunday is 0)
    %W      Week of year as decimal number, with Monday as first day of week (00 - 53)
    %x      Date representation for current locale
    %X      Time representation for current locale
    %y      Year without century, as decimal number (00 - 99)
    %Y      Year with century, as decimal number
    %z, %Z  Either the time-zone name or time zone abbreviation, depending on registry settings;
        no characters if time zone is unknown
    %%      Percent sign
    

    example:

    teraterm-&h-%Y%m%d_%H_%M_%S.log
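
    The strftime portion of such a pattern can be previewed with GNU date, which understands the same % codes (&h is a Tera Term extension that is replaced with the host name before strftime runs, so it is substituted by hand here):

```shell
# Preview how the %-codes expand; "myhost" stands in for the &h extension.
date +"teraterm-myhost-%Y%m%d_%H_%M_%S.log"
```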

    [Apr 08, 2020] How to Use rsync and scp Commands in Reverse Mode on Linux

    Highly recommended!
    Apr 08, 2020 | www.2daygeek.com

    by Magesh Maruthamuthu · Last Updated: April 2, 2020

    Typically, you use the rsync command or scp command to copy files from one server to another.

    But if you want to perform these commands in reverse mode, how do you do that?

    Have you tried this? Have you had a chance to do this?

    Why would you want to do that? Under what circumstances should you use it?

    Scenario-1: When you copy a file from "Server-1" to "Server-2" , you must use the rsync or scp command in the standard way.

    Also, you can copy from "Server-2" to "Server-1" if you need to.

    To do so, you must have a password for both systems.

    Scenario-2: You have a jump server and only enabled the ssh key-based authentication to access other servers (you do not have the password for that).

    In this case you are only allowed to access the servers from the jump server and you cannot access the jump server from other servers.

    In this scenario, if you want to copy some files from other servers to the jump server, how do you do that?

    Yes, you can do this using the reverse mode of the scp or rsync command.

    General Syntax of the rsync and scp Command:

    The following is the general syntax of the rsync and scp commands.

    rsync: rsync [Options] [Source_Location] [Destination_Location]
    
    scp: scp [Options] [Source_Location] [Destination_Location]
    
    General syntax of the reverse rsync and scp command:

    The general syntax of the reverse rsync and scp commands is as follows.

    rsync: rsync [Options] [Destination_Location] [Source_Location]
    
    scp: scp [Options] [Destination_Location] [Source_Location]
    
    1) How to Use rsync Command in Reverse Mode with Standard Port

    We will copy the "2daygeek.tar.gz" file from the "Remote Server" to the "Jump Server" using the reverse rsync command with the standard port.


    # rsync -avz -e ssh [email protected]:/root/2daygeek.tar.gz /root/backup
    The authenticity of host 'jump.2daygeek.com (jump.2daygeek.com)' can't be established.
    RSA key fingerprint is 6f:ad:07:15:65:bf:54:a6:8c:5f:c4:3b:99:e5:2d:34.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'jump.2daygeek.com' (RSA) to the list of known hosts.
    [email protected]'s password:
    receiving file list ... done
    2daygeek.tar.gz
    
    sent 42 bytes  received 23134545 bytes  1186389.08 bytes/sec
    total size is 23126674  speedup is 1.00
    

    You can see the file copied using the ls command .

    # ls -h /root/backup/*.tar.gz
    total 125M
    -rw-------   1 root root  23M Oct 26 01:00 2daygeek.tar.gz
    
    2) How to Use rsync Command in Reverse Mode with Non-Standard Port

    We will copy the "2daygeek.tar.gz" file from the "Remote Server" to the "Jump Server" using the reverse rsync command with the non-standard port.

    # rsync -avz -e "ssh -p 11021" [email protected]:/root/backup/weekly/2daygeek.tar.gz /root/backup
    The authenticity of host '[jump.2daygeek.com]:11021 ([jump.2daygeek.com]:11021)' can't be established.
    RSA key fingerprint is 9c:ab:c0:5b:3b:44:80:e3:db:69:5b:22:ba:d6:f1:c9.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '[jump.2daygeek.com]:11021' (RSA) to the list of known hosts.
    [email protected]'s password:
    receiving incremental file list
    2daygeek.tar.gz
    
    sent 30 bytes  received 23134526 bytes  1028202.49 bytes/sec
    total size is 23126674  speedup is 1.00
    
    3) How to Use scp Command in Reverse Mode on Linux

    We will copy the "2daygeek.tar.gz" file from the "Remote Server" to the "Jump Server" using the reverse scp command.

    # scp [email protected]:/root/backup/weekly/2daygeek.tar.gz /root/backup
    



    Magesh Maruthamuthu

    [Mar 23, 2020] Pscp - Transfer-Copy Files to Multiple Linux Servers Using Single Shell by Ravi Saive

    Dec 05, 2015 | www.tecmint.com

    The Pscp utility allows you to transfer/copy files to multiple remote Linux servers from a single terminal with one command. This tool is part of Pssh (Parallel SSH Tools), which provides parallel versions of OpenSSH and other similar tools, such as:

    1. pscp – a utility for copying files in parallel to a number of hosts.
    2. prsync – a utility for efficiently copying files to multiple hosts in parallel.
    3. pnuke – kills processes on multiple remote hosts in parallel.
    4. pslurp – copies files from multiple remote hosts to a central host in parallel.
    When working in a network environment where there are multiple hosts on the network, a System Administrator may find the tools listed above very useful.

    Pscp – Copy Files to Multiple Linux Servers

    In this article, we shall look at some useful examples of the Pscp utility to transfer/copy files to multiple Linux hosts on a network. To use the pscp tool, you need to install the PSSH utility on your Linux system; for the installation of PSSH you can read this article.

    1. How to Install Pssh Tool to Execute Commands on Multiple Linux Servers
    Almost all of the options used with these tools are the same, except for a few that are related to the specific functionality of a given utility.

    How to Use Pscp to Transfer/Copy Files to Multiple Linux Servers

    While using pscp, you need to create a separate file that lists the IP address and SSH port number of each Linux server you need to connect to.

    Let's create a new file called "myscphosts.txt" and add the list of Linux hosts' IP addresses and SSH port (default 22) as shown.
    192.168.0.3:22
    192.168.0.9:22
    
    Once you've added the hosts to the file, it's time to copy files from the local machine to multiple Linux hosts under the /tmp directory with the help of the following command.
    # pscp -h myscphosts.txt -l tecmint -Av wine-1.7.55.tar.bz2 /tmp/
    OR
    # pscp.pssh -h myscphosts.txt -l tecmint -Av wine-1.7.55.tar.bz2 /tmp/
    
    Sample Output
    Warning: do not enter your password if anyone else has superuser
    privileges or access to your account.
    Password: 
    [1] 17:48:25 [SUCCESS] 192.168.0.3:22
    [2] 17:48:35 [SUCCESS] 192.168.0.9:22
    
    Explanation of the options used in the above command:
    1. The -h switch reads hosts from the given file.
    2. The -l switch sets the default username for all hosts that do not define a specific user.
    3. The -A switch tells pscp to ask for a password and pass it to ssh.
    4. The -v switch runs pscp in verbose mode.
    Copy Directories to Multiple Linux Servers If you want to copy an entire directory, use the -r option, which will recursively copy entire directories as shown.
    # pscp -h myscphosts.txt -l tecmint -Av -r Android\ Games/ /tmp/
    OR
    # pscp.pssh -h myscphosts.txt -l tecmint -Av -r Android\ Games/ /tmp/
    
    Sample Output
    Warning: do not enter your password if anyone else has superuser
    privileges or access to your account.
    Password: 
    [1] 17:48:25 [SUCCESS] 192.168.0.3:22
    [2] 17:48:35 [SUCCESS] 192.168.0.9:22
    

    You can view the manual page for pscp or run pscp --help to get help.

    1. Ashwini R says: January 24, 2019 at 7:13 pm

      It didn't work for me as well. I can get into the machine through same ip and port as I've inserted into hosts.txt file. Still i get the below messages:

      [root@node1 ~]# pscp -h myscphosts.txt root -Av LoadKafkaRN.jar /home/
      [1] 13:37:42 [FAILURE] 173.37.29.85:22 Exited with error code 1
      [2] 13:37:42 [FAILURE] 173.37.29.2:22 Exited with error code 1
      [3] 13:37:42 [FAILURE] 173.37.28.176:22 Exited with error code 1
      [4] 13:37:42 [FAILURE] 173.37.28.121:22 Exited with error code 1
      
    2. Ankit Tiwari says: November 28, 2016 at 11:26 am

      Hi,

      I am following this tutorial to copy a file to multiple system but its giving error. The code i am using is

      pscp -h myhost.txt -l zabbix -Av show-image-1920×1080.jpg /home/zabbix/

      but it gives error

      [1] 11:18:50 [FAILURE] 192.168.0.244:22 Exited with error code 1

      • Ravi Saive says: November 28, 2016 at 1:00 pm

        @Ankit,

        Have you placed the correct remote SSH host IP address and port number in the myscphosts.txt file? Please confirm and add correct values, then try again.

      jHz says: September 7, 2016 at 7:18 am

      Hi,

      I am trying to copy one file from 30 hosts to one central computer by following you article but no success.
      I am using pscp command for this purpose:

      pscp -h hosts.txt /camera/1.jpg /camera/1.jpg

      where camera directory has been created already in which 1.jpg exists. It always give me error:

      Exited with error code 1

      I have also tried pscp command to copy file from one host to server:

      pscp -H "192.168.0.101" /camera/1.jpg /camera/1.jpg

      but it also returned me with the same error.

      Any help will be much appreciated.
      Thanks in advance.

    [Mar 23, 2020] Copy Specific File Types While Keeping Directory Structure In Linux by sk

    I think this approach is way too complex. A simpler and more reliable approach is first to create the directory structure and then, as the second stage, copy the files.
    The use of cp command options is interesting, though.
    Notable quotes:
    "... create the intermediate parent directories if needed to preserve the parent directory structure. ..."
    Mar 19, 2020 | www.ostechnix.com

    [Mar 23, 2020] How to setup nrpe for client side monitoring - LinuxConfig.org

    Mar 23, 2020 | linuxconfig.org

    In this tutorial you will learn:

    [Mar 12, 2020] 7 tips to speed up your Linux command line navigation Enable Sysadmin

    Mar 12, 2020 | www.redhat.com

    A bonus shortcut

    You can use the keyboard combination, Alt+. , to repeat the last argument.

    Note: The shortcut is Alt+. (dot).

    $ mkdir /path/to/mydir
    
    $ cd Alt+.
    

    You are now in the /path/to/mydir directory.

    [Mar 05, 2020] Using Ctags with MC

    Mar 05, 2020 | frankhesse.wordpress.com

    I was surprised how capable the Midnight Commander's built-in editor turned out to be. Below is one of the features of mc 4.7, namely the use of the ctags / etags utilities together with mcedit to navigate through the code.

    Code Navigation
    Training
    Support for this functionality appeared in mcedit from version 4.7.0-pre1.
    To use it, you need to index the project directory using the ctags or etags utility; to do this, run the following commands:

    $ cd /home/user/projects/myproj
    $ find . -type f -name "*.[ch]" | etags -lc --declarations -

    or
    $ find . -type f -name "*.[ch]" | ctags --c-kinds=+p --fields=+iaS --extra=+q -e -L-



    After the utility completes, a TAGS file will appear in the root directory of our project, which mcedit will use.
    That is practically all that needs to be done for mcedit to find the definitions of the functions, variables, or object properties in the project under study.

    Using
    Imagine that we need to determine the place where the definition of the locked property of an edit object is located in some source code of a rather large project.


    /* Succesful, so unlock both files */
    if (different_filename) {
    if (save_lock)
    edit_unlock_file (exp);
    if (edit->locked)
    edit->locked = edit_unlock_file (edit->filename);
    } else {
    if (edit->locked || save_lock)
    edit->locked = edit_unlock_file (edit->filename);
    }


    To do this, put the cursor at the end of the word locked and press Alt+Enter; a list of possible options appears, as in the screenshot below.
    image

    After selecting the desired option, we get to the line with the definition.

    [Mar 05, 2020] How to switch the editor in mc (midnight commander) from nano to mcedit?

    Jan 01, 2014 | askubuntu.com




    sdu ,

    Using ubuntu 10.10 the editor in mc (midnight commander) is nano. How can i switch to the internal mc editor (mcedit)?

    Isaiah ,

    Press the following keys in order, one at a time:
    1. F9 Activates the top menu.
    2. o Selects the Option menu.
    3. c Opens the configuration dialog.
    4. i Toggles the use internal edit option.
    5. s Saves your preferences.

    Hurnst , 2014-06-21 02:34:51

    Run MC as usual. On the command line right above the bottom row of menu selections type select-editor . This should open a menu with a list of all of your installed editors. This is working for me on all my current linux machines.

    , 2010-12-09 18:07:18

    You can also change the standard editor. Open a terminal and type this command:
    sudo update-alternatives --config editor
    

    You will get an list of the installed editors on your system, and you can chose your favorite.

    AntonioK , 2015-01-27 07:06:33

    If you want to leave mc and system settings as it is now, you may just run it like
    $ EDITOR=mcedit mc
    

    > ,

    Open Midnight Commander, go to Options -> Configuration and check "use internal editor" Hit save and you are done.

    [Mar 05, 2020] How to change your hostname in Linux Enable Sysadmin

    Notable quotes:
    "... pretty ..."
    "... transient ..."
    "... Want to try out Red Hat Enterprise Linux? Download it now for free. ..."
    Mar 05, 2020 | www.redhat.com

    How to change your hostname in Linux What's in a name, you ask? Everything. It's how other systems, services, and users "see" your system.

    Posted March 3, 2020 | by Tyler Carrigan (Red Hat)


    Your hostname is a vital piece of system information that you need to keep track of as a system administrator. Hostnames are the designations by which we separate systems into easily recognizable assets. This information is especially important to make a note of when working on a remotely managed system. I have experienced multiple instances of companies changing the hostnames or IPs of storage servers and then wondering why their data replication broke. There are many ways to change your hostname in Linux; however, in this article, I'll focus on changing your name as viewed by the network (specifically in Red Hat Enterprise Linux and Fedora).

    Background

    A quick bit of background. Before the invention of DNS, your computer's hostname was managed through the HOSTS file located at /etc/hosts . Anytime that a new computer was connected to your local network, all other computers on the network needed to add the new machine into the /etc/hosts file in order to communicate over the network. As this method did not scale with the transition into the world wide web era, DNS was a clear way forward. With DNS configured, your systems are smart enough to translate unique IPs into hostnames and back again, ensuring that there is little confusion in web communications.
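    The HOSTS-file mechanism described above is just a plain-text table of IP-to-name mappings, one entry per line. A sketch, using a scratch file and hypothetical addresses rather than the real /etc/hosts:

```shell
# A minimal /etc/hosts-style table, written to a scratch file for illustration
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1   localhost
192.168.0.3 web01.example.com web01
EOF

# Before DNS, name resolution was effectively a search of this file:
grep -w web01 /tmp/hosts.sample
```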

    Modern Linux systems have three different types of hostnames configured: the static hostname (stored in /etc/hostname), the transient hostname (the kernel's running value, which DHCP or mDNS may update), and the pretty hostname (a free-form, human-readable name). To minimize confusion, I list them here and provide basic information on each as well as a personal best practice:

    It is recommended to pick a pretty hostname that is unique and not easily confused with other systems. Allow the transient and static names to be variations on the pretty, and you will be good to go in most circumstances.

    Working with hostnames

    Now, let's look at how to view your current hostname. The most basic command used to see this information is hostname -f . This command displays the system's fully qualified domain name (FQDN). To relate back to the three types of hostnames, this is your transient hostname. A better way, at least in terms of the information provided, is to use the systemd command hostnamectl to view your transient hostname and other system information:


    Before moving on from the hostname command, I'll show you how to use it to change your transient hostname. Using hostname <x> (where x is the new hostname), you can change your network name quickly, but be careful. I once changed the hostname of a customer's server by accident while trying to view it. That was a small but painful error that I overlooked for several hours. You can see that process below:


    It is also possible to use the hostnamectl command to change your hostname. This command, in conjunction with the right flags, can be used to alter all three types of hostnames. As stated previously, for the purposes of this article, our focus is on the transient hostname. The command and its output look something like this:


    The final method to look at is the sysctl command. This command allows you to change the kernel parameter for your transient name without having to reboot the system. That method looks something like this:
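    The original commands were shown as screenshots, so here is a hedged sketch of what the article walks through; the example names are hypothetical, and the commands that change state are left commented out because they require root and modify the system:

```shell
# View the current hostname as the kernel reports it (the transient name)
uname -n

# hostname -f prints the fully qualified domain name;
# hostnamectl (systemd) shows all three hostname types plus system info.

# Changing names -- root required, so left commented:
# hostname tempname.example.com                    # transient, reverts at reboot
# hostnamectl set-hostname server01.example.com    # static
# hostnamectl set-hostname "Shiny Box" --pretty    # pretty
# hostnamectl set-hostname tempname --transient    # transient
# sysctl kernel.hostname=tempname.example.com      # the kernel-parameter route
```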

    GNOME tip

    Using GNOME, you can go to Settings -> Details to view and change the static and pretty hostnames. See below:

    Wrapping up

    I hope that you found this information useful as a quick and easy way to manipulate your machine's network-visible hostname. Remember to always be careful when changing system hostnames, especially in enterprise environments, and to document changes as they are made.

    Want to try out Red Hat Enterprise Linux? Download it now for free.

    [Mar 05, 2020] Micro data center

    Mar 05, 2020 | en.wikipedia.org

    A micro data center ( MDC ) is a smaller or containerized (modular) data center architecture that is designed for computer workloads not requiring traditional facilities. Whereas the size may vary from rack to container, a micro data center may include fewer than four servers in a single 19-inch rack. It may come with built-in security systems, cooling systems, and fire protection. Typically there are standalone rack-level systems containing all the components of a 'traditional' data center, [1] including in-rack cooling, power supply, power backup, security, fire and suppression. Designs exist where energy is conserved by means of temperature chaining , in combination with liquid cooling. [2]

    In mid-2017, technology introduced by the DOME project was demonstrated enabling 64 high-performance servers, storage, networking, power and cooling to be integrated in a 2U 19" rack-unit. This packaging, sometimes called 'datacenter-in-a-box' allows deployments in spaces where traditional data centers do not fit, such as factory floors ( IOT ) and dense city centers, especially for edge-computing and edge-analytics.

    MDCs are typically portable and provide plug and play features. They can be rapidly deployed indoors or outdoors, in remote locations, for a branch office, or for temporary use in high-risk zones. [3] They enable distributed workloads , minimizing downtime and increasing speed of response.

    [Mar 05, 2020] What's next for data centers Think micro data centers by Larry Dignan

    Apr 14, 2019 | www.zdnet.com

    A micro data center, a mini version of a data center rack, could work as edge computing takes hold in various industries. Here's a look at the moving parts behind the micro data center concept.

    [Mar 05, 2020] The 3-2-1 rule for backups says there should be at least three copies or versions of data stored on two different pieces of media, one of which is off-site

    Mar 05, 2020 | www.networkworld.com

    As the number of places where we store data increases, the basic concept of what is referred to as the 3-2-1 rule often gets forgotten. This is a problem, because the 3-2-1 rule is easily one of the most foundational concepts for designing backup systems. It's important to understand why the rule was created, and how it's currently being interpreted in an increasingly tapeless world.

    What is the 3-2-1 rule for backup?

    The 3-2-1 rule says there should be at least three copies or versions of data stored on two different pieces of media, one of which is off-site. Let's take a look at each of the three elements and what it addresses.
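    As a concrete illustration, here is a minimal sketch of the rule using scratch directories to stand in for the three locations; in real life the second copy lives on different media and the third copy goes off-site, e.g. via rsync over SSH to a (hypothetical) remote host:

```shell
# Copy 1: the live data itself (a scratch directory for illustration)
mkdir -p /tmp/321demo/data /tmp/321demo/second-media /tmp/321demo/offsite
echo "important payload" > /tmp/321demo/data/file.txt

# Copy 2: a backup on a second piece of media (a second disk, tape, or USB drive)
cp -a /tmp/321demo/data/. /tmp/321demo/second-media/

# Copy 3: an off-site copy; in practice something like
#   rsync -a /tmp/321demo/data/ backup@offsite.example.com:/backups/
cp -a /tmp/321demo/data/. /tmp/321demo/offsite/
```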

    Mind the air gap

    An air gap is a way of securing a copy of data by placing it on a machine on a network that is physically separate from the data it is backing up. It literally means there is a gap of air between the primary and the backup. This air gap accomplishes more than simple disaster recovery; it is also very useful for protecting against hackers.

    If all backups are accessible via the same computers that might be attacked, it is possible that a hacker could use a compromised server to attack your backup server. By separating the backup from the primary via an air gap, you make it harder for a hacker to pull that off. It's still not impossible, just harder.

    Everyone wants an air gap. The discussion these days is how to accomplish an air gap without using tapes. Back in the days of tape backup, it was easy to provide an air gap. You made a backup copy of your data and put it in a box, then you handed it to an Iron Mountain driver. Instantly, there was a gap of air between your primary and your backup. It was close to impossible for a hacker to attack both the primary and the backup.

    That is not to say it was impossible; it just made it harder. For hackers to attack your secondary copy, they needed to resort to a physical attack via social engineering. You might think that tapes stored in an off-site storage facility would be impervious to a physical attack via social engineering, but that is definitely not the case. (I have personally participated in white hat attacks of off-site storage facilities, successfully penetrated them and been left unattended with other people's backups.) Most hackers don't resort to physical attacks because they are just too risky, so air-gapping backups greatly reduces the risk that they will be compromised.

    Faulty 3-2-1 implementations

    Many things that pass for backup systems now do not pass even the most liberal interpretation of the 3-2-1 rule. A perfect example of this would be various cloud-based services that store the backups on the same servers and the same storage facility that they are protecting, ignoring the "2" and the "1" in this important rule.

    [Mar 05, 2020] Cloud computing More costly, complicated and frustrating than expected by Daphne Leprince-Ringuet

    Highly recommended!
    Costs estimate in optimistic spreadsheets and cost in actual life for large scale moves tot he cloud are very different. Now companies that jumped into cloud bandwagon discover that saving are illusionary and control over infrastructure is difficult. As well as cloud provider now control their future.
    Notable quotes:
    "... On average, businesses started planning their migration to the cloud in 2015, and kicked off the process in 2016. According to the report, one reason clearly stood out as the push factor to adopt cloud computing : 61% of businesses started the move primarily to reduce the costs of keeping data on-premises. ..."
    "... Capita's head of cloud and platform Wasif Afghan told ZDNet: "There has been a sort of hype about cloud in the past few years. Those who have started migrating really focused on cost saving and rushed in without a clear strategy. Now, a high percentage of enterprises have not seen the outcomes they expected. ..."
    "... The challenges "continue to spiral," noted Capita's report, and they are not going away; what's more, they come at a cost. Up to 58% of organisations said that moving to the cloud has been more expensive than initially thought. The trend is not only confined to the UK: the financial burden of moving to the cloud is a global concern. Research firm Canalys found that organisations splashed out a record $107 billion (£83 billion) for cloud computing infrastructure last year, up 37% from 2018, and that the bill is only set to increase in the next five years. Afghan also pointed to recent research by Gartner, which predicted that through 2020, 80% of organisations will overshoot their cloud infrastructure budgets because of their failure to manage cost optimisation. ..."
    "... Clearly, the escalating costs of switching to the cloud is coming as a shock to some businesses - especially so because they started the move to cut costs. ..."
    "... As a result, IT leaders are left feeling frustrated and underwhelmed by the promises of cloud technology ..."
    Feb 27, 2020 | www.zdnet.com


    A new report by Capita shows that UK businesses are growing disillusioned by their move to the cloud. It might be because they are focusing too much on the wrong goals. Migrating to the cloud seems to be on every CIO's to-do list these days. But despite the hype, almost 60% of UK businesses think that cloud has over-promised and under-delivered, according to a report commissioned by consulting company Capita.

    The research surveyed 200 IT decision-makers in the UK, and found that an overwhelming nine in ten respondents admitted that cloud migration has been delayed in their organisation due to "unforeseen factors".

    On average, businesses started planning their migration to the cloud in 2015, and kicked off the process in 2016. According to the report, one reason clearly stood out as the push factor to adopt cloud computing : 61% of businesses started the move primarily to reduce the costs of keeping data on-premises.

    But with organisations setting aside only one year to prepare for migration, which the report described as "less than adequate planning time," it is no surprise that most companies have encountered stumbling blocks on their journey to the cloud.

    Capita's head of cloud and platform Wasif Afghan told ZDNet: "There has been a sort of hype about cloud in the past few years. Those who have started migrating really focused on cost saving and rushed in without a clear strategy. Now, a high percentage of enterprises have not seen the outcomes they expected. "

    Four years later, in fact, less than half (45%) of the companies' workloads and applications have successfully migrated, according to Capita. A meager 5% of respondents reported that they had not experienced any challenge in cloud migration; but their fellow IT leaders blamed security issues and the lack of internal skills as the main obstacles they have had to tackle so far.

    Half of respondents said that they had to re-architect more workloads than expected to optimise them for the cloud. Afghan noted that many businesses have adopted a "lift and shift" approach, taking everything they were storing on premises and shifting it into the public cloud. "Except in some cases, you need to re-architect the application," said Afghan, "and now it's catching up with organisations."

    The challenges "continue to spiral," noted Capita's report, and they are not going away; what's more, they come at a cost. Up to 58% of organisations said that moving to the cloud has been more expensive than initially thought. The trend is not only confined to the UK: the financial burden of moving to the cloud is a global concern. Research firm Canalys found that organisations splashed out a record $107 billion (£83 billion) for cloud computing infrastructure last year, up 37% from 2018, and that the bill is only set to increase in the next five years. Afghan also pointed to recent research by Gartner, which predicted that through 2020, 80% of organisations will overshoot their cloud infrastructure budgets because of their failure to manage cost optimisation.

    Infrastructure, however, is not the only cost of moving to the cloud. IDC analysed the overall spending on cloud services, and predicted that investments will reach $500 billion (£388.4 billion) globally by 2023. Clearly, the escalating costs of switching to the cloud is coming as a shock to some businesses - especially so because they started the move to cut costs.

    Afghan said: "From speaking to clients, it is pretty clear that cloud expense is one of their chief concerns. The main thing on their minds right now is how to control that spend." His response to them, he continued, is better planning. "If you decide to move an application in the cloud, make sure you architect it so that you get the best return on investment," he argued. "And then monitor it. The cloud is dynamic - it's not a one-off event."

    Capita's research did find that IT leaders still have faith in the cloud, with the majority (86%) of respondents agreeing that the benefits of the cloud will outweigh its downsides. But on the other hand, only a third of organisations said that labour and logistical costs have decreased since migrating; and a minority (16%) said they were "extremely satisfied" with the move.

    "Most organisations have not yet seen the full benefits or transformative potential of their cloud investments," noted the report.

    As a result, IT leaders are left feeling frustrated and underwhelmed by the promises of cloud technology ...


    [Mar 05, 2020] How to tell if you're using a bash builtin in Linux

    Mar 05, 2020 | www.networkworld.com

    One quick way to determine whether the command you are using is a bash built-in or not is to use the command "command". Yes, the command is called "command". Try it with a -V (capital V) option like this:

    $ command -V command
    command is a shell builtin
    $ command -V echo
    echo is a shell builtin
    $ command -V date
    date is hashed (/bin/date)
    

    When you see a "command is hashed" message like the one above, that means that the command has been put into a hash table for quicker lookup.

    How to tell what shell you're currently using

    If you switch shells you can't depend on $SHELL to tell you what shell you're currently using because $SHELL is just an environment variable that is set when you log in and doesn't necessarily reflect your current shell. Try ps -p $$ instead as shown in these examples:

    $ ps -p $$
      PID TTY          TIME CMD
    18340 pts/0    00:00:00 bash    <==
    $ /bin/dash
    $ ps -p $$
      PID TTY          TIME CMD
    19517 pts/0    00:00:00 dash    <==
    
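    The trick works because $$ expands to the process ID of the current shell, so ps -p $$ looks up the very process you are typing into; a quick check:

```shell
# $$ is the PID of the shell executing this line;
# ps -p $$ then reports that PID's command name in the CMD column
echo "current shell PID: $$"
```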

    Built-ins are extremely useful and give each shell a lot of its character. If you use some particular shell all of the time, it's easy to lose track of which commands are part of your shell and which are not.

    Differentiating a shell built-in from a Linux executable requires only a little extra effort.
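    A related check, assuming a POSIX shell, is the type builtin, which reports the same classification (builtin, alias, function, or external file); in bash, type -a additionally lists every match on PATH:

```shell
# 'type' classifies a command name
type echo    # echo is a shell builtin
type cd      # cd is a shell builtin
```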

    [Mar 05, 2020] Bash IDE - Visual Studio Marketplace

    Notable quotes:
    "... all your shell scripts ..."
    Mar 05, 2020 | marketplace.visualstudio.com
    Bash IDE

    Visual Studio Code extension utilizing the bash language server , that is based on Tree Sitter and its grammar for Bash and supports explainshell integration.

    Features Configuration

    To get documentation for flags on hover (thanks to explainshell), run the explainshell Docker container :

    docker run --rm --name bash-explainshell -p 5000:5000 chrismwendt/codeintel-bash-with-explainshell
    

    And add this to your VS Code settings:

        "bashIde.explainshellEndpoint": "http://localhost:5000",
    

    For security reasons, it defaults to "" , which disables explainshell integration. When set, this extension will send requests to the endpoint and displays documentation for flags.

    Once https://github.com/idank/explainshell/pull/125 is merged, it would be possible to set this to "https://explainshell.com" , however doing this is not recommended as it will leak all your shell scripts to a third party -- do this at your own risk, or better always use a locally running Docker image.

    [Mar 04, 2020] A command-line HTML pretty-printer Making messy HTML readable - Stack Overflow

    Jan 01, 2019 | stackoverflow.com

    A command-line HTML pretty-printer: Making messy HTML readable [closed]


    knorv ,


    jonjbar ,

    Have a look at the HTML Tidy Project: http://www.html-tidy.org/

    The granddaddy of HTML tools, with support for modern standards.

    There used to be a fork called tidy-html5 which since became the official thing. Here is its GitHub repository .

    Tidy is a console application for Mac OS X, Linux, Windows, UNIX, and more. It corrects and cleans up HTML and XML documents by fixing markup errors and upgrading legacy code to modern standards.

    For your needs, here is the command line to call Tidy:

    tidy inputfile.html
    

    Paul Brit ,

    Update 2018: The homebrew/dupes is now deprecated, tidy-html5 may be directly installed.
    brew install tidy-html5
    

    Original reply:

    Tidy from OS X doesn't support HTML5 . But there is experimental branch on Github which does.

    To get it:

     brew tap homebrew/dupes
     brew install tidy --HEAD
     brew untap homebrew/dupes
    

    That's it! Have fun!

    Boris , 2019-11-16 01:27:35

    Error: No available formula with the name "tidy" . brew install tidy-html5 works. – Pysis Apr 4 '17 at 13:34

    [Feb 29, 2020] files - How to get over device or resource busy

    Jan 01, 2011 | unix.stackexchange.com

    ripper234 , 2011-04-13 08:51:26

    I tried to rm -rf a folder, and got "device or resource busy".

    In Windows, I would have used LockHunter to resolve this. What's the linux equivalent? (Please give as answer a simple "unlock this" method, and not complete articles like this one . Although they're useful, I'm currently interested in just ASimpleMethodThatWorks)

    camh , 2011-04-13 09:22:46

    The tool you want is lsof , which stands for list open files .

    It has a lot of options, so check the man page, but if you want to see all open files under a directory:

    lsof +D /path

    That will recurse through the filesystem under /path , so beware doing it on large directory trees.

    Once you know which processes have files open, you can exit those apps, or kill them with the kill(1) command.

    kip2 , 2014-04-03 01:24:22

    sometimes it's the result of mounting issues, so I'd unmount the filesystem or directory you're trying to remove:

    umount /path

    BillThor ,

    I use fuser for this kind of thing. It will list which process is using a file or files within a mount.

    user73011 ,

    Here is the solution:
    1. Go into the directory and type ls -a
    2. You will find a .xyz file
    3. vi .xyz and look into what is the content of the file
    4. ps -ef | grep username
    5. You will see the .xyz content in the 8th column (last row)
    6. kill -9 job_ids - where job_ids is the value of the 2nd column of corresponding error caused content in the 8th column
    7. Now try to delete the folder or file.

    Choylton B. Higginbottom ,

    I had this same issue, built a one-liner starting with @camh recommendation:
    lsof +D ./ | awk '{print $2}' | tail -n +2 | xargs kill -9
    

    The awk command grabs the PIDS. The tail command gets rid of the pesky first entry: "PID". I used -9 on kill, others might have safer options.

    user5359531 ,

    I experience this frequently on servers that have NFS network file systems. I am assuming it has something to do with the filesystem, since the files are typically named like .nfs000000123089abcxyz .

    My typical solution is to rename or move the parent directory of the file, then come back later in a day or two and the file will have been removed automatically, at which point I am free to delete the directory.

    This typically happens in directories where I am installing or compiling software libraries.

    gloriphobia , 2017-03-23 12:56:22

    I had this problem when an automated test created a ramdisk. The commands suggested in the other answers, lsof and fuser , were of no help. After the tests I tried to unmount it and then delete the folder. I was really confused for ages because I couldn't get rid of it -- I kept getting "Device or resource busy" !

    By accident I found out how to get rid of a ramdisk. I had to unmount it the same number of times that I had run the mount command, i.e. sudo umount path

    Due to the fact that it was created using automated testing, it got mounted many times, hence why I couldn't get rid of it by simply unmounting it once after the tests. So, after I manually unmounted it lots of times it finally became a regular folder again and I could delete it.

    Hopefully this can help someone else who comes across this problem!

    bil , 2018-04-04 14:10:20

    Riffing off of Prabhat's question above, I had this issue in macos high sierra when I stranded an encfs process, rebooting solved it, but this
    ps -ef | grep name-of-busy-dir
    

    Showed me the process and the PID (column two).

    sudo kill -15 pid-here
    

    fixed it.

    Prabhat Kumar Singh , 2017-08-01 08:07:36

    If you have the server accessible, Try

    Deleting that dir from the server

    Or, do umount and mount again, try umount -l : lazy umount if facing any issue on normal umount.

    I too had this problem where

    lsof +D path : gives no output

    ps -ef : gives no relevant information

    [Feb 28, 2020] linux - Convert a time span in seconds to formatted time in shell - Stack Overflow

    Jan 01, 2012 | stackoverflow.com



    Darren , 2012-11-16 18:59:53

    I have a variable of $i which is seconds in a shell script, and I am trying to convert it to 24 HOUR HH:MM:SS. Is this possible in shell?

    sampson-chen , 2012-11-16 19:17:51

    Here's a fun hacky way to do exactly what you are looking for =)
    date -u -d @${i} +"%T"

    Explanation: -u makes date work in UTC, -d @${i} interprets ${i} as seconds since the Unix epoch, and +"%T" formats the result as HH:MM:SS.

    glenn jackman ,

    Another approach: arithmetic
    i=6789
    ((sec=i%60, i/=60, min=i%60, hrs=i/60))
    timestamp=$(printf "%d:%02d:%02d" $hrs $min $sec)
    echo $timestamp
    

    produces 1:53:09

    Alan Tam , 2014-02-17 06:48:21

    The -d argument applies to date from coreutils (Linux) only.

    In BSD/OS X, use

    date -u -r $i +%T

    kossboss , 2015-01-07 13:43:36

    Here are my algo/script helpers on my site: http://ram.kossboss.com/seconds-to-split-time-convert/ I used this elegant algo from here: Convert seconds to hours, minutes, seconds
    convertsecs() {
     ((h=${1}/3600))
     ((m=(${1}%3600)/60))
     ((s=${1}%60))
     printf "%02d:%02d:%02d\n" $h $m $s
    }
    TIME1="36"
    TIME2="1036"
    TIME3="91925"
    
    echo $(convertsecs $TIME1)   # 00:00:36
    echo $(convertsecs $TIME2)   # 00:17:16
    echo $(convertsecs $TIME3)   # 25:32:05
    

    Example of my second to day, hour, minute, second converter:

    # convert seconds to day-hour:min:sec
    convertsecs2dhms() {
     ((d=${1}/(60*60*24)))
     ((h=(${1}%(60*60*24))/(60*60)))
     ((m=(${1}%(60*60))/60))
     ((s=${1}%60))
     printf "%02d-%02d:%02d:%02d\n" $d $h $m $s
     # PRETTY OUTPUT: uncomment below printf and comment out above printf if you want prettier output
     # printf "%02dd %02dh %02dm %02ds\n" $d $h $m $s
    }
    # setting test variables: testing some constant variables & evaluated variables
    TIME1="36"
    TIME2="1036"
    TIME3="91925"
    # one way to output results
    ((TIME4=$TIME3*2)) # 183850
    ((TIME5=$TIME3*$TIME1)) # 3309300
    ((TIME6=100*86400+3*3600+40*60+31)) # 8653231 s = 100 days + 3 hours + 40 min + 31 sec
    # outputting results: another way to show results (via echo & command substitution with backticks)
    echo $TIME1 - `convertsecs2dhms $TIME1`
    echo $TIME2 - `convertsecs2dhms $TIME2`
    echo $TIME3 - `convertsecs2dhms $TIME3`
    echo $TIME4 - `convertsecs2dhms $TIME4`
    echo $TIME5 - `convertsecs2dhms $TIME5`
    echo $TIME6 - `convertsecs2dhms $TIME6`
    
    # OUTPUT WOULD BE LIKE THIS (If none pretty printf used): 
    # 36 - 00-00:00:36
    # 1036 - 00-00:17:16
    # 91925 - 01-01:32:05
    # 183850 - 02-03:04:10
    # 3309300 - 38-07:15:00
    # 8653231 - 100-03:40:31
    # OUTPUT WOULD BE LIKE THIS (If pretty printf used): 
    # 36 - 00d 00h 00m 36s
    # 1036 - 00d 00h 17m 16s
    # 91925 - 01d 01h 32m 05s
    # 183850 - 02d 03h 04m 10s
    # 3309300 - 38d 07h 15m 00s
    # 8653231 - 100d 03h 40m 31s
    

    Basile Starynkevitch ,

    If $i represents some date in seconds since the Epoch, you could display it with
      date -u -d @$i +%H:%M:%S

    but you seem to assume that $i is an interval (i.e. some duration), not a date, in which case I don't understand what you want.

    Shilv , 2016-11-24 09:18:57

    I use C shell, like this:
    #! /bin/csh -f
    
    set begDate_r = `date +%s`
    set endDate_r = `date +%s`
    
    set secs = `echo "$endDate_r - $begDate_r" | bc`
    set h = `echo $secs/3600 | bc`
    set m = `echo "$secs/60 - 60*$h" | bc`
    set s = `echo $secs%60 | bc`
    
    echo "Formatted Time: $h HOUR(s) - $m MIN(s) - $s SEC(s)"
    
    Continuing @Darren's answer, just to be clear: if you want the conversion in your local time zone, don't use the -u switch, as in: date -d @$i +%T or, in some cases, date -d @"$i" +%T
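One more caveat: the date trick wraps around at 24 hours, because %T prints only the time-of-day part. For durations that may exceed a day, prefer the arithmetic approach; a quick comparison:

```shell
i=90061   # 25 hours, 1 minute, 1 second
date -u -d @"$i" +%T                                           # 01:01:01 -- the day is silently dropped
printf '%d:%02d:%02d\n' $((i/3600)) $((i%3600/60)) $((i%60))   # 25:01:01
```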

    [Feb 22, 2020] How To Use Rsync to Sync Local and Remote Directories on a VPS by Justin Ellingwood

    Feb 22, 2020 | www.digitalocean.com

    ... ... ...

    Useful Options for Rsync


    Rsync provides many options for altering the default behavior of the utility. We have already discussed some of the more necessary flags.

    If you are transferring files that have not already been compressed, like text files, you can reduce the network transfer by adding compression with the -z option:

    [Feb 18, 2020] Articles on Linux by Ken Hess

    Jul 13, 2019 | www.linuxtoday.com

    [Feb 18, 2020] Setup Local Yum Repository On CentOS 7

    Aug 27, 2014 | www.unixmen.com

    This tutorial describes how to set up a local Yum repository on a CentOS 7 system. The same steps should also work on RHEL and Scientific Linux 7 systems.

    If you often have to install software, security updates, and fixes on multiple systems in your local network, a local repository is an efficient way to do it. All required packages are downloaded over the fast LAN connection from your local server, which saves Internet bandwidth and reduces your annual Internet cost.

    In this tutorial, I use two systems as described below:

    Yum Server OS         : CentOS 7 (Minimal Install)
    Yum Server IP Address : 192.168.1.101
    Client OS             : CentOS 7 (Minimal Install)
    Client IP Address     : 192.168.1.102
    
    Prerequisites

    First, mount your CentOS 7 installation DVD. For example, let us mount the installation media on /mnt directory.

    mount /dev/cdrom /mnt/
    

    Now the CentOS installation DVD is mounted under the /mnt directory. Next, install the vsftpd package and make the packages available over FTP to your local clients.

    To do that change to /mnt/Packages directory:

    cd /mnt/Packages/
    

    Now install vsftpd package:

    rpm -ivh vsftpd-3.0.2-9.el7.x86_64.rpm
    

    Enable and start vsftpd service:

    systemctl enable vsftpd
    systemctl start vsftpd
    

    We need a package called "createrepo" to create our local repository. So let us install it too.

    If you did a minimal CentOS installation, then you might need to install the following dependencies first:

    rpm -ivh libxml2-python-2.9.1-5.el7.x86_64.rpm 
    rpm -ivh deltarpm-3.6-3.el7.x86_64.rpm 
    rpm -ivh python-deltarpm-3.6-3.el7.x86_64.rpm
    

    Now install "createrepo" package:

    rpm -ivh createrepo-0.9.9-23.el7.noarch.rpm
    
    Build Local Repository

    It's time to build our local repository. Create a storage directory to hold all the packages from the CentOS DVDs.

    As I noted above, we are going to use an FTP server to serve all packages to the client systems. So let us create a storage location under our FTP server's pub directory.

    mkdir /var/ftp/pub/localrepo
    

    Now, copy all the files from the CentOS DVD(s), i.e. from the /mnt/Packages/ directory, to the "localrepo" directory:

    cp -ar /mnt/Packages/*.* /var/ftp/pub/localrepo/
    

    Again, mount the CentOS installation DVD 2 and copy all the files to /var/ftp/pub/localrepo directory.

    Once you copied all the files, create a repository file called "localrepo.repo" under /etc/yum.repos.d/ directory and add the following lines into the file. You can name this file as per your liking:

    vi /etc/yum.repos.d/localrepo.repo
    

    Add the following lines:

    [localrepo]
    name=Unixmen Repository
    baseurl=file:///var/ftp/pub/localrepo
    gpgcheck=0
    enabled=1
    

    Note: Use three slashes(///) in the baseurl.
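The same file can also be created non-interactively with a heredoc; a sketch (written to /tmp here for illustration, the real target is /etc/yum.repos.d/localrepo.repo):

```shell
# Write the repo definition without opening an editor
cat > /tmp/localrepo.repo <<'EOF'
[localrepo]
name=Unixmen Repository
baseurl=file:///var/ftp/pub/localrepo
gpgcheck=0
enabled=1
EOF
# Quick sanity check of the baseurl line
grep '^baseurl=' /tmp/localrepo.repo
```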

    Now, start building local repository:

    createrepo -v /var/ftp/pub/localrepo/
    

    Now the repository building process will start.


    Now, list out the repositories using the following command:

    yum repolist
    

    Sample Output:

    repo id                                                                    repo name                                                                     status
    base/7/x86_64                                                              CentOS-7 - Base                                                               8,465
    extras/7/x86_64                                                            CentOS-7 - Extras                                                                30
    localrepo                                                                  Unixmen Repository                                                            3,538
    updates/7/x86_64                                                           CentOS-7 - Updates                                                              726
    

    Clean the Yum cache and update the repository lists:

    yum clean all
    yum update
    

    After creating the repository, disable or rename the existing repositories if you only want to install packages from the local repository itself.

    Alternatively, you can install packages only from the local repository by mentioning the repository as shown below.

    yum install --disablerepo="*" --enablerepo="localrepo" httpd
    

    Sample Output:

    Loaded plugins: fastestmirror
    Loading mirror speeds from cached hostfile
    Resolving Dependencies
    --> Running transaction check
    ---> Package httpd.x86_64 0:2.4.6-17.el7.centos.1 will be installed
    --> Processing Dependency: httpd-tools = 2.4.6-17.el7.centos.1 for package: httpd-2.4.6-17.el7.centos.1.x86_64
    --> Processing Dependency: /etc/mime.types for package: httpd-2.4.6-17.el7.centos.1.x86_64
    --> Processing Dependency: libaprutil-1.so.0()(64bit) for package: httpd-2.4.6-17.el7.centos.1.x86_64
    --> Processing Dependency: libapr-1.so.0()(64bit) for package: httpd-2.4.6-17.el7.centos.1.x86_64
    --> Running transaction check
    ---> Package apr.x86_64 0:1.4.8-3.el7 will be installed
    ---> Package apr-util.x86_64 0:1.5.2-6.el7 will be installed
    ---> Package httpd-tools.x86_64 0:2.4.6-17.el7.centos.1 will be installed
    ---> Package mailcap.noarch 0:2.1.41-2.el7 will be installed
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    ===============================================================================================================================================================
     Package                              Arch                            Version                                         Repository                          Size
    ===============================================================================================================================================================
    Installing:
     httpd                                x86_64                          2.4.6-17.el7.centos.1                           localrepo                          2.7 M
    Installing for dependencies:
     apr                                  x86_64                          1.4.8-3.el7                                     localrepo                          103 k
     apr-util                             x86_64                          1.5.2-6.el7                                     localrepo                           92 k
     httpd-tools                          x86_64                          2.4.6-17.el7.centos.1                           localrepo                           77 k
     mailcap                              noarch                          2.1.41-2.el7                                    localrepo                           31 k
    
    Transaction Summary
    ===============================================================================================================================================================
    Install  1 Package (+4 Dependent packages)
    
    Total download size: 3.0 M
    Installed size: 10 M
    Is this ok [y/d/N]:
    

    Disable Firewall And SELinux:

    As we are going to use the local repository only in our local area network, there is no need for the firewall and SELinux. So, to reduce complexity, I disabled both Firewalld and SELinux.

    To disable the Firewalld, enter the following commands:

    systemctl stop firewalld
    systemctl disable firewalld
    

    To disable SELinux, edit file /etc/sysconfig/selinux ,

    vi /etc/sysconfig/selinux
    

    Set SELINUX=disabled.

    [...]
    SELINUX=disabled
    [...]
    

    Reboot your server for the changes to take effect.

    Client Side Configuration

    Now, go to your client systems. Create a new repository file as shown above under /etc/yum.repos.d/ directory.

    vi /etc/yum.repos.d/localrepo.repo
    

    and add the following contents:

    [localrepo]
    name=Unixmen Repository
    baseurl=ftp://192.168.1.101/pub/localrepo
    gpgcheck=0
    enabled=1
    

    Note: Use two slashes (//) in the baseurl; 192.168.1.101 is the Yum server's IP address.

    Now, list out the repositories using the following command:

    yum repolist
    

    Clean the Yum cache and update the repository lists:

    yum clean all
    yum update
    

    Disable or rename the existing repositories if you only want to install packages from the server local repository itself.

    Alternatively, you can install packages from the local repository by mentioning the repository as shown below.

    yum install --disablerepo="*" --enablerepo="localrepo" httpd
    

    Sample Output:

    Loaded plugins: fastestmirror
    Loading mirror speeds from cached hostfile
    Resolving Dependencies
    --> Running transaction check
    ---> Package httpd.x86_64 0:2.4.6-17.el7.centos.1 will be installed
    --> Processing Dependency: httpd-tools = 2.4.6-17.el7.centos.1 for package: httpd-2.4.6-17.el7.centos.1.x86_64
    --> Processing Dependency: /etc/mime.types for package: httpd-2.4.6-17.el7.centos.1.x86_64
    --> Processing Dependency: libaprutil-1.so.0()(64bit) for package: httpd-2.4.6-17.el7.centos.1.x86_64
    --> Processing Dependency: libapr-1.so.0()(64bit) for package: httpd-2.4.6-17.el7.centos.1.x86_64
    --> Running transaction check
    ---> Package apr.x86_64 0:1.4.8-3.el7 will be installed
    ---> Package apr-util.x86_64 0:1.5.2-6.el7 will be installed
    ---> Package httpd-tools.x86_64 0:2.4.6-17.el7.centos.1 will be installed
    ---> Package mailcap.noarch 0:2.1.41-2.el7 will be installed
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    ================================================================================
     Package          Arch        Version                      Repository      Size
    ================================================================================
    Installing:
     httpd            x86_64      2.4.6-17.el7.centos.1        localrepo      2.7 M
    Installing for dependencies:
     apr              x86_64      1.4.8-3.el7                  localrepo      103 k
     apr-util         x86_64      1.5.2-6.el7                  localrepo       92 k
     httpd-tools      x86_64      2.4.6-17.el7.centos.1        localrepo       77 k
     mailcap          noarch      2.1.41-2.el7                 localrepo       31 k
    
    Transaction Summary
    ================================================================================
    Install  1 Package (+4 Dependent packages)
    
    Total download size: 3.0 M
    Installed size: 10 M
    Is this ok [y/d/N]: y
    Downloading packages:
    (1/5): apr-1.4.8-3.el7.x86_64.rpm                          | 103 kB   00:01     
    (2/5): apr-util-1.5.2-6.el7.x86_64.rpm                     |  92 kB   00:01     
    (3/5): httpd-tools-2.4.6-17.el7.centos.1.x86_64.rpm        |  77 kB   00:00     
    (4/5): httpd-2.4.6-17.el7.centos.1.x86_64.rpm              | 2.7 MB   00:00     
    (5/5): mailcap-2.1.41-2.el7.noarch.rpm                     |  31 kB   00:01     
    --------------------------------------------------------------------------------
    Total                                              1.0 MB/s | 3.0 MB  00:02     
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
      Installing : apr-1.4.8-3.el7.x86_64                                       1/5 
      Installing : apr-util-1.5.2-6.el7.x86_64                                  2/5 
      Installing : httpd-tools-2.4.6-17.el7.centos.1.x86_64                     3/5 
      Installing : mailcap-2.1.41-2.el7.noarch                                  4/5 
      Installing : httpd-2.4.6-17.el7.centos.1.x86_64                           5/5 
      Verifying  : mailcap-2.1.41-2.el7.noarch                                  1/5 
      Verifying  : httpd-2.4.6-17.el7.centos.1.x86_64                           2/5 
      Verifying  : apr-util-1.5.2-6.el7.x86_64                                  3/5 
      Verifying  : apr-1.4.8-3.el7.x86_64                                       4/5 
      Verifying  : httpd-tools-2.4.6-17.el7.centos.1.x86_64                     5/5 
    
    Installed:
      httpd.x86_64 0:2.4.6-17.el7.centos.1                                          
    
    Dependency Installed:
      apr.x86_64 0:1.4.8-3.el7                      apr-util.x86_64 0:1.5.2-6.el7   
      httpd-tools.x86_64 0:2.4.6-17.el7.centos.1    mailcap.noarch 0:2.1.41-2.el7   
    
    Complete!
    

    That's it. Now you will be able to install software from your server's local repository.

    Cheers!

    [Feb 16, 2020] Recover deleted files in Debian with TestDisk

    Images deleted; see the original link for details
    Feb 16, 2020 | vitux.com

    ... ... ...

    You can verify if the utility is indeed installed on your system and also check its version number by using the following command:

    $ testdisk --version
    

    Or,

    $ testdisk -v
    

    [Image: Check TestDisk version]

    Step 2: Run TestDisk and create a new testdisk.log file

    Use the following command in order to run the testdisk command line utility:

    $ sudo testdisk
    

    The output will give you a description of the utility. It will also let you create a testdisk.log file. This file will later include useful information about how and where your lost file was found, listed and resumed.

    [Image: Using TestDisk]

    The above output gives you three options about what to do with this file:

    Create: (recommended)- This option lets you create a new log file.

    Append: This option lets you append new information to already listed information in this file from any previous session.

    No Log: Choose this option if you do not want to record anything about the session for later use.

    Important: TestDisk is a pretty intelligent tool. It knows that many beginners will be using the utility to recover lost files, so it predicts and suggests the option you should ideally select on a particular screen. The suggested option appears highlighted. You can select an option with the up and down arrow keys and then press Enter to make your choice.

    In the above output, I would opt for creating a new log file. The system might ask you the password for sudo at this point.

    Step 3: Select your recovery drive

    The utility will now display a list of drives attached to your system. In my case, it is showing my hard drive as it is the only storage device on my system.

    [Image: Choose recovery drive]

    Select Proceed using the right and left arrow keys and hit Enter. As mentioned in the note on that screen, the correct disk capacity must be detected for a successful file recovery.

    Step 4: Select Partition Table Type of your Selected Drive

    Now that you have selected a drive, you need to specify its partition table type on the following screen:

    [Image: Choose partition table]

    The utility will automatically highlight the correct choice. Press Enter to continue.

    If you are sure that the testdisk intelligence is incorrect, you can make the correct choice from the list and then hit Enter.

    Step 5: Select the 'Advanced' option for file recovery

    When you have specified the correct drive and its partition type, the following screen will appear:

    [Image: Advanced file recovery options]

    Recovering lost files is only one of testdisk's features; the utility offers much more than that. Through the options displayed on the above screen, you can select any of those features. But here we are interested only in recovering our accidentally deleted file. For this, select the Advanced option and hit Enter.

    If you reach a point in the utility you did not intend to, you can go back with the q key.

    Step 6: Select the drive partition where you lost the file

    If your selected drive has multiple partitions, the following screen lets you choose the relevant one from them.

    [Image: Choose partition from where the file shall be recovered]

    I lost my file while I was using Linux, Debian. Make your choice and then choose the List option from the options shown at the bottom of the screen.

    This will list all the directories on your partition.

    Step 7: Browse to the directory from where you lost the file

    When the testdisk utility displays all the directories of your operating system, browse to the directory from where you deleted/lost the file. I remember that I lost the file from the Downloads folder in my home directory. So I will browse to home:

    [Image: Select directory]

    My username (sana):

    [Image: Choose user folder]

    And then the Downloads folder:

    [Image: Choose downloads]

    Tip: You can use the left arrow to go back to the previous directory.

    When you have reached your required directory, you will see the deleted files in colored or highlighted form.

    And, here I see my lost file "accidently_removed.docx" in the list. Of course, I intentionally named it this as I had to illustrate the whole process to you.

    [Image: Highlighted files]

    Step 8: Copy the deleted file to be restored

    By now, you must have found your lost file in the list. Use the C option to copy the selected file. This file will later be restored to the location you will specify in the next step:

    Step 9: Specify the location where the found file will be restored

    Now that we have copied the lost file, the testdisk utility will display the following screen so that we can specify where to restore it.

    You can specify any accessible location; the utility simply copies the restored file there.

    I am specifically selecting the location from where I lost the file, my Downloads folder:

    [Image: Choose location to restore file]

    Step 10: Copy/restore the file to the selected location

    After selecting where you want to restore the file, press the C key. This will restore your file to that location:

    [Image: Restored file successfully]

    See the text in green in the above screenshot? This is actually great news. Now my file is restored on the specified location.

    This might seem to be a slightly long process but it is definitely worth getting your lost file back. The restored file will most probably be in a locked state. This means that only an authorized user can access and open it.

    We all need this tool time and again, but if you want to remove it until you need it again, you can do so with the following command:

    $ sudo apt-get remove testdisk
    

    You can also delete the testdisk.log file if you want. It is such a relief to get your lost file back!

    Recover deleted files in Debian with TestDisk, by Karim Buzdar, February 11, 2020 (Debian, Linux, Shell).

    [Feb 16, 2020] A List Of Useful Console Services For Linux Users by sk

    Images deleted; see the original link for details
    Feb 13, 2020 | www.ostechnix.com
    Cheatsheets for Linux/Unix commands

    You have probably heard about cheat.sh . I use this service every day! It is one of the most useful services for Linux users: it displays concise Linux command examples.

    For instance, to view the curl command cheatsheet , simply run the following command from your console:

    $ curl cheat.sh/curl
    

    It is that simple! You don't need to go through man pages or use any online resources to learn about commands. It can get you the cheatsheets of most Linux and Unix commands in a couple of seconds.

    ls command cheatsheet:

    $ curl cheat.sh/ls
    

    find command cheatsheet:

    $ curl cheat.sh/find
    

    It is a highly recommended tool!


    Recommended read:


    ... ... ...

    IP Address

    We can find the local IP address using the ip command. But what about the public IP address? That is simple too!

    To find your public IP address, just run the following commands from your Terminal:

    $ curl ipinfo.io/ip
    157.46.122.176
    
    $ curl eth0.me
    157.46.122.176
    
    $ curl checkip.amazonaws.com
    157.46.122.176
    
    $ curl icanhazip.com
    2409:4072:631a:c033:cc4b:4d25:e76c:9042
    

    There is also a console service that displays the IP address in JSON format.

    $ curl httpbin.org/ip
    {
      "origin": "157.46.122.176"
    }
    
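The JSON form is handy in scripts; a sketch of extracting the address with sed (the sample response is inlined here because the real call needs network access):

```shell
# Extract the "origin" field from an httpbin-style JSON response;
# on a live system, replace the echo with: curl -s httpbin.org/ip
echo '{ "origin": "157.46.122.176" }' | sed -n 's/.*"origin": "\([^"]*\)".*/\1/p'
```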

    ... ... ...

    Dictionary

    Want to know the meaning of an English word? Here is how you can get the meaning of the word gustatory:

    $ curl 'dict://dict.org/d:gustatory'
    220 pan.alephnull.com dictd 1.12.1/rf on Linux 4.4.0-1-amd64 <auth.mime> <[email protected]>
    250 ok
    150 1 definitions retrieved
    151 "Gustatory" gcide "The Collaborative International Dictionary of English v.0.48"
    Gustatory \Gust"a*to*ry\, a.
    Pertaining to, or subservient to, the sense of taste; as, the
    gustatory nerve which supplies the front of the tongue.
    [1913 Webster]
    .
    250 ok [d/m/c = 1/0/16; 0.000r 0.000u 0.000s]
    221 bye [d/m/c = 0/0/0; 0.000r 0.000u 0.000s]
    
    Text sharing

    You can share texts via some console services. These text sharing services are often useful for sharing code.

    Here is an example.

    $ echo "Welcome To OSTechNix!" | curl -F 'f:1=<-' ix.io
    http://ix.io/2bCA
    

    The above command will share the text "Welcome To OSTechNix!" via the ix.io site. Anyone can access this text from a web browser by navigating to the URL http://ix.io/2bCA

    Another example:

    $ echo "Welcome To OSTechNix!" | curl -F file=@- 0x0.st
    http://0x0.st/i-0G.txt
    
    File sharing

    Not just text: we can even share files with anyone using a console service called filepush .

    $ curl --upload-file ostechnix.txt filepush.co/upload/ostechnix.txt
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100    72    0     0  100    72      0     54  0:00:01  0:00:01 --:--:--    54http://filepush.co/8x6h/ostechnix.txt
    100   110  100    38  100    72     27     53  0:00:01  0:00:01 --:--:--    81
    

    The above command will upload the ostechnix.txt file to the filepush.co site. You can access this file from anywhere by navigating to the link http://filepush.co/8x6h/ostechnix.txt

    Another text sharing console service is termbin :

    $ echo "Welcome To OSTechNix!" | nc termbin.com 9999
    

    There is also another console service named transfer.sh , but it wasn't working at the time of writing this guide.

    Browser

    There are many text browsers available for Linux. Browsh is one of them, and you can access it right from your terminal using the command:

    $ ssh brow.sh
    

    Browsh is a modern text browser that supports graphics, including video. Technically speaking, it is not so much a browser as a terminal front-end to one: it uses headless Firefox to render the web page and then converts it to ASCII art. Refer to the original article for more details.

    Create QR codes for given string

    Do you want to create QR-codes for a given string? That's easy!

    $ curl qrenco.de/ostechnix
    

    Here is the QR code for "ostechnix" string.

    URL Shorteners

    Want to shorten long URLs to make them easier to post or share with your friends? Use the TinyURL console service:

    $ curl -s http://tinyurl.com/api-create.php?url=https://www.ostechnix.com/pigz-compress-and-decompress-files-in-parallel-in-linux/
    http://tinyurl.com/vkc5c5p
    

    [Feb 09, 2020] How To Install And Configure Chrony As NTP Client

    See also chrony Comparison of NTP implementations
    Another installation manual Steps to configure Chrony as NTP Server & Client (CentOS-RHEL 8)
    Feb 09, 2020 | www.2daygeek.com

    It can synchronize the system clock faster and with better time accuracy, and it is especially useful for systems that are not online all the time.

    Chronyd is smaller in size, uses less system memory, and wakes up the CPU only when necessary, which is better for power saving.

    It can perform well even when the network is congested for longer periods of time.

    You can use any of the commands below to check Chrony's status.

    To check chrony tracking status:

    # chronyc tracking
    
    Reference ID    : C0A80105 (CentOS7.2daygeek.com)
    Stratum         : 3
    Ref time (UTC)  : Thu Mar 28 05:57:27 2019
    System time     : 0.000002545 seconds slow of NTP time
    Last offset     : +0.001194361 seconds
    RMS offset      : 0.001194361 seconds
    Frequency       : 1.650 ppm fast
    Residual freq   : +184.101 ppm
    Skew            : 2.962 ppm
    Root delay      : 0.107966967 seconds
    Root dispersion : 1.060455322 seconds
    Update interval : 2.0 seconds
    Leap status     : Normal
    

    Run the sources command to display information about the current time sources.

    # chronyc sources
    
    210 Number of sources = 1
    MS Name/IP address         Stratum Poll Reach LastRx Last sample               
    ===============================================================================
    ^* CentOS7.2daygeek.com          2   6    17    62    +36us[+1230us] +/- 1111ms
    

    [Feb 05, 2020] How to disable startup graphic in CentOS

    Feb 05, 2020 | forums.centos.org


    See also https://www.youtube.com/watch?v=oFl40XzlXp4

    [Feb 05, 2020] Disable startup graphic

    This is still a problem today... See also centOS 7 hung at "Starting Plymouth switch root service"
    Feb 05, 2020 | forums.centos.org
    disable startup graphic

    Post by neuronetv " 2014/08/20 22:24:51

    I can't figure out how to disable the startup graphic in centos 7 64bit. In centos 6 I always did it by removing "rhgb quiet" from /boot/grub/grub.conf but there is no grub.conf in centos 7. I also tried yum remove rhgb but that wasn't present either.
    <moan> I've never understood why the devs include this startup graphic, I see loads of users like me who want a text scroll instead.</moan>
    Thanks for any help. Top
    User avatar TrevorH
    Forum Moderator
    Posts: 27492
    Joined: 2009/09/24 10:40:56
    Location: Brighton, UK
    Re: disable startup graphic

    Post by TrevorH " 2014/08/20 23:09:40

    The file to amend now is /boot/grub2/grub.cfg and also /etc/default/grub. If you only amend the defaults file, then you need to run grub2-mkconfig -o /boot/grub2/grub.cfg afterwards to get a new file generated. You can also edit the grub.cfg file directly, though your changes will be wiped out on the next kernel install if you don't also edit the 'default' file. CentOS 6 will die in November 2020 - migrate sooner rather than later!
    CentOS 5 has been EOL for nearly 3 years and should no longer be used for anything!
    Full time Geek, part time moderator. Use the FAQ Luke Top
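    The edit TrevorH describes can be sketched as follows. This assumes a CentOS 7-style /etc/default/grub with "rhgb quiet" on the kernel command line; to be safe to run as-is, the sketch operates on a throwaway copy rather than the real file:

    ```shell
    # Work on a temporary copy of a typical CentOS 7 defaults file
    # (assumption: your real /etc/default/grub has "rhgb quiet" on this line).
    grub_copy=$(mktemp)
    printf 'GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet"\n' > "$grub_copy"

    # Remove "rhgb quiet" so boot messages scroll instead of the graphical splash
    sed -i 's/ rhgb quiet//' "$grub_copy"
    cat "$grub_copy"
    # GRUB_CMDLINE_LINUX="crashkernel=auto"

    # On the real system, edit /etc/default/grub itself and then regenerate
    # the active config (needs root):
    #   grub2-mkconfig -o /boot/grub2/grub.cfg
    ```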
    neuronetv
    Posts: 76
    Joined: 2012/01/08 21:53:07
    Re: disable startup graphic

    Post by neuronetv " 2014/08/21 13:12:45

    thanks for that, I did the edits and now the scroll is back. Top
    larryg
    Posts: 3
    Joined: 2014/07/17 04:48:28
    Re: disable startup graphic

    Post by larryg " 2014/08/21 19:27:16

    The preferred method to do this is using the command plymouth-set-default-theme.

    If you enter this command, without parameters, as user root you'll see something like
    >plymouth-set-default-theme
    charge
    details
    text

    This lists the themes installed on your computer. The default is 'charge'. If you want to see the boot up details you used to see in version 6, try
    >plymouth-set-default-theme details

    Followed by the command
    >dracut -f

    Then reboot.

    This process modifies the boot loader so you won't have to update your grub.conf file manually every time for each new kernel update.

    There are numerous themes available you can download from CentOS or in general. Just google 'plymouth themes' to see other possibilities, if you're looking for graphics type screens. Top

    User avatar TrevorH
    Forum Moderator
    Posts: 27492
    Joined: 2009/09/24 10:40:56
    Location: Brighton, UK
    Re: disable startup graphic

    Post by TrevorH " 2014/08/21 22:47:49

    Editing /etc/default/grub to remove rhgb quiet makes it permanent too. Top
    MalAdept
    Posts: 1
    Joined: 2014/11/02 20:06:27
    Re: disable startup graphic

    Post by MalAdept " 2014/11/02 20:23:37

    I tried both TrevorH's and LarryG's methods, and LarryG wins.

    Editing /etc/default/grub to remove "rhgb quiet" gave me the scrolling boot messages I want, but it reduced maximum display resolution (nouveau driver) from 1920x1080 to 1024x768! I put "rhgb quiet" back in and got my 1920x1080 back.

    Then I tried "plymouth-set-default-theme details; dracut -f", and got verbose booting without loss of display resolution. Thanks LarryG! Top

    dunwell
    Posts: 116
    Joined: 2010/12/20 18:49:52
    Location: Colorado
    Contact: Contact dunwell
    Re: disable startup graphic

    Post by dunwell " 2015/12/13 00:17:18

    I have used this mod to get back the details for grub boot, thanks to all for that info.

    However when I am watching it fills the page and then rather than scrolling up as it did in V5 it blanks and starts again at the top. Of course there is FAIL message right before it blanks :lol: that I want to see and I can't slam the Scroll Lock fast enough to catch it. Anyone know how to get the details to scroll up rather than the blank and re-write?

    Alan D. Top

    aks
    Posts: 2915
    Joined: 2014/09/20 11:22:14
    Re: disable startup graphic

    Post by aks " 2015/12/13 09:15:51

    Yeah, the scroll lock/ctrl+q/ctrl+s will not work with systemd; you can't pause the screen like you used to be able to (it was a design choice, due to parallel daemon launching, apparently).
    If you do boot, you can always use journalctl to view the logs.
    In Fedora you can use journalctl --list-boots to list boots (not 100% sure about CentOS 7.x - perhaps in 7.1 or 7.2?). You can also use things like journalctl --boot=-1 (the last boot), and parse the log at your leisure. Top
    dunwell
    Posts: 116
    Joined: 2010/12/20 18:49:52
    Location: Colorado
    Contact: Contact dunwell
    Re: disable startup graphic

    Post by dunwell " 2015/12/13 14:18:29

    aks wrote: Yeah, the scroll lock/ctrl+q/ctrl+s will not work with systemd; you can't pause the screen like you used to be able to (it was a design choice, due to parallel daemon launching, apparently).
    If you do boot, you can always use journalctl to view the logs.
    In Fedora you can use journalctl --list-boots to list boots (not 100% sure about CentOS 7.x - perhaps in 7.1 or 7.2?). You can also use things like journalctl --boot=-1 (the last boot), and parse the log at your leisure.
    Thanks for the followup aks. Actually I have found that the Scroll Lock does pause (Ctrl-S/Q not) but it all goes by so fast that I'm not fast enough to stop it before the screen blanks and then starts writing again. What I am really wondering is how to get the screen to scroll up when it gets to the bottom of the screen rather than blanking and starting to write again at the top. That is annoying! :x

    Alan D. Top

    aks
    Posts: 2915
    Joined: 2014/09/20 11:22:14
    Re: disable startup graphic

    Post by aks " 2015/12/13 19:14:29

    Yes it is and no you can't. Kudos to Lennart for making our lives so much shitter....

    [Feb 05, 2020] How do deactivate plymouth boot screen?

    Jan 01, 2012 | askubuntu.com

    Jo-Erlend Schinstad , 2012-01-25 22:06:57

    Lately, booting Ubuntu on my desktop has become seriously slow. We're talking two minutes. It used to take 10-20 seconds. Because of plymouth, I can't see what's going on. I would like to deactivate it, but not really uninstall it. What's the quickest way to do that? I'm using Precise, but I suspect a solution for 11.10 would work just as well.

    WinEunuuchs2Unix , 2017-07-21 22:08:06

    Did you try: sudo update-initramfs – mgajda Jun 19 '12 at 0:54


    Panther ,

    Easiest quick fix is to edit the grub line as you boot.

    Hold down the shift key so you see the menu. Hit the e key to edit

    Edit the 'linux' line, remove the 'quiet' and 'splash'

    To disable it in the long run

    Edit /etc/default/grub

    Change the line – GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" to

    GRUB_CMDLINE_LINUX_DEFAULT=""
    

    And then update grub

    sudo update-grub
    

    Panther , 2016-10-27 15:43:04

    Removing quiet and splash removes the splash, but I still only have a purple screen with no text. What I want to do, is to see the actual boot messages. – Jo-Erlend Schinstad Jan 25 '12 at 22:25

    Tuminoid ,

    How about pressing CTRL+ALT+F2 for a console, allowing you to see what's going on. You can go back to GUI/Plymouth with CTRL+ALT+F7 .

    Don't have my laptop here right now, but IIRC Plymouth has an upstart job in /etc/init , named plymouth???.conf; renaming that probably achieves what you want in a more permanent manner.

    Jānis Elmeris , 2013-12-03 08:46:54

    No, there's nothing on the other consoles. – Jo-Erlend Schinstad Jan 25 '12 at 22:22

    [Feb 01, 2020] Basic network troubleshooting in Linux with nmap Enable Sysadmin

    Feb 01, 2020 | www.redhat.com

    Determine this host's OS with the -O switch:

    $ sudo nmap -O <Your-IP>
    

    The results look like this:

    ....

    [ You might also like: Six practical use cases for Nmap ]

    Then, run the following to check the 2000 most common ports, which handle the common TCP and UDP services. Here, -Pn is used to skip host discovery (the ping scan), assuming that the host is up:

    $ sudo nmap -sS -sU -PN <Your-IP>
    

    The results look like this:

    ...

    Note: The -Pn option is also useful for checking whether the host firewall is blocking ICMP requests.

    Also, as an extension to the above command, if you need to scan all ports instead of only the top 2000, you can use the following to scan ports 1-65535:

    $ sudo nmap -sS -sU -PN -p 1-65535 <Your-IP>
    

    The results look like this:

    ...

    You can also scan only for TCP ports (default 1000) by using the following:

    $ sudo nmap -sT <Your-IP>
    

    The results look like this:

    ...

    Now, after all of these checks, you can also perform the "all" aggressive scans with the -A option, which tells Nmap to perform OS and version checking using -T4 as a timing template that tells Nmap how fast to perform this scan (see the Nmap man page for more information on timing templates):

    $ sudo nmap -A -T4 <Your-IP>
    

    The results look like this, and are shown here in two parts:

    ...

    There you go. These are the most common and useful Nmap commands. Together, they provide sufficient network, OS, and open port information, which is helpful in troubleshooting. Feel free to comment with your preferred Nmap commands as well.

    [ Readers also liked: My 5 favorite Linux sysadmin tools ]

    Related Stories:

    [Jan 25, 2020] timeout is a command-line utility that runs a specified command and terminates it if it is still running after a given period of time

    You can achieve the same effect with the at command, which allows more flexible time patterns.
    Jan 23, 2020 | linuxize.com

    timeout is a command-line utility that runs a specified command and terminates it if it is still running after a given period of time. In other words, timeout allows you to run a command with a time limit. The timeout command is a part of the GNU core utilities package which is installed on almost any Linux distribution.

    It is handy when you want to run a command that doesn't have a built-in timeout option.

    In this article, we will explain how to use the Linux timeout command.

    How to Use the timeout Command #

    The syntax for the timeout command is as follows:

    timeout [OPTIONS] DURATION COMMAND [ARG]
    

    The DURATION can be a positive integer or a floating-point number, followed by an optional unit suffix:

    s - seconds (the default)
    m - minutes
    h - hours
    d - days

    When no unit is used, it defaults to seconds. If the duration is set to zero, the associated timeout is disabled.

    The command options must be provided before the arguments.

    Here are a few basic examples demonstrating how to use the timeout command:
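    For instance, using sleep as a harmless stand-in for a long-running command:

    ```shell
    # Kill "sleep 5" after 1 second; timeout itself then exits with status 124
    timeout 1 sleep 5
    echo "exit status: $?"
    # exit status: 124

    # Here the command finishes within the limit, so its own exit status (0) is kept
    timeout 3 sleep 1
    echo "exit status: $?"
    # exit status: 0
    ```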

    If you want to run a command that requires elevated privileges such as tcpdump , prepend sudo before timeout :

    sudo timeout 300 tcpdump -n -w data.pcap
    
    Sending Specific Signal #

    If no signal is given, timeout sends the SIGTERM signal to the managed command when the time limit is reached. You can specify which signal to send using the -s ( --signal ) option.

    For example, to send SIGKILL to the ping command after one minute you would use:

    sudo timeout -s SIGKILL ping 8.8.8.8
    

    The signal can be specified by its name like SIGKILL or its number like 9 . The following command is identical to the previous one:

    sudo timeout -s 9 ping 8.8.8.8
    

    To get a list of all available signals, use the kill -l command:

    kill -l
    
    Killing Stuck Processes #

    SIGTERM , the default signal that is sent when the time limit is exceeded, can be caught or ignored by some processes. In those situations, the process continues to run after the termination signal is sent.

    To make sure the monitored command is killed, use the -k ( --kill-after ) option followed by a time period. When this option is used, once the given time limit is reached the timeout command sends the SIGKILL signal, which cannot be caught or ignored, to the managed program.

    In the following example, timeout runs the command for one minute, and if it is not terminated, it will kill it after ten seconds:

    sudo timeout -k 10 1m ping 8.8.8.8
    


    Preserving the Exit Status #

    timeout returns 124 when the time limit is reached. Otherwise, it returns the exit status of the managed command.

    To return the exit status of the command even when the time limit is reached, use the --preserve-status option:

    timeout --preserve-status 5 ping 8.8.8.8
    
    Running in Foreground #

    By default, timeout runs the managed command in the background. If you want to run the command in the foreground, use the --foreground option:

    timeout --foreground 5m ./script.sh
    

    This option is useful when you want to run an interactive command that requires user input.

    Conclusion #

    The timeout command is used to run a given command with a time limit.

    timeout is a simple command that doesn't have a lot of options. Typically you will invoke timeout only with two arguments, the duration, and the managed command.

    If you have any questions or feedback, feel free to leave a comment.


    [Jan 16, 2020] Watch Command in Linux

    Jan 16, 2020 | linuxhandbook.com

    Last Updated on January 10, 2020 By Abhishek Leave a Comment

    Watch is a great utility that automatically refreshes data. Some of the more common uses for this command involve monitoring system processes or logs, but it can be used in combination with pipes for more versatility.
    watch [options] [command]
    
    Watch command examples
    Watch Command

    Using the watch command without any options will use the default refresh interval of 2.0 seconds.

    As I mentioned before, one of the more common uses is monitoring system processes. Let's use it with the free command . This will give you up to date information about our system's memory usage.

    watch free
    

    Yes, it is that simple my friends.

    Every 2.0s: free                                pop-os: Wed Dec 25 13:47:59 2019
    
                  total        used        free      shared  buff/cache   available
    Mem:       32596848     3846372    25571572      676612     3178904    27702636
    Swap:             0           0           0
    
    Adjust refresh rate of watch command

    You can easily change how quickly the output is updated using the -n flag.

    watch -n 10 free
    
    Every 10.0s: free                               pop-os: Wed Dec 25 13:58:32 2019
    
                  total        used        free      shared  buff/cache   available
    Mem:       32596848     4522508    24864196      715600     3210144    26988920
    Swap:             0           0           0
    

    This changes from the default 2.0 second refresh to 10.0 seconds as you can see in the top left corner of our output.

    Remove title or header info from watch command output
    watch -t free
    

    The -t flag removes the title/header information to clean up the output. The information will still refresh every 2 seconds, but you can change that by combining it with the -n option.

                  total        used        free      shared  buff/cache   available
    Mem:       32596848     3683324    25089268     1251908     3824256    27286132
    Swap:             0           0           0
    
    Highlight the changes in watch command output

    You can add the -d option and watch will automatically highlight changes for us. Let's take a look at this using the date command. I've included a screen capture to show how the highlighting behaves.

    Watch Command
    Using pipes with watch

    You can combine items using pipes. This is not a feature exclusive to watch, but it enhances the functionality of this software. Pipes rely on the | symbol. Not coincidentally, this is called a pipe symbol or sometimes a vertical bar symbol.

    watch "cat /var/log/syslog | tail -n 3"
    

    While this command runs, it will list the last 3 lines of the syslog file. The list will be refreshed every 2 seconds and any changes will be displayed.

    Every 2.0s: cat /var/log/syslog | tail -n 3                                                      pop-os: Wed Dec 25 15:18:06 2019
    
    Dec 25 15:17:24 pop-os dbus-daemon[1705]: [session uid=1000 pid=1705] Successfully activated service 'org.freedesktop.Tracker1.Min
    er.Extract'
    Dec 25 15:17:24 pop-os systemd[1591]: Started Tracker metadata extractor.
    Dec 25 15:17:45 pop-os systemd[1591]: tracker-extract.service: Succeeded.
    

    Conclusion

    Watch is a simple, but very useful utility. I hope I've given you ideas that will help you improve your workflow.

    This is a straightforward command, but there are a wide range of potential uses. If you have any interesting uses that you would like to share, let us know about them in the comments.

    [Jan 16, 2020] Linux tools How to use the ss command by Ken Hess (Red Hat)

    ss is the Swiss Army Knife of system statistics commands. It's time to say buh-bye to netstat and hello to ss.
    Jan 13, 2020 | www.redhat.com

    If you're like me, you still cling to soon-to-be-deprecated commands like ifconfig , nslookup , and netstat . The new replacements are ip , dig , and ss , respectively. It's time to (reluctantly) let go of legacy utilities and head into the future with ss . The ip command is worth a mention here because part of netstat 's functionality has been replaced by ip . This article covers the essentials for the ss command so that you don't have to dig (no pun intended) for them.


    Formally, ss is the socket statistics command that replaces netstat . In this article, I provide netstat commands and their ss replacements. Michael Prokop, the developer of ss , made it easy for us to transition into ss from netstat by making some of netstat 's options operate in much the same fashion in ss .

    For example, to display TCP sockets, use the -t option:

    $ netstat -t
    Active Internet connections (w/o servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State      
    tcp        0      0 rhel8:ssh               khess-mac:62036         ESTABLISHED
    
    $ ss -t
    State         Recv-Q          Send-Q                    Local Address:Port                   Peer Address:Port          
    ESTAB         0               0                          192.168.1.65:ssh                    192.168.1.94:62036
    

    You can see that the information given is essentially the same, but to better mimic what you see in the netstat command, use the -r (resolve) option:

    $ ss -tr
    State            Recv-Q             Send-Q                          Local Address:Port                         Peer Address:Port             
    ESTAB            0                  0                                       rhel8:ssh                             khess-mac:62036
    

    And to see port numbers rather than their translations, use the -n option:

    $ ss -ntr
    State            Recv-Q             Send-Q                          Local Address:Port                         Peer Address:Port             
    ESTAB            0                  0                                       rhel8:22                              khess-mac:62036
    

    It isn't 100% necessary that netstat and ss mesh, but it does make the transition a little easier. So, try your standby netstat options before hitting the man page or the internet for answers, and you might be pleasantly surprised at the results.

    For example, the netstat command with the old standby options -an yield comparable results (which are too long to show here in full):

    $ netstat -an |grep LISTEN
    
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN     
    tcp6       0      0 :::22                   :::*                    LISTEN     
    unix  2      [ ACC ]     STREAM     LISTENING     28165    /run/user/0/systemd/private
    unix  2      [ ACC ]     STREAM     LISTENING     20942    /var/lib/sss/pipes/private/sbus-dp_implicit_files.642
    unix  2      [ ACC ]     STREAM     LISTENING     28174    /run/user/0/bus
    unix  2      [ ACC ]     STREAM     LISTENING     20241    /var/run/lsm/ipc/simc
    <truncated>
    
    $ ss -an |grep LISTEN
    
    u_str             LISTEN              0                    128                                             /run/user/0/systemd/private 28165                  * 0                   
                                                                
    u_str             LISTEN              0                    128                   /var/lib/sss/pipes/private/sbus-dp_implicit_files.642 20942                  * 0                   
                                                                
    u_str             LISTEN              0                    128                                                         /run/user/0/bus 28174                  * 0                   
                                                                
    u_str             LISTEN              0                    5                                                     /var/run/lsm/ipc/simc 20241                  * 0                   
    <truncated>
    

    The TCP entries fall at the end of the ss command's display and at the beginning of netstat 's. So, there are layout differences even though the displayed information is really the same.

    If you're wondering which netstat commands have been replaced by the ip command, here's one for you:

    $ netstat -g
    IPv6/IPv4 Group Memberships
    Interface       RefCnt Group
    --------------- ------ ---------------------
    lo              1      all-systems.mcast.net
    enp0s3          1      all-systems.mcast.net
    lo              1      ff02::1
    lo              1      ff01::1
    enp0s3          1      ff02::1:ffa6:ab3e
    enp0s3          1      ff02::1:ff8d:912c
    enp0s3          1      ff02::1
    enp0s3          1      ff01::1
    
    $ ip maddr
    1:	lo
    	inet  224.0.0.1
    	inet6 ff02::1
    	inet6 ff01::1
    2:	enp0s3
    	link  01:00:5e:00:00:01
    	link  33:33:00:00:00:01
    	link  33:33:ff:8d:91:2c
    	link  33:33:ff:a6:ab:3e
    	inet  224.0.0.1
    	inet6 ff02::1:ffa6:ab3e
    	inet6 ff02::1:ff8d:912c
    	inet6 ff02::1
    	inet6 ff01::1
    

    The ss command isn't perfect (sorry, Michael). In fact, there is one significant ss bummer. You can try this one for yourself to compare the two:

    $ netstat -s 
    
    Ip:
        Forwarding: 2
        6231 total packets received
        2 with invalid addresses
        0 forwarded
        0 incoming packets discarded
        3104 incoming packets delivered
        2011 requests sent out
        243 dropped because of missing route
    <truncated>
    
    $ ss -s
    
    Total: 182
    TCP:   3 (estab 1, closed 0, orphaned 0, timewait 0)
    
    Transport Total     IP        IPv6
    RAW	  1         0         1        
    UDP	  3         2         1        
    TCP	  3         2         1        
    INET	  7         4         3        
    FRAG	  0         0         0
    

    If you figure out how to display the same info with ss , please let me know.

    Maybe as ss evolves, it will include more features. I guess Michael or someone else could always just look at the netstat command to glean those statistics from it. As for me, I prefer netstat , and I'm not sure exactly why it's being deprecated in favor of ss . The output from ss is less human-readable in almost every instance.

    What do you think? What about ss makes it a better option than netstat ? I suppose I could ask the same question of the other net-tools utilities as well. I don't find anything wrong with them. In my mind, unless you're significantly improving an existing utility, why bother deprecating the other?

    There, you have the ss command in a nutshell. As netstat fades into oblivion, I'm sure I'll eventually embrace ss as its successor.

    Want more on networking topics? Check out the Linux networking cheat sheet .

    Ken Hess is an Enable SysAdmin Community Manager and an Enable SysAdmin contributor. Ken has used Red Hat Linux since 1996 and has written ebooks, whitepapers, actual books, thousands of exam review questions, and hundreds of articles on open source and other topics. More about me

    [Jan 16, 2020] Thirteen Useful Tools for Working with Text on the Command Line - Make Tech Easier

    Jan 16, 2020 | www.maketecheasier.com

    Thirteen Useful Tools for Working with Text on the Command Line By Karl Wakim Posted on Jan 9, 2020 Jan 9, 2020 in Linux Text Tool Linux Command Line Featured

    GNU/Linux distributions include a wealth of programs for handling text, most of which are provided by the GNU core utilities. There's somewhat of a learning curve, but these utilities can prove very useful and efficient when used correctly.

    Here are thirteen powerful text manipulation tools every command-line user should know.

    1. cat

    Cat was designed to concatenate files but is most often used to display a single file. Without any arguments, cat reads standard input until Ctrl + D is pressed (from the terminal, or from another program's output if using a pipe). Standard input can also be explicitly specified with a - .

    Cat has a number of useful options, notably:

    -n - number all output lines
    -b - number only non-empty output lines
    -s - squeeze repeated blank lines into a single one
    -A - show non-printing characters

    In the following example, we are concatenating and numbering the contents of file1, standard input, and file3.

    cat -n file1 - file3
    
    Linux Text Tools Cat
    2. sort

    As its name suggests, sort sorts file contents alphabetically and numerically.

    Linux Text Tools Sort
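    A minimal sketch (the file names here are made up for illustration):

    ```shell
    printf 'banana\napple\ncherry\n' > fruits.txt
    sort fruits.txt
    # apple
    # banana
    # cherry

    # A plain lexical sort would put "10" before "2"; use -n for numeric order
    printf '10\n2\n33\n' > numbers.txt
    sort -n numbers.txt
    # 2
    # 10
    # 33
    ```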
    3. uniq

    Uniq takes a sorted file and removes duplicate lines. It is often chained with sort in a single command.

    Linux Text Tools Uniq
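    Because uniq only collapses adjacent duplicate lines, the input is usually piped through sort first:

    ```shell
    printf 'b\na\nb\na\n' > letters.txt
    sort letters.txt | uniq
    # a
    # b

    # -c prefixes each surviving line with its occurrence count
    sort letters.txt | uniq -c
    ```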
    4. comm

    Comm is used to compare two sorted files, line by line. It outputs three columns: the first two columns contain lines unique to the first and second file respectively, and the third displays those found in both files.

    Linux Text Tools Comm
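    A small sketch with two illustrative files; note that both inputs must already be sorted:

    ```shell
    printf 'apple\nbanana\n' > a.txt
    printf 'banana\ncherry\n' > b.txt

    # Three columns: only in a.txt, only in b.txt, in both
    comm a.txt b.txt

    # -12 suppresses columns 1 and 2, leaving only the common lines
    comm -12 a.txt b.txt
    # banana
    ```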
    5. cut

    Cut is used to retrieve specific sections of lines, based on characters, fields, or bytes. It can read from a file or from standard input if no file is specified.

    Cutting by character position

    The -c option specifies a single character position or one or more ranges of characters.

    For example:

    Linux Text Tools Cut Char

    Cutting by field

    Fields are separated by a delimiter consisting of a single character, which is specified with the -d option. The -f option selects a field position or one or more ranges of fields using the same format as above.

    Linux Text Tools Cut Field
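    Both selection modes can be sketched in a couple of lines:

    ```shell
    # Characters 1 through 4 of each line
    echo 'abcdefgh' | cut -c 1-4
    # abcd

    # Field 2 of a colon-delimited record (think /etc/passwd)
    echo 'alpha:beta:gamma' | cut -d: -f2
    # beta
    ```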
    6. dos2unix

    GNU/Linux and Unix usually terminate text lines with a line feed (LF), while Windows uses carriage return and line feed (CRLF). Compatibility issues can arise when handling CRLF text on Linux, which is where dos2unix comes in. It converts CRLF terminators to LF.

    In the following example, the file command is used to check the text format before and after using dos2unix .

    Linux Text Tools Dos2unix
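    dos2unix is not always installed by default; as a rough sketch of the core conversion, tr can strip the carriage returns (the file names here are illustrative):

    ```shell
    # Create a CRLF-terminated (Windows-style) file: 20 bytes
    printf 'line one\r\nline two\r\n' > dos.txt

    # Strip the \r characters, leaving LF-only (Unix-style) endings: 18 bytes
    tr -d '\r' < dos.txt > unix.txt
    wc -c dos.txt unix.txt

    # With dos2unix installed, the in-place equivalent is simply:
    #   dos2unix dos.txt
    ```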
    7. fold

    To make long lines of text easier to read and handle, you can use fold , which wraps lines to a specified width.

    Fold strictly matches the specified width by default, breaking words where necessary.

    fold -w 30 longline.txt
    

    If breaking words is undesirable, you can use the -s option to break at spaces.

    fold -w 30 -s longline.txt
    
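    As a self-contained sketch of the two commands above (the sample line is invented):

```shell
printf 'The quick brown fox jumps over the lazy dog again and again\n' > /tmp/longline.txt
fold -w 30 /tmp/longline.txt      # may break mid-word
fold -w 30 -s /tmp/longline.txt   # breaks at spaces instead
```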
    8. iconv

    This tool converts text from one encoding to another, which is very useful when dealing with unusual encodings.

    iconv -f input_encoding -t output_encoding -o output_file input_file
    

    Note: you can list the available encodings with iconv -l
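    For instance (file names invented; this assumes an iconv with UTF-8 and ISO-8859-1 support, which glibc provides). The letter é is two bytes in UTF-8 but one byte in ISO-8859-1:

```shell
printf 'caf\303\251\n' > /tmp/utf8.txt          # "café" in UTF-8 (6 bytes)
iconv -f UTF-8 -t ISO-8859-1 /tmp/utf8.txt > /tmp/latin1.txt
wc -c /tmp/utf8.txt /tmp/latin1.txt             # 6 vs 5 bytes
```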

    9. sed

    sed is a powerful and flexible stream editor, most commonly used to find and replace strings with the following syntax.

    The following command will read from the specified file (or standard input), replacing the parts of text that match the regular expression pattern with the replacement string and outputting the result to the terminal.

    sed 's/pattern/replacement/g' filename
    

    To modify the original file instead, you can use the -i flag.

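    A concrete sketch with made-up input; the trailing g makes the substitution global (every match on each line), not just the first:

```shell
printf 'one cat, two cats\n' | sed 's/cat/dog/g'
```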
    10. wc

    The wc utility prints the number of bytes, characters, words, or lines in a file.

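    For example (sample input invented):

```shell
printf 'one two\nthree\n' | wc -l   # 2 lines
printf 'one two\nthree\n' | wc -w   # 3 words
printf 'one two\nthree\n' | wc -c   # 14 bytes
```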
    11. split

    You can use split to divide a file into smaller files, by number of lines, by size, or to a specific number of files.

    Splitting by number of lines

    split -l num_lines input_file output_prefix
    

    Splitting by bytes

    split -b bytes input_file output_prefix
    

    Splitting to a specific number of files

    split -n num_files input_file output_prefix
    
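    A runnable sketch of splitting by line count (the prefix and file names are invented); GNU split names the pieces with an aa, ab, ... suffix:

```shell
seq 10 > /tmp/ten.txt            # ten lines: 1..10
split -l 3 /tmp/ten.txt /tmp/splitdemo_
# Three 3-line chunks plus one 1-line remainder
wc -l /tmp/splitdemo_*
```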
    12. tac

    Tac, which is cat in reverse, does exactly that: it displays files with the lines in reverse order.

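    For instance:

```shell
# tac reverses the order of lines, not the characters within them
printf 'first\nsecond\nthird\n' | tac
```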
    13. tr

    The tr tool is used to translate or delete sets of characters.

    A set of characters is usually either a literal string or one or more ranges of characters, such as a-z.

    Refer to the tr manual page for more details.

    To translate one set to another, use the following syntax:

    tr SET1 SET2
    

    For instance, to replace lowercase characters with their uppercase equivalent, you can use the following:

    tr "a-z" "A-Z"
    

    To delete a set of characters, use the -d flag.

    tr -d SET
    

    To delete the complement of a set of characters (i.e. everything except the set), use -dc .

    tr -dc SET
    
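    For instance, to keep only the digits from a line (sample input invented; note that -dc also deletes the trailing newline, since it is not in the set):

```shell
printf 'order #42, qty 7\n' | tr -dc '0-9'
echo    # restore the newline that tr -dc removed
```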
    Conclusion

    There is plenty to learn when it comes to the Linux command line. Hopefully, the above commands can help you better deal with text on the command line.

    Continued


    [Jan 29, 2019] hstr -- Bash and zsh shell history suggest box - easily view, navigate, search and manage your command history

    This is a quite useful command. An RPM exists for CentOS 7; on other versions you need to build it from source.
    Nov 17, 2018 | dvorka.github.io

    View on GitHub

    Configuration

    Get the most out of HSTR by configuring it with:

    hstr --show-configuration >> ~/.bashrc

    Run hstr --show-configuration to determine what will be appended to your Bash profile. Don't forget to source ~/.bashrc to apply changes.


    For more configuration option details, please refer to the HSTR documentation; check also the configuration examples.

    Binding HSTR to Keyboard Shortcut

    Bash uses Emacs style keyboard shortcuts by default. There is also Vi mode. Find out how to bind HSTR to a keyboard shortcut based on the style you prefer below.

    Check your active Bash keymap with:

    bind -v | grep editing-mode
    bind -v | grep keymap

    To determine the character sequence emitted by a pressed key in the terminal, type Ctrl-v and then press the key. Check your current bindings using:

    bind -S
    Bash Emacs Keymap (default)

    Bind HSTR to a Bash key e.g. to Ctrl-r :

    bind '"\C-r": "\C-ahstr -- \C-j"'
    

    or Ctrl-Alt-r :

    bind '"\e\C-r":"\C-ahstr -- \C-j"'
    

    or Ctrl-F12 :

    bind '"\e[24;5~":"\C-ahstr -- \C-j"'
    

    Bind HSTR to Ctrl-r only if the shell is interactive:

    if [[ $- =~ .*i.* ]]; then bind '"\C-r": "\C-a hstr -- \C-j"'; fi
    

    You can also bind other HSTR commands, such as --kill-last-command :

    if [[ $- =~ .*i.* ]]; then bind '"\C-xk": "\C-a hstr -k \C-j"'; fi
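    The $- test above works because $- holds the shell's active option flags and contains i only in interactive shells. A more portable sketch of the same guard uses case instead of Bash's =~ (the echo lines stand in for the bind commands):

```shell
# Run bind only in interactive shells; harmless no-op in scripts
case $- in
  *i*) echo "interactive: would run bind here" ;;
  *)   echo "non-interactive: skipping bind" ;;
esac
```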
    
    Bash Vim Keymap

    Bind HSTR to a Bash key e.g. to Ctrl-r :

    bind '"\C-r": "\e0ihstr -- \C-j"'
    
    Zsh Emacs Keymap

    Bind HSTR to a zsh key e.g. to Ctrl-r :

    bindkey -s "\C-r" "\eqhstr --\n"
    
    Alias

    To make running hstr from the command line even easier, define an alias in your ~/.bashrc :

    alias hh=hstr

    Don't forget to source ~/.bashrc to be able to use the hh command.

    Colors

    Let HSTR use colors:

    export HSTR_CONFIG=hicolor
    

    or ensure black and white mode:

    export HSTR_CONFIG=monochromatic
    
    Default History View

    To show normal history by default (instead of the metrics-based view, which is the default), use:

    export HSTR_CONFIG=raw-history-view
    

    To show favorite commands as the default view, use:

    export HSTR_CONFIG=favorites-view
    
    Filtering

    To use regular-expression-based matching:

    export HSTR_CONFIG=regexp-matching
    

    To use substring-based matching:

    export HSTR_CONFIG=substring-matching
    

    To use keyword matching, i.e. substrings whose order doesn't matter (the default):

    export HSTR_CONFIG=keywords-matching
    

    Make search case sensitive (insensitive by default):

    export HSTR_CONFIG=case-sensitive
    

    Keep duplicates in raw-history-view (duplicate commands are discarded by default):

    export HSTR_CONFIG=duplicates
    
    Static favorites

    The last selected favorite command is moved to the head of the favorites list by default. If you want to disable this behavior and keep the favorites list static, use the following configuration:

    export HSTR_CONFIG=static-favorites
    
    Skip favorites comments

    If you don't want to show lines starting with # (comments) among favorites, then use the following configuration:

    export HSTR_CONFIG=skip-favorites-comments
    
    Blacklist

    Skip selected commands when processing history, i.e. make sure that these commands are not shown in any view:

    export HSTR_CONFIG=blacklist
    

    Commands to be skipped are stored in the ~/.hstr_blacklist file, one per line, with a trailing empty line. For instance:

    cd
    my-private-command
    ls
    ll
    
    Confirm on Delete

    Do not prompt for confirmation when deleting history items:

    export HSTR_CONFIG=no-confirm
    
    Verbosity

    Show a message when deleting the last command from history:

    export HSTR_CONFIG=verbose-kill
    

    Show warnings:

    export HSTR_CONFIG=warning
    

    Show debug messages:

    export HSTR_CONFIG=debug
    
    Bash History Settings

    Use the following Bash settings to get the most out of HSTR.

    Increase the size of the history maintained by Bash. The variables defined below increase the number of history items and the history file size (the default value is 500):

    export HISTFILESIZE=10000
    export HISTSIZE=${HISTFILESIZE}
    

    Ensure syncing (flushing and reloading) of .bash_history with in-memory history:

    export PROMPT_COMMAND="history -a; history -n; ${PROMPT_COMMAND}"
    

    Force appending of in-memory history to .bash_history (instead of overwriting):

    shopt -s histappend
    

    Use leading space to hide commands from history:

    export HISTCONTROL=ignorespace
    

    This is suitable for sensitive information like passwords.

    zsh History Settings

    If you use zsh , set HISTFILE environment variable in ~/.zshrc :

    export HISTFILE=~/.zsh_history
    Examples

    More colors with case sensitive search of history:

    export HSTR_CONFIG=hicolor,case-sensitive

    Favorite commands view in black and white with prompt at the bottom of the screen:

    export HSTR_CONFIG=favorites-view,prompt-bottom

    Keywords based search in colors with debug mode verbosity:

    export HSTR_CONFIG=keywords-matching,hicolor,debug

    [Nov 17, 2018] hh command man page

    hh was later renamed to hstr.
    Notable quotes:
    "... By default it parses .bash-history file that is filtered as you type a command substring. ..."
    "... Favorite and frequently used commands can be bookmarked ..."
    Nov 17, 2018 | www.mankier.com

    hh -- easily view, navigate, sort and use your command history with shell history suggest box.

    Synopsis

    hh [option] [arg1] [arg2]...
    hstr [option] [arg1] [arg2]...

    Description

    hh uses shell history to provide suggest-box-like functionality for commands used in the past. By default it parses the .bash_history file that is filtered as you type a command substring. Commands are not just filtered, but also ordered by a ranking algorithm that considers the number of occurrences, length, and timestamp. Favorite and frequently used commands can be bookmarked. In addition, hh allows removal of commands from history - for instance commands with a typo or with sensitive content.

    Options
    -h --help
    Show help
    -n --non-interactive
    Print filtered history on standard output and exit
    -f --favorites
    Show favorites view immediately
    -s --show-configuration
    Show configuration that can be added to ~/.bashrc
    -b --show-blacklist
    Show blacklist of commands to be filtered out before history processing
    -V --version
    Show version information
    Keys
    pattern
    Type to filter shell history.
    Ctrl-e
    Toggle regular expression and substring search.
    Ctrl-t
    Toggle case sensitive search.
    Ctrl-/ , Ctrl-7
    Rotate the view between history as provided by Bash, ranked history ordered by the number of occurrences/length/timestamp, and favorites.
    Ctrl-f
    Add currently selected command to favorites.
    Ctrl-l
    Make search pattern lowercase or uppercase.
    Ctrl-r , UP arrow, DOWN arrow, Ctrl-n , Ctrl-p
    Navigate in the history list.
    TAB , RIGHT arrow
    Choose the currently selected item for completion and let the user edit it on the command prompt.
    LEFT arrow
    Choose the currently selected item for completion and let the user edit it in an editor (fix command).
    ENTER
    Choose currently selected item for completion and execute it.
    DEL
    Remove currently selected item from the shell history.
    BACKSPACE , Ctrl-h
    Delete last pattern character.
    Ctrl-u , Ctrl-w
    Delete pattern and search again.
    Ctrl-x
    Write changes to shell history and exit.
    Ctrl-g
    Exit with empty prompt.
    Environment Variables

    hh defines the following environment variables:

    HH_CONFIG
    Configuration options:

    hicolor
    Get more colors with this option (default is monochromatic).

    monochromatic
    Ensure black and white view.

    prompt-bottom
    Show prompt at the bottom of the screen (default is prompt at the top).

    regexp
    Filter command history using regular expressions (substring match is default)

    substring
    Filter command history using substring.

    keywords
    Filter command history using keywords - an item matches if it contains all keywords of the pattern, in any order.

    casesensitive
    Make history filtering case sensitive (it's case insensitive by default).

    rawhistory
    Show normal history as a default view (metric-based view is shown otherwise).

    favorites
    Show favorites as a default view (metric-based view is shown otherwise).

    duplicates
    Show duplicates in rawhistory (duplicates are discarded by default).

    blacklist
    Load list of commands to skip when processing history from ~/.hh_blacklist (built-in blacklist used otherwise).

    big-keys-skip
    Skip big history entries i.e. very long lines (default).

    big-keys-floor
    Use different sorting slot for big keys when building metrics-based view (big keys are skipped by default).

    big-keys-exit
    Exit (fail) on presence of a big key in history (big keys are skipped by default).

    warning
    Show warning.

    debug
    Show debug information.

    Example:
    export HH_CONFIG=hicolor,regexp,rawhistory

    HH_PROMPT
    Change prompt string which is user@host$ by default.

    Example:
    export HH_PROMPT="$ "

    Files
    ~/.hh_favorites
    Bookmarked favorite commands.
    ~/.hh_blacklist
    Command blacklist.
    Bash Configuration

    Optionally add the following lines to ~/.bashrc:

    export HH_CONFIG=hicolor         # get more colors
    shopt -s histappend              # append new history items to .bash_history
    export HISTCONTROL=ignorespace   # leading space hides commands from history
    export HISTFILESIZE=10000        # increase history file size (default is 500)
    export HISTSIZE=${HISTFILESIZE}  # increase history size (default is 500)
    export PROMPT_COMMAND="history -a; history -n; ${PROMPT_COMMAND}"
    # if this is interactive shell, then bind hh to Ctrl-r (for Vi mode check doc)
    if [[ $- =~ .*i.* ]]; then bind '"\C-r": "\C-a hh -- \C-j"'; fi
    

    The prompt command ensures synchronization of the history between Bash memory and the history file.

    ZSH Configuration

    Optionally add the following lines to ~/.zshrc:

    export HISTFILE=~/.zsh_history   # ensure history file visibility
    export HH_CONFIG=hicolor         # get more colors
    bindkey -s "\C-r" "\eqhh\n"  # bind hh to Ctrl-r (for Vi mode check doc, experiment with --)
    
    Examples
    hh git
    Start `hh` and show only history items containing 'git'.
    hh --non-interactive git
    Print history items containing 'git' to standard output and exit.
    hh --show-configuration >> ~/.bashrc
    Append default hh configuration to your Bash profile.
    hh --show-blacklist
    Show blacklist configured for history processing.
    Author

    Written by Martin Dvorak <[email protected]>

    Bugs

    Report bugs to https://github.com/dvorka/hstr/issues

    See Also

    history(1), bash(1), zsh(1)

    Referenced By

    The man page hstr(1) is an alias of hh(1).

    [Oct 10, 2018] Bash History Display Date And Time For Each Command

    Oct 10, 2018 | www.cyberciti.biz
    1. Abhijeet Vaidya says: March 11, 2010 at 11:41 am End single quote is missing.
      Correct command is:
      echo 'export HISTTIMEFORMAT="%d/%m/%y %T "' >> ~/.bash_profile 
    2. izaak says: March 12, 2010 at 11:06 am I would also add
      $ echo 'export HISTSIZE=10000' >> ~/.bash_profile

      It's really useful, I think.

    3. Dariusz says: March 12, 2010 at 2:31 pm you can add it to /etc/profile so it is available to all users. I also add:
      # Make sure all terminals save history
      shopt -s histappend histreedit histverify
      shopt -s no_empty_cmd_completion # bash>=2.04 only

      # Whenever displaying the prompt, write the previous line to disk:

      PROMPT_COMMAND='history -a'

      #Use GREP color features by default: This will highlight the matched words / regexes
      export GREP_OPTIONS='--color=auto'
      export GREP_COLOR='1;37;41'

    4. Babar Haq says: March 15, 2010 at 6:25 am Good tip. We have multiple users connecting as root using ssh and running different commands. Is there a way to log the IP that command was run from?
      Thanks in advance.
      1. Anthony says: August 21, 2014 at 9:01 pm Just for anyone who might still find this thread (like I did today):

        export HISTTIMEFORMAT="%F %T : $(echo $SSH_CONNECTION | cut -d\ -f1) : "

        will give you the time format, plus the IP address culled from the ssh_connection environment variable (thanks for pointing that out, Cadrian, I never knew about that before), all right there in your history output.

        You could even add in $(whoami)@ right to get if you like (although if everyone's logging in with the root account that's not helpful).
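        A sketch of the extraction above (the SSH_CONNECTION value below is a sample; in a real session sshd sets the variable for you):

```shell
SSH_CONNECTION='192.168.78.22 42387 192.168.36.76 22'
# First field of SSH_CONNECTION is the client IP
client_ip=$(echo "$SSH_CONNECTION" | cut -d' ' -f1)
export HISTTIMEFORMAT="%F %T : $client_ip : "
echo "$HISTTIMEFORMAT"
```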

    5. cadrian says: March 16, 2010 at 5:55 pm Yup, you can export one of these

      env | grep SSH
      SSH_CLIENT=192.168.78.22 42387 22
      SSH_TTY=/dev/pts/0
      SSH_CONNECTION=192.168.78.22 42387 192.168.36.76 22

      As their bash history filename

      set |grep -i hist
      HISTCONTROL=ignoreboth
      HISTFILE=/home/cadrian/.bash_history
      HISTFILESIZE=1000000000
      HISTSIZE=10000000

      So in the profile you can do something like HISTFILE=/root/.bash_history_$(echo $SSH_CONNECTION| cut -d\ -f1)

    6. TSI says: March 21, 2010 at 10:29 am bash 4 can syslog every command but AFAIK you have to recompile it (check file config-top.h). See the news file of bash: http://tiswww.case.edu/php/chet/bash/NEWS
      If you want to safely export history of your luser, you can ssl-syslog them to a central syslog server.
    7. Dinesh Jadhav says: November 12, 2010 at 11:00 am This is good command, It helps me a lot.
    8. Indie says: September 19, 2011 at 11:41 am You only need to use
      export HISTTIMEFORMAT='%F %T '

      in your .bash_profile

    9. lalit jain says: October 3, 2011 at 9:58 am -- show history with date & time

      # HISTTIMEFORMAT='%c '
      #history

    10. Sohail says: January 13, 2012 at 7:05 am Hi
      Nice trick but unfortunately, the commands which were executed in the past few days also are carrying the current day's (today's) timestamp.

      Please advice.

      Regards


        1. Raymond says: March 15, 2012 at 9:08 am Hi Sohail,

          Yes, that will be the behavior of the system, since you have just enabled the HISTTIMEFORMAT feature on that day. In other words, the system can't recall or record the commands which were entered prior to enabling this feature, thus it will just reflect the current day and time in the printed output (upon execution of "history"). Hope this answers your concern.

          Thanks!

    11. Sohail says: February 24, 2012 at 6:45 am Hi

      The command only lists the current date (Today) even for those commands which were executed on earlier days.

      Any solutions ?

      Regards

    12. nitiratna nikalje says: August 24, 2012 at 5:24 pm hi vivek.do u know any openings for freshers in linux field? I m doing rhce course from rajiv banergy. My samba,nfs-nis,dhcp,telnet,ftp,http,ssh,squid,cron,quota and system administration is over.iptables ,sendmail and dns is remaining.

      -9029917299(Nitiratna)

    13. JMathew says: August 26, 2012 at 10:51 pm Hi,

      Is there any way to log the username along with the command which was typed?

      Thanks in Advance

    14. suresh says: May 22, 2013 at 1:42 pm How can I get the full command along with date and path, as we get in the history command?
    15. rajesh says: December 6, 2013 at 5:56 am Thanks it worked..
    16. Krishan says: February 7, 2014 at 6:18 am The command is not working properly. It is displaying today's date and time for all the commands, whereas I ran some of the commands three days before.

      How come it is displaying today's date?

    17. PR says: April 29, 2014 at 5:18 pm Hi..

      I want to collect the history of a particular user every day and send it by email. I wrote the script below.
      For collecting everyday history by time, shall I edit the .profile file of that user?
      echo 'export HISTTIMEFORMAT="%d/%m/%y %T "' >> ~/.bash_profile
      Script:

      #!/bin/bash
      #This script sends email of particular user
      history >/tmp/history
      if [ -s /tmp/history ]
      then
             mailx -s "history 29042014"  </tmp/history
                 fi
      rm /tmp/history
      #END OF THE SCRIPT
      

      Can anyone suggest a better way to collect a particular user's history every day?

    19. lefty.crupps says: October 24, 2014 at 7:11 pm Love it, but using the ISO date format is always recommended (YYYY-MM-DD), just as every other sorted group goes from largest sorting (Year) to smallest sorting (day)
      https://en.wikipedia.org/wiki/ISO_8601#Calendar_dates

      In that case, myne looks like this:
      echo 'export HISTTIMEFORMAT="%Y-%m-%d %T "' >> ~/.bashrc

      Thanks for the tip!

    20. Vanathu says: October 30, 2014 at 1:01 am it shows only the current date for all the command history
      1. lefty.crupps says: October 30, 2014 at 2:08 am it's marking all of your current history with today's date. Try checking again in a few days.
    21. tinu says: October 14, 2015 at 3:30 pm Hi All,

      I Have enabled my history with the command given :
      echo 'export HISTTIMEFORMAT="%d/%m/%y %T "' >> ~/.bash_profile

      I need to know how I can also add the IPs from which the commands were issued on the system.

    [Nov 28, 2017] Sometimes the Old Ways Are Best by Brian Kernighan

    Notable quotes:
    "... Sometimes the old ways are best, and they're certainly worth knowing well ..."
    Nov 01, 2008 | IEEE Software, pp.18-19

    As I write this column, I'm in the middle of two summer projects; with luck, they'll both be finished by the time you read it.

    ... ... ...

    There has surely been much progress in tools over the 25 years that IEEE Software has been around, and I wouldn't want to go back in time.

    But the tools I use today are mostly the same old ones: grep, diff, sort, awk, and friends. This might well mean that I'm a dinosaur stuck in the past.

    On the other hand, when it comes to doing simple things quickly, I can often have the job done while experts are still waiting for their IDE to start up. Sometimes the old ways are best, and they're certainly worth knowing well.

    Recommended Links

    Google matched content

    Softpanorama Recommended

    Top articles

    [May 28, 2021] Microsoft Launches personal version of Teams with free all-day video calling Published on May 16, 2021 | slashdot.org

    [May 10, 2021] The Tilde Text Editor Published on May 10, 2021 | os.ghalkes.nl

    [Dec 29, 2020] Migrating from CentOS to Oracle Linux: a brief report of experience (Le blog technique de Microlinux) Published on Dec 30, 2020 | blog.microlinux.fr

    [Dec 28, 2020] Red Hat interpretation of the CentOS 8 fiasco Published on Dec 28, 2020 | blog.centos.org

    [Nov 22, 2020] Programmable editor as sysadmin tool Published on Oct 05, 2020 | perlmonks.org

    [Jul 07, 2020] The Missing Readline Primer by Ian Miell Published on Jul 07, 2020 | zwischenzugs.com

    [Jul 05, 2020] Learn Bash the Hard Way by Ian Miell [Leanpub PDF-iPad-Kindle] Published on Jul 05, 2020 | leanpub.com

    [Jul 04, 2020] Eleven bash Tips You Might Want to Know by Ian Miell Published on Jul 04, 2020 | zwischenzugs.com

    [Jul 02, 2020] 7 Bash history shortcuts you will actually use by Ian Miell Published on Oct 02, 2019 | opensource.com

    [Apr 08, 2020] How to Use rsync and scp Commands in Reverse Mode on Linux Published on Apr 08, 2020 | www.2daygeek.com

    [Mar 05, 2020] Cloud computing More costly, complicated and frustrating than expected by Daphne Leprince-Ringuet Published on Feb 27, 2020 | www.zdnet.com

    Oldies But Goodies

    Sites

    Please visit nixCraft site. It has material well worth your visit.

    Dr. Nikolai Bezroukov

    Spying on your search engines ;-)

    Bing sometimes beat Google in Unix searches

    Directories

    Professional societies:

    Portals and collections of links

    Forums

    Other E-books

    LDP e-books

    [Apr 21, 1999] Linux Administration Made Easy by Steve Frampton, < [email protected]> v0.99u.01 (PRE-RELEASE), 21 April 1999. A new LDP book.

    The Network Administrators' Guide by Olaf Kirch


    E-books, Courses, Tutorials

    Online Libraries : Mark Burgess : USAIL : Digital Unix System Administration e-book : LDP e-books : Other e-books


    Recommended Articles

    Burnout, and Other Social Issues

    Extreme cult

    The FreeBSD Diary -- System tools - toys I have found -- short discussion of last, swapinfo, systat, tops and z-tools.

    Tom Limoncelli's Published Papers

    Principles of system administration - Table of Contents

    It looks like the original USAIL site disappeared, but some mirrors still exist: USAIL Unix system administration independent learning



    Etc

    Society

    Groupthink : Two Party System as Polyarchy : Corruption of Regulators : Bureaucracies : Understanding Micromanagers and Control Freaks : Toxic Managers :   Harvard Mafia : Diplomatic Communication : Surviving a Bad Performance Review : Insufficient Retirement Funds as Immanent Problem of Neoliberal Regime : PseudoScience : Who Rules America : Neoliberalism  : The Iron Law of Oligarchy : Libertarian Philosophy

    Quotes

    War and Peace : Skeptical Finance : John Kenneth Galbraith : Talleyrand : Oscar Wilde : Otto Von Bismarck : Keynes : George Carlin : Skeptics : Propaganda : SE quotes : Language Design and Programming Quotes : Random IT-related quotes : Somerset Maugham : Marcus Aurelius : Kurt Vonnegut : Eric Hoffer : Winston Churchill : Napoleon Bonaparte : Ambrose Bierce : Bernard Shaw : Mark Twain Quotes

    Bulletin:

    Vol 25, No.12 (December, 2013) Rational Fools vs. Efficient Crooks The efficient markets hypothesis : Political Skeptic Bulletin, 2013 : Unemployment Bulletin, 2010 :  Vol 23, No.10 (October, 2011) An observation about corporate security departments : Slightly Skeptical Euromaydan Chronicles, June 2014 : Greenspan legacy bulletin, 2008 : Vol 25, No.10 (October, 2013) Cryptolocker Trojan (Win32/Crilock.A) : Vol 25, No.08 (August, 2013) Cloud providers as intelligence collection hubs : Financial Humor Bulletin, 2010 : Inequality Bulletin, 2009 : Financial Humor Bulletin, 2008 : Copyleft Problems Bulletin, 2004 : Financial Humor Bulletin, 2011 : Energy Bulletin, 2010 : Malware Protection Bulletin, 2010 : Vol 26, No.1 (January, 2013) Object-Oriented Cult : Political Skeptic Bulletin, 2011 : Vol 23, No.11 (November, 2011) Softpanorama classification of sysadmin horror stories : Vol 25, No.05 (May, 2013) Corporate bullshit as a communication method  : Vol 25, No.06 (June, 2013) A Note on the Relationship of Brooks Law and Conway Law

    History:

    Fifty glorious years (1950-2000): the triumph of the US computer engineering : Donald Knuth : TAoCP and its Influence of Computer Science : Richard Stallman : Linus Torvalds : Larry Wall : John K. Ousterhout : CTSS : Multix OS Unix History : Unix shell history : VI editor : History of pipes concept : Solaris : MS DOS : Programming Languages History : PL/1 : Simula 67 : C : History of GCC development : Scripting Languages : Perl history : OS History : Mail : DNS : SSH : CPU Instruction Sets : SPARC systems 1987-2006 : Norton Commander : Norton Utilities : Norton Ghost : Frontpage history : Malware Defense History : GNU Screen : OSS early history

    Classic books:

    The Peter Principle : Parkinson Law : 1984 : The Mythical Man-Month : How to Solve It by George Polya : The Art of Computer Programming : The Elements of Programming Style : The Unix Haters Handbook : The Jargon file : The True Believer : Programming Pearls : The Good Soldier Svejk : The Power Elite

    Most popular humor pages:

    Manifest of the Softpanorama IT Slacker Society : Ten Commandments of the IT Slackers Society : Computer Humor Collection : BSD Logo Story : The Cuckoo's Egg : IT Slang : C++ Humor : ARE YOU A BBS ADDICT? : The Perl Purity Test : Object oriented programmers of all nations : Financial Humor : Financial Humor Bulletin, 2008 : Financial Humor Bulletin, 2010 : The Most Comprehensive Collection of Editor-related Humor : Programming Language Humor : Goldman Sachs related humor : Greenspan humor : C Humor : Scripting Humor : Real Programmers Humor : Web Humor : GPL-related Humor : OFM Humor : Politically Incorrect Humor : IDS Humor : "Linux Sucks" Humor : Russian Musical Humor : Best Russian Programmer Humor : Microsoft plans to buy Catholic Church : Richard Stallman Related Humor : Admin Humor : Perl-related Humor : Linus Torvalds Related humor : PseudoScience Related Humor : Networking Humor : Shell Humor : Financial Humor Bulletin, 2011 : Financial Humor Bulletin, 2012 : Financial Humor Bulletin, 2013 : Java Humor : Software Engineering Humor : Sun Solaris Related Humor : Education Humor : IBM Humor : Assembler-related Humor : VIM Humor : Computer Viruses Humor : Bright tomorrow is rescheduled to a day after tomorrow : Classic Computer Humor

    The Last but not Least Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand ~Archibald Putt. Ph.D


    Copyright 1996-2021 by Softpanorama Society. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

    FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

    This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...

    You can use PayPal to buy a cup of coffee for the authors of this site

    Disclaimer:

    The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the Softpanorama society. We do not warrant the correctness of the information provided or its fitness for any purpose. The site uses AdSense, so you need to be aware of the Google privacy policy. If you do not want to be tracked by Google, please disable Javascript for this site. This site is perfectly usable without Javascript.

    Last updated: July 21, 2021