The KISS rule can be expanded as: Keep It Simple, Sysadmin ;-)
This page is written as a protest against overcomplexity and the bizarre data center atmosphere typical of "semi-outsourced" or fully
outsourced datacenters ;-). Unix/Linux sysadmins are being killed by the overcomplexity of the environment, new
"for profit" technocults like DevOps, and outsourcing. Large swaths of
Linux knowledge (and many excellent books) were made obsolete by Red Hat with the introduction of
systemd. Especially affected are the older, most
experienced members of the team, who have a unique set of organizational knowledge as well as career specifics which allowed
them to watch the development of Linux almost from version 0.92.
System administration is still a unique area where people with the ability to program can display their own creativity with
relative ease and can still enjoy the "old style" atmosphere of software development, when you yourself write a specification, implement
it, test the program and then use it in daily work. This is a very exciting, unique opportunity that no DevOps can ever provide.
But the conditions are getting worse and worse. That's why an increasing number of sysadmins are far from excited about working in those positions, or outright want to quit
the field (or, at least, work 4 days a week). And that includes sysadmins who have tremendous speed and capability to process
and learn new information. Even for them "enough is enough". The answer is different for each individual sysadmin, but
usually it is some variation of the following themes:
Too rapid a pace of change, with a lot of "change for the sake of change" often serving as a smokescreen
for outsourcing efforts (VMware yesterday, Azure today, Amazon cloud tomorrow, etc.)
Excessive automation can be a problem. It increases the number of layers between the fundamental process and the sysadmin, and
thus makes troubleshooting much harder. Moreover, it often does not produce tangible benefits in comparison with simpler
tools while dramatically increasing the level of complexity of the environment. See Unix Configuration
Management Tools for a deeper discussion of this issue.
Job insecurity due to outsourcing/offshoring -- constant pressure to cut headcount in the name of "efficiency", which
in reality is more connected with the size of top brass bonuses than with anything related to the functioning of the IT datacenter. Sysadmins over 50 are an especially vulnerable category here, and if they are laid off they have almost no chance of getting back into the
IT workforce at the previous level of salary/benefits. Often the only job they can find is a job at Home Depot or a similar
retail outlet. See Over 50 and unemployed
A back-breaking level of overcomplexity and bizarre tech decisions crippling the data center (aka crapification).
"Potemkin village culture" often prevails in evaluation of software in large US corporations. The surface shine is more important than the substance. The marketing brochures and manuals are no
different from mainstream news media stories in the level of BS they spew. IBM is especially guilty (look how they marketed
IBM Watson;
as Oren Etzioni, CEO of the Allen Institute for AI noted "the only intelligent thing about Watson was IBM PR department
[push]").
Bureaucratization/fossilization of the IT environment of large companies.
That includes using "Performance Reviews"
(the IT variant of waterboarding ;-) for the enforcement of management policies, priorities, whims, etc. See
Office Space (1999) - IMDb for a humorous take on IT culture.
That creates alienation from the company (as it should). One can think of the modern corporate data center as an organization
where the administration has tremendously more power in the decision-making process and eats up more of the corporate budget,
while the people who do the actual work are increasingly ignored and their share of the budget gradually shrinks. Purchasing of
"non-standard" software or hardware is often so complicated that it is never even attempted, even if the benefits are tangible.
"Neoliberal austerity" (which is
essentially another name for the "war on labor") -- Drastic cost cutting measures at the expense of workforce such as
elimination of external vendor training, crapification of benefits, limitation of business trips and enforcing useless or
outright harmful for business "new" products instead of "tried and true" old with the same function. They
are often accompanied by the new cultural obsession with "character" (as in "he/she has a right character" -- which in "Neoliberal
speak" means he/she is a toothless conformist ;-), glorification of groupthink, and the intensification of
surveillance.
What happened to the old "sysadmin" of just a few years ago? We've split what used to be the sysadmin into application
teams, server teams, storage teams, and network teams. There were often at least a few people, the holders
of knowledge, who knew how everything worked, and I mean everything. Every application, every piece of network gear,
and how every server was configured -- these people could save a business in times of disaster.
Now look at what we've done. Knowledge is so decentralized we must invent new roles to act as liaisons
between all the IT groups.
Architects now hold much of the high-level "how it works" knowledge, but without knowing how any one piece actually works.
In organizations with more than a few hundred IT staff and developers, it becomes nearly impossible for one person to do and know
everything. This movement toward specializing in individual areas seems almost natural. That, however, does not provide a free ticket
for people to turn a blind eye.
Specialization
You know the story: Company installs new application, nobody understands it yet, so an expert is hired. Often, the person with
a certification in using the new application only really knows how to run that application. Perhaps they aren't interested in
learning anything else, because their skill is in high demand right now. And besides, everything else in the infrastructure is
run by people who specialize in those elements. Everything is taken care of.
Except, how do these teams communicate when changes need to take place? Are the storage administrators teaching
the Windows administrators about storage multipathing; or worse, logging in and setting it up themselves because it's faster for the storage
gurus to do it? A fundamental level of knowledge is often lacking, which makes it very difficult for teams to brainstorm
about new ways to evolve IT services. The business environment has made it OK for IT staffers to specialize and only learn one
thing.
If you hire someone certified in the application, operating system, or network vendor you use, that is precisely what you get.
Certifications may be a nice filter to quickly identify who has direct knowledge in the area you're hiring for, but often they
indicate specialization or compensation for lack of experience.
Resource Competition
Does your IT department function as a unit? Even 20-person IT shops have turf wars,
so the answer is very likely, "no." As teams are split into more and more distinct operating units, grouping occurs. One IT budget
gets split between all these groups. Often each group will have a manager who pitches his needs to upper management in hopes they
will realize how important the team is.
The "us vs. them" mentality manifests itself at all levels, and it's reinforced by management
having to define each team's worth in the form of a budget. One strategy is to illustrate a doomsday scenario.
If you paint a bleak enough picture, you may get more funding -- but only if you are careful to show that the failings are due
to lack of capital resources, not management or people. A manager of another group may explain that they are not receiving the
correct level of service, so they need to duplicate the efforts of another group and just implement something themselves. On and
on, the arguments continue.
Most often, I've seen competition between server groups result in horribly inefficient uses of hardware. For example,
what happens in your organization when one team needs more server hardware? Assume that another team has five unused servers sitting
in a blade chassis. Does the answer change? No, it does not. Even in test environments, sharing doesn't often happen between IT
groups.
With virtualization, some aspects of resource competition get better and some remain the same. When first implemented, most
groups will be running their own type of virtualization for their platform. The next step, I've most often seen, is for test servers
to get virtualized. If a new group is formed to manage the virtualization infrastructure, virtual machines can be allocated to
various application and server teams from a central pool and everyone is now sharing. Or, they begin sharing and then demand their
own physical hardware to be isolated from others' resource-hungry utilization. This is nonetheless a step in the right direction.
Auto migration and guaranteed resource policies can go a long way toward making shared infrastructure, even between competing
groups, a viable option.
Blamestorming
The most damaging side effect of splitting into too many distinct IT groups is the reinforcement of an "us versus
them" mentality. Aside from the notion that specialization creates a lack of knowledge, blamestorming is what this article is
really about. When a project is delayed, it is all too easy to blame another group.
The SAN people didn't allocate storage on time, so another team was delayed. That is the timeline of the project, so all work
halted until that hiccup was resolved. Having someone else to blame when things get delayed makes it all too easy to simply stop
working for a while.
More related to the initial points at the beginning of this article, perhaps, is the blamestorm that happens after a system
outage.
Say an ERP system becomes unresponsive a few times throughout the day. The application team says it's just slowing
down, and they don't know why. The network team says everything is fine. The server team says the application is "blocking on
IO," which means it's a SAN issue. The SAN team say there is nothing wrong, and other applications on the same devices are fine.
You've ran through nearly every team, but without an answer still. The SAN people don't have access to the application servers
to help diagnose the problem. The server team doesn't even know how the application runs.
See the problem? Specialized teams are distinct and by nature adversarial. Specialized
staffers often relegate themselves into a niche knowing that as long as they continue working at large enough companies, "someone
else" will take care of all the other pieces.
I unfortunately don't have an answer to this problem. Maybe rotating employees between departments will help. They
gain knowledge and also get to know other people, which should lessen the propensity to view them as outsiders.
The tragic part of the current environment is that it is like shifting sands. And it is not only due to the "natural process of
crapification of operating systems" in which the OS gradually loses its architectural integrity. The pace of change is just too fast
to adapt for mere humans. And most of it represents "change for the sake of change" not some valuable improvement or extension
of capabilities.
If you are a sysadmin who writes his own
scripts, you are writing on a sandy beach, spending a lot of time thinking over and debugging your scripts, which raises your productivity and
diminishes the number of possible errors. But the next OS version or organizational change wipes out a considerable part of your work, and you need to revise your
scripts again. The tale of Sisyphus can now be re-interpreted as a prescient warning about the thankless task of the sysadmin, who must learn
new stuff and maintain his own script library ;-) Sometimes a lot of work is wiped out because the corporate
brass decides to switch to a different flavor of Linux, or adds "yet another flavor" due to a large acquisition. Add to this the inevitable technological changes, and the question arises:
can't you get a more respectable profession, one in which 66% of your knowledge is not replaced within the next ten years? For a
talented and not too old person, staying employed in the sysadmin profession is probably a mistake, or at least a very questionable
decision.
The Balkanization of Linux is also demonstrated by the Babylon Tower of system programming languages (C, C++, Perl, Python, Ruby,
Go, Java, to name a few) and systems that supposedly should help you but mostly do quite the opposite (Puppet, Ansible, Chef, etc.). Add
to this the monitoring infrastructure (say, Nagios) and you definitely have an information overload.
Inadequate training just adds to the stress. First of all, corporations no longer want to pay for it. So you are on your own
and need to do it mostly in your free time, as the workload is substantial in most organizations. Of course the summer "dead season" at
least partially exists, but it is rather short. Using free or low-cost courses if
they are available, or buying your own books and trying to learn new stuff from them, is of course the mark of any good
sysadmin, but it should not be the only source of new knowledge. Communication with colleagues who have a high level of knowledge in
selected areas is as important or even more important. But this is very difficult, as the sysadmin often works in isolation.
Professional groups like Linux user groups exist mostly in the metropolitan areas of large cities. The coronavirus made those groups even
more problematic.
The days when you could travel to a vendor training center for a week
and have a chance to communicate with admins from other organizations (which
probably was the most valuable part of the whole exercise; I can attest that training by Sun (Solaris) and IBM (AIX) in the late
1990s was of really high quality, with highly qualified instructors from whom you could learn a lot outside the main topic of the
course) are long in the
past. Unlike "Trump University", Sun courses could probably have been called "Sun University." Most training is now via the Web, and the chances for face-to-face communication have disappeared.
Also, the stress has shifted from learning "why" to learning "how"; the "why" topics are typically reserved for "advanced" courses.
Add to this the necessity of relearning stuff again and again, when the new technologies/daemons/versions of the OS are either the same as, or
even inferior to, the previous ones, or represent an open scam in which training is just a way to extract money from lemmings (Agile, most of the DevOps
hoopla, etc.). This is the typical neoliberal mentality ("greed is good") implemented in education. There is also a tendency to treat virtual machines and cloud infrastructure as separate technologies, which require
separate training and a separate set of certifications (AWS, Azure). This is a kind of infantilization of the profession, in which a
person who learned a lot of stuff in the previous 10 years needs to forget it and relearn most of it again and again.
Of course, sysadmins are not the only ones who suffer. Computer scientists also now struggle with the excessive level of
complexity and too quickly shifting sands. Look at the tragedy of Donald Knuth with his lifelong idea to create a comprehensive
monograph for system programmers (The Art of Computer Programming). He was
flattened by the shifting sands and probably will not be able to finish even volume 4 (out of
the seven that were planned) in his lifetime.
Of course, much depends on the evolution of
hardware and the changes it causes, such as the mass introduction of large SSDs, multi-core CPUs and large amounts of RAM.
Nobody is now surprised to see a server with 128GB of RAM, a laptop with 16GB of RAM, or a cellphone with 4GB of RAM and a
2GHz CPU (please note that the IBM PC started with a 1 MB address space (of which only 640KB was available for programs) and a 4.77 MHz (not GHz)
single-core CPU without a floating-point unit). Hardware evolution, while painful, is inevitable, and it changes the
software landscape. Thank God hardware progress has
slowed down recently as it reached the physical limits of the technology (we probably will not see 2-nanometer-lithography-based CPUs or
8GHz CPU clock speeds in our lifetimes), and progress is now mostly measured by the number of cores packed into the same die.
Then there is another set of significant changes, caused not by the progress of hardware (or software) but mainly by fashion
and the desire of certain (and powerful) large corporations to entrench their market position. Such changes are more difficult to accept. It is difficult or even impossible to
predict which technology will become fashionable tomorrow -- for example, how long DevOps will remain in fashion.
Typically such a techno-fashion lasts around a decade. After that it typically fades into oblivion, or is even debunked and its former idols shattered
(the verification craze is a nice example here).
For example, this strange re-invention of the idea of the "glass-walls datacenter" under the banner of DevOps (and old-timers still remember
that IBM datacenters were hated with a passion, and this hate created an additional non-technological incentive for minicomputers and
later for the IBM PC) is characterized by a level of hype usually reserved for women's fashion. Moreover, sometimes it looks to me
like the movie The Devil Wears Prada is a subtle parable about sysadmin work.
Add to this a horrible job market, especially for university graduates and older sysadmins (see
Over 50 and unemployed ), and one probably starts to suspect that the life of the
modern sysadmin is far from paradise. When you read some job descriptions on sites like Monster, Dice or Indeed, you just
ask yourself whether those people really want to hire anybody, or whether such a job posting is just a smokescreen for the job certification of H1B candidates.
The level of detail is often so precise that it is almost impossible to fit the specialization. They do not care about
the level of talent, and they do not want to train a suitable candidate. They want a person who fits 100% from day 1.
Also, positions are often available mostly in places like New York or San Francisco, where both rents and property prices are high and growing while income growth has been stagnant.
The vandalism of Unix performed by Red Hat with RHEL 7 makes the current
environment somewhat unhealthy. It is clear that this was done to enhance Red Hat's marketing position, in the interests of the Red
Hat and IBM brass, not in the interest of the community.
This is a typical Microsoft-style trick which makes dozens of high-quality books written by very talented authors instantly
semi-obsolete. And the question arises whether it makes sense to write any book about RHEL administration other than for a solid advance.
Of course, systemd
generated some backlash, but the position of Red Hat as the Microsoft of Linux allows them to shove their
inferior technical decisions down users' throats. In a way it reminds me of the way Microsoft dealt with Windows 7, replacing it with Windows 10 and
essentially destroying the previous Windows interface ecosystem, putting keyboard users at some disadvantage (while preserving binary compatibility).
Red Hat essentially did the same for server sysadmins.
P.P.S. Here are my notes/reflections on sysadmin problems that often arise in the rather strange (and sometimes pretty toxic) IT departments of large corporations:
A highly relevant quote about the life of a sysadmin: "I appreciate Woody Allen's humor because one of my safety valves is an appreciation for life's absurdities.
His message is that life isn't a funeral march to the grave. It's a polka."
Walmart Brings Automation To Regional Distribution Centers BY TYLER DURDEN SUNDAY,
JUL 18, 2021 - 09:00 PM
The progressive press had a field day with "woke" Walmart's highly
publicized February decision to hike wages for 425,000 workers to an average above $15 an
hour. We doubt the obvious follow-up - the ongoing stealthy replacement of many of its minimum
wage workers with machines - will get the same amount of airtime.
As Chain Store
Age reports, Walmart is applying artificial intelligence to the palletizing of products in
its regional distribution centers. I.e., it is replacing thousands of workers with robots.
Since 2017, the discount giant has worked with Symbotic to optimize an automated technology
solution to sort, store, retrieve and pack freight onto pallets in its Brooksville, Fla.,
distribution center. Under Walmart's existing system, product arrives at one of its RDCs and is
either cross-docked or warehoused, while being moved or stored manually. When it's time for the
product to go to a store, a 53-foot trailer is manually packed for transit. After the truck
arrives at a store, associates unload it manually and place the items in the appropriate
places.
Leveraging the Symbotic solution, a complex algorithm determines how to store cases like
puzzle pieces using high-speed mobile robots that operate with a precision that speeds the
intake process and increases the accuracy of freight being stored for future orders. By using
dense modular storage, the solution also expands building capacity.
In addition, by using palletizing robotics to organize and optimize freight, the Symbotic
solution creates custom store- and aisle-ready pallets.
Why is Walmart doing this? Simple: According to CSA, "Walmart expects to save time, limit
out-of-stocks and increasing the speed of stocking and unloading." More importantly, the
company hopes to further cut expenses and remove even more unskilled labor from its supply
chain.
This solution follows tests of similar automated warehouse solutions at a Walmart
consolidation center in Colton, Calif., and perishable grocery distribution center in Shafter,
Calif.
Walmart plans to implement this technology in 25 of its 42 RDCs.
"Though very few Walmart customers will ever see into our warehouses, they'll still be able
to witness an industry-leading change, each time they find a product on shelves," said Joe
Metzger, executive VP of supply chain operations at Walmart U.S. "There may be no way to solve
all the complexities of a global supply chain, but we plan to keep changing the game as we use
technology to transform the way we work and lead our business into the future."
But wait: wasn't this recent rise in wages in real terms being propagandized as a new boom
for the working class in the USA by the MSM until some days ago?
And in the drive-through lane at Checkers near Atlanta, requests for Big Buford burgers and
Mother Cruncher chicken sandwiches may be fielded not by a cashier in a headset, but by a
voice-recognition algorithm.
An increase in automation, especially in service industries, may prove to be an economic
legacy of the pandemic. Businesses from factories to fast-food outlets to hotels turned to
technology last year to keep operations running amid social distancing requirements and
contagion fears. Now the outbreak is ebbing in the United States, but the difficulty in hiring
workers -- at least at the wages that employers are used to paying -- is providing new momentum
for automation.
Technological investments that were made in response to the crisis may contribute to a
post-pandemic productivity boom, allowing for higher wages and faster growth. But some
economists say the latest wave of automation could eliminate jobs and erode bargaining power,
particularly for the lowest-paid workers, in a lasting way.
"Once a job is automated, it's pretty hard to turn back," said Casey Warman, an economist at
Dalhousie University in Nova Scotia who has studied automation in the pandemic.
The trend toward automation predates the pandemic, but it has accelerated at what is proving
to be a critical moment. The rapid reopening of the economy has led to a surge in demand for
waiters, hotel maids, retail sales clerks and other workers in service industries that had cut
their staffs. At the same time, government benefits have allowed many people to be selective in
the jobs they take. Together, those forces have given low-wage workers a rare moment of
leverage, leading to higher pay, more generous benefits and other perks.
Automation threatens to tip the advantage back toward employers, potentially eroding those
gains. A
working paper published by the International Monetary Fund this year predicted that
pandemic-induced automation would increase inequality in coming years, not just in the United
States but around the world.
"Six months ago, all these workers were essential," said Marc Perrone, president of the
United Food and Commercial Workers, a union representing grocery workers. "Everyone was calling
them heroes. Now, they're trying to figure out how to get rid of them."
Checkers, like many fast-food restaurants, experienced a jump in sales when the pandemic
shut down most in-person dining. But finding workers to meet that demand proved difficult -- so
much so that Shana Gonzales, a Checkers franchisee in the Atlanta area, found herself back
behind the cash register three decades after she started working part time at Taco Bell while
in high school.
"We really felt like there has to be another solution," she said.
So Ms. Gonzales contacted Valyant AI, a Colorado-based start-up that makes voice recognition
systems for restaurants. In December, after weeks of setup and testing, Valyant's technology
began taking orders at one of Ms. Gonzales's drive-through lanes. Now customers are greeted by
an automated voice designed to understand their orders -- including modifications and special
requests -- suggest add-ons like fries or a shake, and feed the information directly to the
kitchen and the cashier.
The rollout has been successful enough that Ms. Gonzales is getting ready to expand the
system to her three other restaurants.
"We'll look back and say why didn't we do this sooner," she said.
The push toward automation goes far beyond the restaurant sector. Hotels,
retailers,
manufacturers and other businesses have all accelerated technological investments. In a
survey of nearly 300 global companies by the World Economic Forum last year, 43 percent of
businesses said they expected to reduce their work forces through new uses of
technology.
Some economists see the increased investment as encouraging. For much of the past two
decades, the U.S. economy has struggled with weak productivity growth, leaving workers and
stockholders to compete over their share of the income -- a game that workers tended to lose.
Automation may harm specific workers, but if it makes the economy more productive, that could
be good for workers as a whole, said Katy George, a senior partner at McKinsey, the consulting
firm.
She cited the example of a client in manufacturing who had been pushing his company for
years to embrace augmented-reality technology in its factories. The pandemic finally helped him
win the battle: With air travel off limits, the technology was the only way to bring in an
expert to help troubleshoot issues at a remote plant.
"For the first time, we're seeing that these technologies are both increasing productivity,
lowering cost, but they're also increasing flexibility," she said. "We're starting to see real
momentum building, which is great news for the world, frankly."
Other economists are less sanguine. Daron Acemoglu of the Massachusetts Institute of
Technology said that many of the technological investments had just replaced human labor
without adding much to overall productivity.
In a
recent working paper, Professor Acemoglu and a colleague concluded that "a significant
portion of the rise in U.S. wage inequality over the last four decades has been driven by
automation" -- and he said that trend had almost certainly accelerated in the pandemic.
"If we automated less, we would not actually have generated that much less output but we
would have had a very different trajectory for inequality," Professor Acemoglu said.
Ms. Gonzales, the Checkers franchisee, isn't looking to cut jobs. She said she would hire 30
people if she could find them. And she has raised hourly pay to about $10 for entry-level
workers, from about $9 before the pandemic. Technology, she said, is easing pressure on workers
and speeding up service when restaurants are chronically understaffed.
"Our approach is, this is an assistant for you," she said. "This allows our employee to
really focus" on customers.
Ms. Gonzales acknowledged she could fully staff her restaurants if she offered $14 to $15 an
hour to attract workers. But doing so, she said, would force her to raise prices so much that
she would lose sales -- and automation allows her to take another course.
Rob Carpenter, Valyant's chief executive, noted that at most restaurants, taking
drive-through orders is only part of an employee's responsibilities. Automating that task
doesn't eliminate a job; it makes the job more manageable.
"We're not talking about automating an entire position," he said. "It's just one task within
the restaurant, and it's gnarly, one of the least desirable tasks."
But technology doesn't have to take over all aspects of a job to leave workers worse off. If
automation allows a restaurant that used to require 10 employees a shift to operate with eight
or nine, that will mean fewer jobs in the long run. And even in the short term, the technology
could erode workers' bargaining power.
"Often you displace enough of the tasks in an occupation and suddenly that occupation is no
more," Professor Acemoglu said. "It might kick me out of a job, or if I keep my job I'll get
lower wages."
At some businesses, automation is already affecting the number and type of jobs available.
Meltwich, a restaurant chain that started in Canada and is expanding into the United States,
has embraced a range of technologies to cut back on labor costs. Its grills no longer require
someone to flip burgers -- they grill both sides at once, and need little more than the press
of a button.
"You can pull a less-skilled worker in and have them adapt to our system much easier," said
Ryan Hillis, a Meltwich vice president. "It certainly widens the scope of who you can have
behind that grill."
With more advanced kitchen equipment, software that allows online orders to flow directly to
the restaurant and other technological advances, Meltwich needs only two to three workers on a
shift, rather than three or four, Mr. Hillis said.
Such changes, multiplied across thousands of businesses in dozens of industries, could
significantly change workers' prospects. Professor Warman, the Canadian economist, said
technologies developed for one purpose tend to spread to similar tasks, which could make it
hard for workers harmed by automation to shift to another occupation or industry.
"If a whole sector of labor is hit, then where do those workers go?" Professor Warman said.
Women, and to a lesser degree people of color, are likely to be disproportionately affected, he
added.
The grocery business has long been a source of steady, often unionized jobs for people
without a college degree. But technology is changing the sector. Self-checkout lanes have
reduced the number of cashiers; many stores have simple robots to patrol aisles for spills and
check inventory; and warehouses have become increasingly automated. Kroger in April opened a
375,000-square-foot warehouse with more than 1,000 robots that bag groceries for delivery
customers. The company is even experimenting with delivering groceries by drone.
Other companies in the industry are doing the same. Jennifer Brogan, a spokeswoman for Stop
& Shop, a grocery chain based in New England, said that technology allowed the company to
better serve customers -- and that it was a competitive necessity.
"Competitors and other players in the retail space are developing technologies and
partnerships to reduce their costs and offer improved service and value for customers," she
said. "Stop & Shop needs to do the same."
In 2011, Patrice Thomas took a part-time job in the deli at a Stop & Shop in Norwich,
Conn. A decade later, he manages the store's prepared foods department, earning around $40,000
a year.
Mr. Thomas, 32, said that he wasn't concerned about being replaced by a robot anytime soon,
and that he welcomed technologies making him more productive -- like more powerful ovens for
rotisserie chickens and blast chillers that quickly cool items that must be stored cold.
But he worries about other technologies -- like automated meat slicers -- that seem to
enable grocers to rely on less experienced, lower-paid workers and make it harder to build a
career in the industry.
"The business model we seem to be following is we're pushing toward automation and we're not
investing equally in the worker," he said. "Today it's, 'We want to get these robots in here to
replace you because we feel like you're overpaid and we can get this kid in there and all he
has to do is push this button.'"
Replace man pages with Tealdeer on Linux
Tealdeer is a Rust implementation of tldr, which provides easy-to-understand information about common commands.
21 Jun 2021
Sudeshna Sur (Red Hat, Correspondent)
Man pages were my go-to resource when I started exploring Linux. Certainly,
man is the most frequently used command when a beginner starts getting familiar
with the world of the command line. But man pages, with their extensive lists of options and
arguments, can be hard to decipher, which makes it difficult to understand whatever you wanted
to know. If you want an easier solution with example-based output, I think tldr is the best option.
What's Tealdeer?
Tealdeer is a wonderful
implementation of tldr in Rust. It's a community-driven man page that gives very simple
examples of how commands work. The best part about Tealdeer is that it has virtually every
command you would normally use.
Install Tealdeer
On Linux, you can install Tealdeer from your software repository. For example, on Fedora:
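A minimal sketch of the install step, assuming the package is named tealdeer in your distribution's repositories (as it is in Fedora):
$ sudo dnf install tealdeer
Once installed, querying a command such as tar returns short, example-based entries like the one below.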
[c]reate a compressed archive and write it to a [f]ile, using [a]rchive suffix to
determine the compression program:
tar caf target.tar.xz file1 file2 file3
To control the cache:
$ tldr --update
$ tldr --clear-cache
You can give Tealdeer output some color with the --color option, setting it to
always, auto, or never. The default is
auto, but I like the added context color provides, so I make mine permanent with
this addition to my ~/.bashrc file:
alias tldr='tldr --color always'
Conclusion
The beauty of Tealdeer is you don't need a network connection to use it, except when you're
updating the cache. So, even if you are offline, you can still search for and learn about your
new favorite command. For more information, consult the tool's documentation .
Would you use Tealdeer? Or are you already using it? Let us know what you think in the
comments below.
Step 1: Open the file using the vim editor with the command:
$ vim ostechnix.txt
Step 2: Highlight the lines that you want to comment out. To do so, go to the line you want to comment and move the cursor to the beginning of the line. Press SHIFT+V to highlight the whole line after the cursor. After highlighting the first line, press the UP or DOWN arrow keys or k or j to highlight the other lines one by one.
Here is how the lines will look after highlighting them.
Step 3: After highlighting the lines that you want to comment out, type the following and hit the ENTER key:
:s/^/# /
Please mind the space between # and the last forward slash (/).
Now you will see the selected lines are commented out, i.e. the # symbol is added at the beginning of all lines.
Here, s stands for "substitution". In our case, we substitute the caret symbol ^ (at the beginning of the line) with # (hash). As we all know, we put # in front of a line to comment it out.
Step 4: After commenting the lines, you can type :w to save the changes, or type :wq to save the file and exit.
Let us move on to the next method.
Method 2:
Step 1: Open the file in the vim editor.
$ vim ostechnix.txt
Step 2: Set line numbers by typing the following in the vim editor and hit ENTER.
:set number
Step 3: Then enter the following command:
:1,4s/^/#
In this case, we are commenting out the lines from 1 to 4. Check the following screenshot. The lines from 1 to 4 have been commented out.
Step 4: Finally, unset the line numbers.
:set nonumber
Step 5: To save the changes type :w, or :wq to save the file and exit.
The same procedure can be used for uncommenting the lines in a file. Open the file and set the line numbers as shown in Step 2.
Finally, type the following command and hit ENTER at Step 3:
:1,3s/^#/
After uncommenting the lines, simply remove the line numbers by entering the following command:
:set nonumber
Let us go ahead and see the third method.
Method 3:
This one is similar to Method 2 but slightly different.
Step 1: Open the file in the vim editor.
$ vim ostechnix.txt
Step 2: Set line numbers by typing:
:set number
Step 3: Type the following to comment out the lines.
:1,4s/^/# /
The above command will comment out lines from 1 to 4.
Step 4: Finally, unset the line numbers by typing the following.
:set nonumber
Method 4:
This method is suggested by one of our readers, Mr. Anand Nande, in the comment section below.
Step 1: Open the file in the vim editor:
$ vim ostechnix.txt
Step 2: Go to the line you want to comment. Press Ctrl+V to enter 'Visual block' mode.
Step 3: Press the UP or DOWN arrow or the letter k or j on your keyboard to select all the lines that you want to be commented in your file.
Step 4: Press Shift+i to enter INSERT mode. This will place your cursor on the first line.
Step 5: And then insert # (press Shift+3) before your first line.
Step 6: Finally, press the ESC key. This will insert # on all other selected lines.
As you see in the above screenshot, all other selected lines including the first line are commented out.
Method 5:
This method is suggested by one of our Twitter followers and friend, Mr. Tim Chase. We can even target lines to comment out by regex. In other words, we can comment out all the lines that contain a specific word.
Step 1: Open the file in the vim editor.
$ vim ostechnix.txt
Step 2: Type the following and press the ENTER key:
:g/\Linux/s/^/# /
The above command will comment out all lines that contain the word "Linux". Replace "Linux" with a word of your choice.
As you see in the above output, all the lines have the word "Linux", hence all of them are commented out.
And, that's all for now. I hope this was useful. If you know any method other than the ones given here, please let me know in the
comment section below. I will check and add them to the guide.
What if you needed to execute a specific command again, one which you used a while back? And
you can't remember the first character, but you can remember you used the word "serve".
You can use the up key and keep on pressing up until you find your command. (That could take
some time)
Or, you can enter CTRL + R and type few keywords you used in your last command. Linux will
help locate your command, requiring you to press enter once you found your command. The example
below shows how you can enter CTRL + R and then type "ser" to find the previously run "PHP
artisan serve" command. For sure, this tip will help you speed up your command-line
experience.
You can also use the history command to output all the previously stored commands. The
history command will give a list that is ordered in ascending order relative to when each command was executed.
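A minimal sketch of the same kind of search done non-interactively (the keyword "serve" is just the example used above):
$ history | grep serve
You can then re-run any entry by typing ! followed by its history number, for example !100 for entry 100.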
In Bash scripting, $? prints the exit status. If it returns zero, it means there is no error. If it is non-zero,
then you can conclude the earlier task has some issue.
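A minimal sketch of the kind of script being described here (the directory name is just an example):
#!/bin/bash
# Try to create a directory; mkdir fails if the directory already exists
mkdir /tmp/mydir
# $? holds the exit status of the last command: 0 on success, non-zero on failure
echo $?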
If you run the above script once, it will print 0 because the directory does not yet exist, so the script
creates it successfully. Naturally, you will get a non-zero value if you run the script a second time, as seen below:
$ ./debug.sh
Testing Debudding
+ a=2
+ b=3
+ c=5
+ DEBUG set +x
+ '[' on == on ']'
+ set +x
2 + 3 = 5
Standard error redirection
You can redirect all the system errors to a custom file using standard error, which is denoted by the file descriptor number 2. Execute
it in normal Bash commands, as demonstrated below:
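A minimal sketch (the command and file name are just examples):
$ find / -name "core" 2> errors.log
Here file descriptor 2 (standard error) is redirected to errors.log, so "Permission denied" messages land in the file while normal output still appears on the terminal.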
Most of the time, it is difficult to find the exact line number in scripts. To print the line number with the error, use the PS4
variable (supported with Bash 4.1 or later). Example below:
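A minimal sketch, reusing the small a/b/c script whose trace appears above (the exact PS4 string is just one possible choice):
#!/bin/bash
# Prefix every traced line with the script name and line number
export PS4='+ ${BASH_SOURCE}:${LINENO}: '
set -x
a=2
b=3
c=$((a + b))
echo "$a + $b = $c"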
Even small to medium-sized companies have some sort of governance surrounding server
decommissioning. They might not call it decommissioning but the process usually goes something
like the following:
Send out a notification or multiple notifications of system end-of-life to
stakeholders
Make complete backups of the entire system and its data
Unplug the system from the network but leave the system running (2-week Scream test)
Shutdown and unplug from power but leave the system racked (2-week incubation
period)
Unracking and palletizing or in some cases recommissioning
(techcrunch.com)
Countless popular websites including Reddit, Spotify, Twitch, Stack Overflow,
GitHub, gov.uk, Hulu, HBO Max, Quora, PayPal, Vimeo, Shopify, Stripe, and news outlets CNN, The
Guardian, The New York Times, BBC and Financial Times are currently
facing an outage. A glitch at Fastly, a popular CDN provider, is thought to be the reason,
according to a product manager at Financial Times. Fastly has confirmed it's facing an outage
on its status website.
We can display the formatted date from a date string provided by the user using the -d or
--date option to the command. It will not affect the system date; it only parses the requested
date from the string. For example,
$ date -d "Feb 14 1999"
Parsing string to date.
$ date --date="09/10/1960"
Parsing string to date.
Displaying Upcoming Date & Time With -d Option
Aside from parsing the date, we can also display the upcoming date using the -d option with
the command. The date command is compatible with words that refer to time or date values such
as next Sun, last Friday, tomorrow, yesterday, etc. For example,
Displaying Next Monday
Date
$ date -d "next Mon"
Displaying upcoming date.
Displaying Past Date & Time With -d Option
Using the -d option to the command we can also view past dates. For
example,
Displaying Last Friday Date
$ date -d "last Fri"
Displaying past date
Parse Date From File
If you have a record of static date strings in a file, we can parse them into the
preferred date format using the -f option with the date command. In this way, you can format
multiple dates using the command. In the following example, I have created the file that
contains the list of date strings and parsed it with the command.
$ date -f datefile.txt
Parse date from the file.
Setting Date & Time on Linux
We can not only view the date but also set the system date according to your preference. For
this, you need a user with sudo access, and you can execute the command in the following
way.
$ sudo date -s "Sun 30 May 2021 07:35:06 PM PDT"
Display File Last Modification Time
We can check the file's last modification time using the date command, for this we need to
add the -r option to the command. It helps in tracking files when it was last modified. For
example,
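A minimal sketch (the file path is just an example):
$ date -r /etc/passwd
This prints the last modification time of /etc/passwd in the default date format.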
"The bots' mission: To deliver restaurant meals cheaply and efficiently, another leap in
the way food comes to our doors and our tables." The semiautonomous vehicles were
engineered by Kiwibot, a company started in 2017 to game-change the food delivery
landscape...
In May, Kiwibot sent a 10-robot fleet to Miami as part of a nationwide pilot program
funded by the Knight Foundation. The program is driven to understand how residents and
consumers will interact with this type of technology, especially as the trend of robot
servers grows around the country.
And though Broward County is of interest to Kiwibot, Miami-Dade County officials jumped
on board, agreeing to launch robots around neighborhoods such as Brickell, downtown Miami and
several others, in the next couple of weeks...
"Our program is completely focused on the residents of Miami-Dade County and the way
they interact with this new technology. Whether it's interacting directly or just sharing
the space with the delivery bots,"
said Carlos Cruz-Casas, with the county's Department of Transportation...
Remote supervisors use real-time GPS tracking to monitor the robots. Four cameras are
placed on the front, back and sides of the vehicle, which the supervisors can view on a
computer screen. [A spokesperson says later in the article "there is always a remote and
in-field team looking for the robot."] If crossing the street is necessary, the robot
will need a person nearby to ensure there is no harm to cars or pedestrians. The plan is to
allow deliveries up to a mile and a half away so robots can make it to their destinations in
30 minutes or less.
Earlier Kiwi tested its sidewalk-travelling robots around the University of California at
Berkeley, where
at least one of its robots burst into flames. But the Sun-Sentinel reports that "In
about six months, at least 16 restaurants came on board making nearly 70,000
deliveries...
"Kiwibot now offers their robotic delivery services in other markets such as Los Angeles
and Santa Monica by working with the Shopify app to connect businesses that want to employ
their robots." But while delivery fees are normally $3, this new Knight Foundation grant "is
making it possible for Miami-Dade County restaurants to sign on for free."
A video
shows the reactions the sidewalk robots are getting from pedestrians on a sidewalk, a dog
on a leash, and at least one potential restaurant customer looking forward to no longer
having to tip human food-delivery workers.
An average but still useful enumeration of factors that should be considered. One question stands out: "Is that SaaS app really
cheaper than more headcount?" :-)
Notable quotes:
"... You may decide that this is not a feasible project for the organization at this time due to a lack of organizational knowledge around containers, but conscientiously accepting this tradeoff allows you to put containers on a roadmap for the next quarter. ..."
"... Bells and whistles can be nice, but the tool must resolve the core issues you identified in the first question. ..."
"... Granted, not everything has to be a cost-saving proposition. Maybe it won't be cost-neutral if you save the dev team a couple of hours a day, but you're removing a huge blocker in their daily workflow, and they would be much happier for it. That happiness is likely worth the financial cost. Onboarding new developers is costly, so don't underestimate the value of increased retention when making these calculations. ..."
When introducing a new tool, programming language, or dependency into your environment, what
steps do you take to evaluate it? In this article, I will walk through a six-question framework
I use to make these determinations.
What problem am I trying to solve?
We all get caught up in the minutiae of the immediate problem at hand. An honest, critical
assessment helps divulge broader root causes and prevents micro-optimizations.
Let's say you are experiencing issues with your configuration management system. Day-to-day
operational tasks are taking longer than they should, and working with the language is
difficult. A new configuration management system might alleviate these concerns, but make sure
to take a broader look at this system's context. Maybe switching from virtual machines to
immutable containers eases these issues and more across your environment while being an
equivalent amount of work. At this point, you should explore the feasibility of more
comprehensive solutions as well. You may decide that this is not a feasible project for the
organization at this time due to a lack of organizational knowledge around containers, but
conscientiously accepting this tradeoff allows you to put containers on a roadmap for the next
quarter.
This intellectual exercise helps you drill down to the root causes and solve core issues,
not the symptoms of larger problems. This is not always going to be possible, but be
intentional about making this decision.
Now that we have identified the problem, it is time for critical evaluation of both
ourselves and the selected tool.
A particular technology might seem appealing because it is new, because you read a cool blog
post about it, or because you want to be the one giving a conference talk. Bells and whistles can be
nice, but the tool must resolve the core issues you identified in the first
question.
What am I giving up?
The tool will, in fact, solve the problem, and we know we're solving the right
problem, but what are the tradeoffs?
These considerations can be purely technical. Will the lack of observability tooling prevent
efficient debugging in production? Does the closed-source nature of this tool make it more
difficult to track down subtle bugs? Is managing yet another dependency worth the operational
benefits of using this tool?
Additionally, include the larger organizational, business, and legal contexts that you
operate under.
Are you giving up control of a critical business workflow to a third-party vendor? If that
vendor doubles their API cost, is that something that your organization can afford and is
willing to accept? Are you comfortable with closed-source tooling handling a sensitive bit of
proprietary information? Does the software licensing make this difficult to use
commercially?
While not simple questions to answer, taking the time to evaluate this upfront will save you
a lot of pain later on.
Is the project or vendor healthy?
This question comes with the addendum "for the balance of your requirements." If you only
need a tool to get your team over a four to six-month hump until Project X is complete,
this question becomes less important. If this is a multi-year commitment and the tool drives a
critical business workflow, this is a concern.
When going through this step, make use of all available resources. If the solution is open
source, look through the commit history, mailing lists, and forum discussions about that
software. Does the community seem to communicate effectively and work well together, or are
there obvious rifts between community members? If part of what you are purchasing is a support
contract, use that support during the proof-of-concept phase. Does it live up to your
expectations? Is the quality of support worth the cost?
Make sure you take a step beyond GitHub stars and forks when evaluating open source tools as
well. Something might hit the front page of a news aggregator and receive attention for a few
days, but a deeper look might reveal that only a couple of core developers are actually working
on a project, and they've had difficulty finding outside contributions. Maybe a tool is open
source, but a corporate-funded team drives core development, and support will likely cease if
that organization abandons the project. Perhaps the API has changed every six months, causing a
lot of pain for folks who have adopted earlier versions.
What are the risks?
As a technologist, you understand that nothing ever goes as planned. Networks go down,
drives fail, servers reboot, rows in the data center lose power, entire AWS regions become
inaccessible, or BGP hijacks re-route hundreds of terabytes of Internet traffic.
Ask yourself how this tooling could fail and what the impact would be. If you are adding a
security vendor product to your CI/CD pipeline, what happens if the vendor goes
down?
This brings up both technical and business considerations. Do the CI/CD pipelines simply
time out because they can't reach the vendor, or do you have it "fail open" and allow the
pipeline to complete with a warning? This is a technical problem but ultimately a business
decision. Are you willing to go to production with a change that has bypassed the security
scanning in this scenario?
Obviously, this task becomes more difficult as we increase the complexity of the system.
Thankfully, sites like k8s.af consolidate example
outage scenarios. These public postmortems are very helpful for understanding how a piece of
software can fail and how to plan for that scenario.
What are the costs?
The primary considerations here are employee time and, if applicable, vendor cost. Is that
SaaS app cheaper than more headcount? If you save each developer on the team two hours a day
with that new CI/CD tool, does it pay for itself over the next fiscal year?
Granted, not everything has to be a cost-saving proposition. Maybe it won't be cost-neutral
if you save the dev team a couple of hours a day, but you're removing a huge blocker in their
daily workflow, and they would be much happier for it. That happiness is likely worth the
financial cost. Onboarding new developers is costly, so don't underestimate the value of
increased retention when making these calculations.
I hope you've found this framework insightful, and I encourage you to incorporate it into
your own decision-making processes. There is no one-size-fits-all framework that works for
every decision. Don't forget that, sometimes, you might need to go with your gut and make a
judgment call. However, having a standardized process like this will help differentiate between
those times when you can critically analyze a decision and when you need to make that leap.
We had a client that had an OLD fileserver box, a Thecus N4100PRO. It was completely dust-ridden and the power supply had burned
out.
Since these drives were in a RAID configuration, you could not hook any one of them up to a Windows box or a Linux box to see
the data. You have to hook them all up to a box and reassemble the RAID.
We took out the drives (3 of them) and then used an external SATA to USB box to connect them to a Linux server running CentOS.
You can use parted to see what drives are now being seen by your Linux system:
parted -l | grep 'raid\|sd'
Then using that output, we assembled the drives into a software array:
mdadm -A /dev/md0 /dev/sdb2 /dev/sdc2 /dev/sdd2
If we tried to only use two of those drives, it would give an error, since these were all in a linear RAID in the Thecus box.
If the last command went well, you can see the built array like so:
root% cat /proc/mdstat
Personalities : [linear]
md0 : active linear sdd2[0] sdb2[2] sdc2[1]
1459012480 blocks super 1.0 128k rounding
Note the personality shows the RAID type, in our case it was linear, which is probably the worst RAID since if any one drive fails,
your data is lost. So good thing these drives outlasted the power supply! Now we find the physical volume:
pvdisplay /dev/md0
Gives us:
-- Physical volume --
PV Name /dev/md0
VG Name vg0
PV Size 1.36 TB / not usable 704.00 KB
Allocatable yes
PE Size (KByte) 2048
Total PE 712408
Free PE 236760
Allocated PE 475648
PV UUID iqwRGX-zJ23-LX7q-hIZR-hO2y-oyZE-tD38A3
Then we find the logical volume:
lvdisplay /dev/vg0
Gives us:
-- Logical volume --
LV Name /dev/vg0/syslv
VG Name vg0
LV UUID UtrwkM-z0lw-6fb3-TlW4-IpkT-YcdN-NY1orZ
LV Write Access read/write
LV Status NOT available
LV Size 1.00 GB
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors 16384
-- Logical volume --
LV Name /dev/vg0/lv0
VG Name vg0
LV UUID 0qsIdY-i2cA-SAHs-O1qt-FFSr-VuWO-xuh41q
LV Write Access read/write
LV Status NOT available
LV Size 928.00 GB
Current LE 475136
Segments 1
Allocation inherit
Read ahead sectors 16384
We want to focus on the lv0 volume. Note that both logical volumes show "LV Status NOT available", so you cannot mount anything yet; activate the volume group first (for example with vgchange -ay vg0), after which lvscan reports them as ACTIVE:
ACTIVE '/dev/vg0/syslv' [1.00 GB] inherit
ACTIVE '/dev/vg0/lv0' [928.00 GB] inherit
Now we can mount with:
mount /dev/vg0/lv0 /mnt
And voila! We have our data up and accessible in /mnt, ready to recover. Of course, your setup will most likely look different from what I have shown above, but hopefully this gives you some helpful information for recovering your own data.
Installing a recent Linux version seems to come with a default setting that floods /var/log/messages with thoroughly annoying, repetitive messages like:
systemd: Created slice user-0.slice.
systemd: Starting Session 1013 of user root.
systemd: Started Session 1013 of user root.
systemd: Created slice user-0.slice.
systemd: Starting Session 1014 of user root.
systemd: Started Session 1014 of user root.
Here is how I got rid of these:
vi /etc/systemd/system.conf
Then uncomment the LogLevel line and set it to LogLevel=notice:
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See systemd-system.conf(5) for details.

[Manager]
LogLevel=notice
#LogTarget=journal-or-kmsg
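To have systemd pick up the edited /etc/systemd/system.conf without a reboot, re-executing the manager is usually sufficient (a minimal sketch; a reboot also works):
systemctl daemon-reexec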
There are a number of ways to loop within a script. Use for when you want to loop a preset
number of times. For example:
#!/bin/bash
for day in Sun Mon Tue Wed Thu Fri Sat
do
echo $day
done
or
#!/bin/bash
for letter in {a..z}
do
echo $letter
done
Use while when you want to loop as long as some condition exists or doesn't exist.
#!/bin/bash
n=1
while [ $n -le 4 ]
do
echo $n
((n++))
done
Using case statements
Case statements allow your scripts to react differently depending on what values are being
examined. In the script below, we use different commands to extract the contents of the file
provided as an argument by identifying the file type.
#!/bin/bash
if [ $# -eq 0 ]; then
    echo -n "filename> "
    read -r filename
else
    filename=$1
fi
if [ ! -f "$filename" ]; then
    echo "No such file: $filename"
    exit 1
fi
case "$filename" in
    *.tar)     tar xf "$filename";;
    *.tar.bz2) tar xjf "$filename";;
    *.tbz)     tar xjf "$filename";;
    *.tbz2)    tar xjf "$filename";;
    *.tgz)     tar xzf "$filename";;
    *.tar.gz)  tar xzf "$filename";;
    *.gz)      gunzip "$filename";;
    *.bz2)     bunzip2 "$filename";;
    *.zip)     unzip "$filename";;
    *.Z)       uncompress "$filename";;
    *.rar)     rar x "$filename";;
    *)         echo "No extract option for $filename"
esac
Note that this script also prompts for a file name if none was provided and then checks to
make sure that the file specified actually exists. Only after that does it bother with the
extraction.
Reacting to errors
You can detect and react to errors within scripts and, in doing so, avoid other errors. The
trick is to check the exit codes after commands are run. If an exit code has a value other than
zero, an error occurred. In this script, we look to see if Apache is running, but send the
output from the check to /dev/null . We then check whether the exit code is non-zero, since
that would indicate that grep found no matching apache2 process. If the exit code is
not zero, the script informs the user that Apache isn't running.
#!/bin/bash
# The [a]pache2 pattern keeps grep from matching its own entry in the ps output
ps -ef | grep '[a]pache2' > /dev/null
if [ $? != 0 ]; then
    echo "Apache is not running"
    exit 1
fi
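An alternative sketch of the same check uses pgrep instead of piping ps through grep; pgrep returns a non-zero exit code when no matching process is found:
#!/bin/bash
# pgrep -x matches the exact process name; the if tests its exit code directly
if ! pgrep -x apache2 > /dev/null; then
    echo "Apache is not running"
    exit 1
fi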
Those shortcuts belong to the class of commands known as bang commands . An Internet
search for this term provides a wealth of additional information (which you probably do not
need ;-); here I will concentrate on just the most common bang commands that remain potentially
useful in the current command-line environment. Of them, !$ is probably the most useful and
definitely the most widely used. For many sysadmins it is the only bang command that is
regularly used.
!! is the bang command that re-executes the last command . This command is used
mainly in the shortcut sudo !! -- elevation of privileges after your command failed
under your regular user account. For example:
fgrep 'kernel' /var/log/messages # this will fail due to insufficient privileges, as /var/log/messages is not readable by an ordinary user
sudo !! # now we re-execute the command with elevated privileges
!$ puts into the current command line the last argument from previous command . For
example:
mkdir -p /tmp/Bezroun/Workdir
cd !$
In this example the last command is equivalent to the command cd /tmp/Bezroun/Workdir. Please
try this example. It is a pretty neat trick.
NOTE: You can also work with individual arguments using numbers:
!:0 is the name of the previous command
!:1 is its first argument
!:2 is the second argument
And so on
For example:
cp !:1 !:2 # reuses the first and the second argument of the previous command
For this and other bang command capabilities, copying fragments of the previous command line
with the mouse is often more convenient, and you do not need to remember extra stuff. After all, bang
commands were created before the mouse was available, and most of them reflect the realities and needs
of that bygone era. Still, I have met sysadmins who even now use this and some additional capabilities,
like !!:s^old^new (which replaces the string 'old' with the string 'new' and
re-executes the previous command).
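For example, the quick-substitution form ^old^new (equivalent to !!:s^old^new) re-runs the previous command with the first occurrence of 'old' replaced by 'new'. The typo below is deliberate, just to illustrate the idea:
systemctl statsu sshd # fails because of the misspelled subcommand
^statsu^status # re-executes the line as: systemctl status sshd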
The same is true for !* -- all arguments of the last command. I do not use these myself, and I
had trouble writing this part of the post, correcting it several times to get it right.
Nowadays CTRL+R activates reverse incremental search through your history, which is an easier way
to reuse previous commands than the bang commands of the past.
"The bots' mission: To deliver restaurant meals cheaply and efficiently, another leap in
the way food comes to our doors and our tables." The semiautonomous vehicles were
engineered by Kiwibot, a company started in 2017 to game-change the food delivery
landscape...
In May, Kiwibot sent a 10-robot fleet to Miami as part of a nationwide pilot program
funded by the Knight Foundation. The program is driven to understand how residents and
consumers will interact with this type of technology, especially as the trend of robot
servers grows around the country.
And though Broward County is of interest to Kiwibot, Miami-Dade County officials jumped
on board, agreeing to launch robots around neighborhoods such as Brickell, downtown Miami and
several others, in the next couple of weeks...
"Our program is completely focused on the residents of Miami-Dade County and the way
they interact with this new technology. Whether it's interacting directly or just sharing
the space with the delivery bots,"
said Carlos Cruz-Casas, with the county's Department of Transportation...
Remote supervisors use real-time GPS tracking to monitor the robots. Four cameras are
placed on the front, back and sides of the vehicle, which the supervisors can view on a
computer screen. [A spokesperson says later in the article "there is always a remote and
in-field team looking for the robot."] If crossing the street is necessary, the robot
will need a person nearby to ensure there is no harm to cars or pedestrians. The plan is to
allow deliveries up to a mile and a half away so robots can make it to their destinations in
30 minutes or less.
Earlier Kiwi tested its sidewalk-travelling robots around the University of California at
Berkeley, where
at least one of its robots burst into flames . But the Sun-Sentinel reports that "In
about six months, at least 16 restaurants came on board making nearly 70,000
deliveries...
"Kiwibot now offers their robotic delivery services in other markets such as Los Angeles
and Santa Monica by working with the Shopify app to connect businesses that want to employ
their robots." But while delivery fees are normally $3, this new Knight Foundation grant "is
making it possible for Miami-Dade County restaurants to sign on for free."
A video
shows the reactions the sidewalk robots are getting from pedestrians on a sidewalk, a dog
on a leash, and at least one potential restaurant customer looking forward to no longer
having to tip human food-delivery workers.
Customers wouldn't have to train the algorithm on their own boxes because the robot was made
to recognize boxes of different sizes, textures and colors. For example, it can recognize both
shrink-wrapped cases and cardboard boxes.
... Stretch is part of a growing market of warehouse robots made by companies such as 6
River Systems Inc., owned by e-commerce technology company Shopify Inc., Locus Robotics Corp. and Fetch
Robotics Inc. "We're anticipating exponential growth (in the market) over the next five years,"
said Dwight Klappich, a supply chain research vice president and fellow at tech research firm
Gartner Inc.
As fast-food restaurants and small businesses struggle to find low-skilled workers to staff
their kitchens and cash registers, America's biggest fast-food franchise is seizing the
opportunity to field test a concept it has been working toward for some time: 10 McDonald's
restaurants in Chicago are testing automated drive-thru ordering using new artificial
intelligence software that converts voice orders for the computer.
McDonald's CEO Chris Kempczinski said Wednesday during an appearance at Alliance Bernstein's
Strategic Decisions conference that the new voice-order technology is about 85% accurate and
can take 80% of drive-thru orders. The company obtained the technology during its 2019
acquisition of Apprente.
The introduction of automation and artificial intelligence into the industry will eventually
result in entire restaurants controlled without humans - that could happen as early as the end
of this decade. As for McDonald's, Kempczinski said the technology will likely take more than
one or two years to implement.
"Now there's a big leap from going to 10 restaurants in Chicago to 14,000 restaurants
across the US, with an infinite number of promo permutations, menu permutations, dialect
permutations, weather -- and on and on and on, " he said.
McDonald's is also exploring automation of its kitchens, but that technology likely won't be
ready for another five years or so - even though it's capable of being introduced sooner.
McDonald's has also been looking into automating more of the kitchen, such as its fryers
and grills, Kempczinski said. He added, however, that that technology likely won't roll out
within the next five years, even though it's possible now.
"The level of investment that would be required, the cost of investment, we're nowhere
near to what the breakeven would need to be from the labor cost standpoint to make that a
good business decision for franchisees to do," Kempczinski said.
And because restaurant technology is moving so fast, Kempczinski said, McDonald's won't
always be able to drive innovation itself or even keep up. The company's current strategy is
to wait until there are opportunities that specifically work for it.
"If we do acquisitions, it will be for a short period of time, bring it in house,
jumpstart it, turbo it and then spin it back out and find a partner that will work and scale
it for us," he said.
On Friday, Americans will receive their first broad-based update on non-farm employment in
the US since last month's report, which missed expectations by a wide margin, sparking
discussion about whether all these "enhanced" monetary benefits from federal stimulus programs
have kept workers from returning to the labor market.
DNF (Dandified YUM) vs. YUM (Yellowdog Updater, Modified): a point-by-point comparison.
1. DNF uses libsolv (developed and maintained by SUSE) for dependency resolution; YUM uses its own public API for dependency resolution.
2. The DNF API is fully documented; the YUM API is not fully documented.
3. DNF is written in C, C++ and Python; YUM is written only in Python.
4. DNF is currently used in Fedora, Red Hat Enterprise Linux 8 (RHEL), CentOS 8, OEL 8 and Mageia 6/7; YUM is currently used in RHEL 6/7, CentOS 6/7 and OEL 6/7.
5. DNF supports various kinds of extensions; YUM supports only Python-based extensions.
6. Because the DNF API is well documented, it is easy to create new features for it; creating new features for YUM is very difficult because its API is not properly documented.
7. DNF uses less memory when synchronizing repository metadata; YUM uses excessive memory for the same task.
8. DNF uses a satisfiability algorithm for dependency resolution, storing and retrieving package and dependency information with a dictionary approach; YUM dependency resolution gets sluggish because it goes through the public API.
9. DNF performance is good in terms of memory usage and dependency resolution of repository metadata; overall YUM performance is poor by many measures.
10. If a package contains irrelevant dependencies during a DNF update, that package will not be updated; YUM will update a package without verifying this.
11. If an enabled repository does not respond, DNF will skip it and continue the transaction with the available repositories; if a repository is not available, YUM stops immediately.
12. "dnf update" and "dnf upgrade" are equivalent; in yum they differ.
13. DNF does not update the dependencies on package installation; YUM offered an option for this behavior.
14. Clean-up on package removal: when removing a package, DNF automatically removes any dependency packages not explicitly installed by the user; YUM did not do this.
15. Repo cache update schedule: by default, starting ten minutes after the system boots, DNF checks the configured repositories for metadata updates hourly; this is controlled by the systemd timer unit /usr/lib/systemd/system/dnf-makecache.timer. YUM does this too.
16. Kernel packages are not protected by DNF: unlike YUM, DNF lets you delete all kernel packages, including the one that is running; YUM will not allow you to remove the running kernel.
17. DNF relies on separate libraries for its work: libsolv, for solving packages and reading repositories; hawkey, a library providing a simplified C and Python API to libsolv; librepo, a library providing a C and Python (libcURL-like) API for downloading Linux repository metadata and packages; and libcomps, an alternative to the yum.comps library written in pure C with bindings for Python 2 and Python 3. YUM does not use separate libraries to perform these functions.
18. DNF contains about 29k lines of code; YUM contains about 56k lines of code.
19. DNF was developed by Ales Kozumplik; YUM was developed by Zdenek Pavlas, Jan Silhan and team members.
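As a quick check of the metadata timer mentioned in item 15, you can query systemd directly (output will vary from system to system):
systemctl list-timers dnf-makecache.timer
systemctl status dnf-makecache.timer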
Closing Notes
In this guide, we have shown you several differences between DNF and YUM.
If you have any questions or feedback, feel free to comment below.
Stack Overflow co-founder Joel Spolsky blogged about the
purchase, and Stack Overflow CEO Prasanth Chandrasekar wrote a more official announcement .
Both blog posts characterize the acquisition as having little to no impact on the day-to-day
operation of Stack Overflow.
"How you use our site and our products will not change in the coming weeks or months, just
as our company's goals and strategic priorities remain the same," Chandrasekar said.
Spolsky went into more detail, saying that Stack Overflow will "continue to operate
independently, with the exact same team in place that has been operating it, according to the
exact same plan and the exact same business practices. Don't expect to see major changes or
awkward 'synergies'... the entire company is staying in place: we just have different owners
now."
Lot of people here seem to know an awful lot about a company they only just learnt about from
this article, funny that.
We don't know Prosus but we have the experience of dozens of other acquisitions made with
the statement that "nothing will change" ... until it always does.
At least it wasn't acquired by Google so they could turn it into a chat program and then
shut it down.
4.0 out of 5 stars
Everyone is on a learning curve. Reviewed in the United States on February 3, 2009. The author was a programmer before, so in
writing this book he draws both on his personal experience and on his observations to depict the software world.
I think this is more of a practice-and-opinion book than a "philosophy" book; however, I have to agree with him in most
cases.
For example, here is Mike Gancarz's line of thinking:
1. It is hard to get the software design right in the first place, no matter who does it.
2. So it's better to write a short spec without considering all factors first.
3. Build a prototype to test the assumptions.
4. Use an iterative test/rewrite process until you get it right.
5. Conclusion: Unix evolved from a prototype.
In case you are curious, here are the 9 tenets of Unix/Linux:
1. Small is beautiful.
2. Make each program do one thing well.
3. Build a prototype as soon as possible.
4. Choose portability over efficiency.
5. Store data in flat text files.
6. Use software leverage to your advantage.
7. Use shell scripts to increase leverage and portability.
8. Avoid captive user interfaces.
9. Make every program a filter.
Mike Gancarz told a story like this when he argues "Good programmers write good code; great programmers borrow good code".
"I recall a less-than-top-notch software engineer who couldn't program his way out of a paper bag. He had a knack, however,
for knitting lots of little modules together. He hardly ever wrote any of them himself, though. He would just fish around in the
system's directories and source code repositories all day long, sniffing for routines he could string together to make a complete
program. Heaven forbid that he should have to write any code. Oddly enough, it wasn't long before management recognized him as
an outstanding software engineer, someone who could deliver projects on time and within budget. Most of his peers never realized
that he had difficulty writing even a rudimentary sort routine. Nevertheless, he became enormously successful by simply using
whatever resources were available to him."
If this is not clear enough, Mike also drew analogies between Mick Jagger and Keith Richards and Elvis. The book is full of
inspiring stories to reveal software engineers' tendencies and to correct their mindsets.
I've found a disturbing
trend in GNU/Linux, where largely unaccountable cliques of developers unilaterally decide to make fundamental changes to the way
it works, based on highly subjective and arrogant assumptions, then forge ahead with little regard to those who actually use the
software, much less the well-established principles upon which that OS was originally built. The long litany of examples includes
Ubuntu Unity ,
Gnome Shell ,
KDE 4 , the
/usr partition ,
SELinux ,
PolicyKit ,
Systemd ,
udev and
PulseAudio , to name a few.
The broken features, creeping bloat, and in particular the unhealthy tendency toward more monolithic, less modular code in certain
Free Software projects are a very serious problem, and I am very seriously opposed to it. I abandoned Windows to get away from
that sort of nonsense; I didn't expect to have to deal with it in GNU/Linux.
Clearly this situation is untenable.
The motivation for these arbitrary changes mostly seems to be rooted in the misguided concept of "popularity", which makes no
sense at all for something that's purely academic and non-commercial in nature. More users does not equal more developers. Indeed
more developers does not even necessarily equal more or faster progress. What's needed is more of the right sort of developers,
or at least more of the existing developers to adopt the right methods.
This is the problem with distros like Ubuntu, as the most archetypal example. Shuttleworth pushed hard to attract more users,
with heavy marketing and by making Ubuntu easy at all costs, but in so doing all he did was amass a huge burden, in the form of a
large influx of users who were, by and large, purely consumers, not contributors.
As a result, many of those now using GNU/Linux are really just typical Microsoft or Apple consumers, with all the baggage that
entails. They're certainly not assets of any kind. They have expectations forged in a world of proprietary licensing and commercially-motivated,
consumer-oriented, Hollywood-style indoctrination, not academia. This is clearly evidenced by their
belligerently hostile attitudes toward the GPL, FSF,
GNU and Stallman himself, along with their utter contempt for security and other well-established UNIX paradigms, and their unhealthy
predilection for proprietary software, meaningless aesthetics and hype.
Reading the Ubuntu forums is an exercise in courting abject despair, as one witnesses an ignorant horde demand GNU/Linux be mutated
into the bastard son of Windows and Mac OS X. And Shuttleworth, it seems, is
only too happy
to oblige , eagerly assisted by his counterparts on other distros and upstream projects, such as Lennart Poettering and Richard
Hughes, the former of whom has somehow convinced every distro to mutate the Linux startup process into a hideous
monolithic blob , and the latter of whom successfully managed
to undermine 40 years of UNIX security in a single stroke, by
obliterating the principle that unprivileged
users should not be allowed to install software system-wide.
GNU/Linux does not need such people, indeed it needs to get rid of them as a matter of extreme urgency. This is especially true
when those people are former (or even current) Windows programmers, because they not only bring with them their indoctrinated expectations,
misguided ideologies and flawed methods, but worse still they actually implement them , thus destroying GNU/Linux from within.
Perhaps the most startling example of this was the Mono and Moonlight projects, which not only burdened GNU/Linux with all sorts
of "IP" baggage, but instigated a sort of invasion of Microsoft "evangelists" and programmers, like a Trojan horse, who subsequently
set about stuffing GNU/Linux with as much bloated, patent
encumbered garbage as they could muster.
I was part of a group who campaigned relentlessly for years to oust these vermin and undermine support for Mono and Moonlight,
and we were largely successful. Some have even suggested that my
diatribes ,
articles and
debates (with Miguel
de Icaza and others) were instrumental in securing this victory, so clearly my efforts were not in vain.
Amassing a large user-base is a highly misguided aspiration for a purely academic field like Free Software. It really only makes
sense if you're a commercial enterprise trying to make as much money as possible. The concept of "market share" is meaningless for
something that's free (in the commercial sense).
Of course Canonical is also a commercial enterprise, but it has yet to break even, and all its income is derived through support
contracts and affiliate deals, none of which depends on having a large number of Ubuntu users (the Ubuntu One service is cross-platform,
for example).
Make each program do one thing well. To do a new job, build afresh rather than
complicate old programs by adding new features.
By now, and to be frank in the last 30 years too, this is complete and utter bollocks.
Feature creep is everywhere, typical shell tools are chock-full of spurious additions, from
formatting to "side" features, all half-assed and barely, if at all, consistent.
By now, and to be frank in the last 30 years too, this is complete and utter
bollocks.
There is not one single other idea in computing that is as unbastardised as the unix
philosophy - given that it's been around fifty years. Heck, Microsoft only just developed
PowerShell - and if that's not Microsoft's take on the Unix philosophy, I don't know what
is.
In that same time, we've vacillated between thick and thin computing (mainframes, thin
clients, PCs, cloud). We've rebelled against at least four major schools of program design
thought (structured, procedural, symbolic, dynamic). We've had three different database
revolutions (RDBMS, NoSQL, NewSQL). We've gone from grassroots movements to corporate
dominance on countless occasions (notably - the internet, IBM PCs/Wintel, Linux/FOSS, video
gaming). In public perception, we've run the gamut from clerks ('60s-'70s) to boffins
('80s) to hackers ('90s) to professionals ('00s post-dotcom) to entrepreneurs/hipsters/bros
('10s "startup culture").
It's a small miracle that iproute2 only has formatting options and
grep only has --color. If they feature-crept anywhere near the same
pace as the rest of the computing world, they would probably be a RESTful SaaS microservice
with ML-powered autosuggestions.
This is because adding a new feature is actually easier than trying to figure out how
to do it the Unix way - often you already have the data structures in memory and the
functions to manipulate them at hand, so adding a --frob parameter that does
something special with that feels trivial.
GNU and their stance to ignore the Unix philosophy (AFAIK Stallman said at some point he
didn't care about it) while becoming the most available set of tools for Unix systems
didn't help either.
No, it certainly isn't. There are tons of well-designed, single-purpose tools
available for all sorts of purposes. If you live in the world of heavy, bloated GUI apps,
well, that's your prerogative, and I don't begrudge you it, but just because you're not
aware of alternatives doesn't mean they don't exist.
typical shell tools are chock-full of spurious additions,
What does "feature creep" even mean with respect to shell tools? If they have lots of
features, but each function is well-defined and invoked separately, and still conforms to
conventional syntax, uses stdio in the expected way, etc., does that make it un-Unixy? Is
BusyBox bloatware because it has lots of discrete shell tools bundled into a single
binary?
nirreskeya, 3 years ago:
I have succumbed to the temptation you offered in your preface: I do write you off
as envious malcontents and romantic keepers of memories. The systems you remember so
fondly (TOPS-20, ITS, Multics, Lisp Machine, Cedar/Mesa, the Dorado) are not just out
to pasture, they are fertilizing it from below.
Your judgments are not keen, they are intoxicated by metaphor. In the Preface you
suffer first from heat, lice, and malnourishment, then become prisoners in a Gulag.
In Chapter 1 you are in turn infected by a virus, racked by drug addiction, and
addled by puffiness of the genome.
Yet your prison without coherent design continues to imprison you. How can this
be, if it has no strong places? The rational prisoner exploits the weak places,
creates order from chaos: instead, collectives like the FSF vindicate their jailers
by building cells almost compatible with the existing ones, albeit with more
features. The journalist with three undergraduate degrees from MIT, the researcher at
Microsoft, and the senior scientist at Apple might volunteer a few words about the
regulations of the prisons to which they have been transferred.
Your sense of the possible is in no sense pure: sometimes you want the same thing
you have, but wish you had done it yourselves; other times you want something
different, but can't seem to get people to use it; sometimes one wonders why you just
don't shut up and tell people to buy a PC with Windows or a Mac. No Gulag or lice,
just a future whose intellectual tone and interaction style is set by Sonic the
Hedgehog. You claim to seek progress, but you succeed mainly in whining.
Here is my metaphor: your book is a pudding stuffed with apposite observations,
many well-conceived. Like excrement, it contains enough undigested nuggets of
nutrition to sustain life for some. But it is not a tasty pie: it reeks too much of
contempt and of envy.
Bon appetit!
[Jun 01, 2021] 'ls' command by Last Modified Date and Time
Moreover, you can choose only the fields you need rather than all of them. E.g.,
ls -l --time-style=+%H
will show only the hour, while
ls -l --time-style=+%H:%M:%D
will show the hour, minute and date.
# ls -l --time-style=full-iso
# ls -l --time-style=long-iso
# ls -l --time-style=iso
# ls -l --time-style=locale
# ls -l --time-style=+%H:%M:%S:%D
# ls --full-time
2. Output the contents of a directory in various formats such as separated by commas, horizontal, long, vertical, across, etc.
The contents of a directory can be listed using the ls command in various formats, as suggested below: across, comma, horizontal, long, single-column, verbose and vertical.
# ls --format=across
# ls --format=comma
# ls --format=horizontal
# ls --format=long
# ls --format=single-column
# ls --format=verbose
# ls --format=vertical
3. Use the ls command to append indicators like (/=@|) to the names in the directory listing.
The -p option with the ls command will serve the purpose. It appends one of the above indicators based upon the type of each file.
# ls -p
4. Sort the contents of a directory on the basis of extension, size, time and version.
We can use the --sort option to sort the output by extension (--sort=extension), size (--sort=size), time (--sort=time) or version (--sort=version). We can also use --sort=none, which outputs the entries without any sorting at all.
# ls --sort=extension
# ls --sort=size
# ls --sort=time
# ls --sort=version
# ls --sort=none
5. Print the numeric UID and GID for every entry in a directory using the ls command.
This can be achieved using the -n (numeric-uid-gid) flag along with the ls command.
# ls -n
6. Print the contents of a directory on standard output in more columns than provided by default.
The ls command formats its output according to the width of the screen automatically. We can, however, manually assign a screen width and thereby control the number of columns appearing. This is done using the --width switch.
# ls --width 80
# ls --width 100
# ls --width 150
Note: you can experiment with the value you pass to the --width flag.
7. Set a custom tab size for the directory contents listed by the ls command instead of the default 8.
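This is done with the -T (--tabsize) option of GNU ls; for example, to use tab stops of 16 columns instead of 8:
# ls -T 16
# ls --tabsize=16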
Why? Wall Street will love it. They love macho "transformations". By sheer executive fiat Things Will Change, for sure.
Throw in "technology" and it makes Wall Street puff up that little bit more.
The fact that virtually no analyst or serious buyer of stocks has the first idea of what's involved in such a transformation is
irrelevant. They will lap it up.
This is how capitalism works, and it indisputably results in the most efficient allocation of resources possible.
A Dash of Layoffs, a Sprinkling of Talent
These analysts and buyers will assume there will be reductions to employee headcount sooner rather than later, which of course will
make the transformation go faster and beat a quick path to profit.
Hires of top "industry experts" who know the magic needed to get all this done, and who will be able to pass on their wisdom without
friction to the eager staff that remain, will make this a sure thing.
In the end, of course, you don't want to come out of this looking too bad, do you?
So how best to minimise any fallout from this endeavour?
Leadership
The first thing you should do is sort out the leadership of this transformation.
Hire in a senior executive specifically for the purpose of making this transformation happen.
Well, taking responsibility for it, at least. This will be useful later when you need a scapegoat for failure.
Ideally it will be someone with a long resume of similar transformational senior roles at different global enterprises.
Don't be concerned with whether those previous roles actually resulted in any lasting change or business success; that's not the
point. The point is that they have a lot of experience with this kind of role, and will know how to be the patsy. Or you can get
someone that has
Dunning-Kruger
syndrome
so they can truly inhabit the role.
Make sure this executive is adept at managing his (also hired-in) subordinates in a divide-and-conquer way, so their aims are never
aligned, or multiply-aligned in diverse directions in a 4-dimensional ball of wool.
Incentivise senior leadership to grow their teams rather than fulfil the overall goal of the program (ideally, the overall goal will
never be clearly stated by anyone -- see Strategy, below).
Change your CIO halfway through the transformation. The resulting confusion and political changes of direction will ensure millions
are lost as both teams and leadership chop and change positions.
With a bit of luck, there'll be so little direction that the core business can be unaffected.
Strategy
This second one is easy enough. Don't have a strategy. Then you can chop and change plans as you go without any kind of overall
direction, ensuring (along with the leadership anarchy above) that nothing will ever get done.
Unfortunately, the world is not sympathetic to this reality, so you will have to pretend to have a strategy, at the very least. Make
the core PowerPoint really dense and opaque. Include as many buzzwords as possible -- if enough are included people will assume you
know what you are doing. It helps if the buzzwords directly contradict the content of the strategy documents.
It's also essential that the strategy makes no mention of the "customer", or whatever provides
Vandelay's
revenue,
or why the changes proposed make any difference to the business at all. That will help nicely reduce any sense of urgency to the
whole process.
Try to make any stated strategy:
hopelessly optimistic (set ridiculous and arbitrary deadlines)
inflexible from the start (aka "my way, or the highway")
Whatever strategy you pretend to pursue, be sure to make it "Go big, go early", so you can waste as much money as fast as possible.
Don't waste precious time learning about how change can get done in your context. Remember, this needs to fail once you're gone.
Technology Architecture
First, set up a completely greenfield "Transformation Team" separate from your existing staff. Then, task them with solving every
possible problem in your business at once. Throw in some that don't exist yet too, if you like! Force them to coordinate tightly
with every other team and fulfil all their wishes.
Ensure your security and control functions are separated from (and, ideally, in some kind of war with) a Transformation Team that is
siloed as far as possible from the mainstream of the business. This will create the perfect environment for expensive white
elephants to be built that no-one will use.
All this taken together will ensure that the Transformation Team's plans have as little chance of getting to production as possible.
Don't give security and control functions any responsibility or reward for delivery, just reward them for blocking change.
Ignore the "decagon of despair". These things are nothing to do with Transformation, they are just blockers people like to talk
about. The official line is that hiring Talent (see below) will take care of those. It's easy to exploit an organisation's
insecurity about its capabilities to downplay the importance of these...
Boston Dynamics, a robotics company known for its four-legged robot "dog," this week
announced a new product, a computer-vision enabled mobile warehouse robot named "Stretch."
Developed in response to growing demand for automation in warehouses, the robot can reach up
to 10 feet inside of a truck to pick up and unload boxes up to 50 pounds each. The robot has a
mobile base that can maneuver in any direction and navigate obstacles and ramps, as well as a
robotic arm and a gripper. The company estimates that there are more than 500 billion boxes
annually that get shipped around the world, and many of those are currently moved manually.
"It's a pretty arduous job, so the idea with Stretch is that it does the manual labor part
of that job," said Robert Playter, chief executive of the Waltham, Mass.-based company.
The pandemic has accelerated [automation of] e-commerce and logistics operations even more
over the past year, he said.
... ... ...
Eventually, Stretch could move through an aisle of a warehouse, picking up different
products and placing them on a pallet, Mr. Playter said.
To list all open files, run the lsof command without any arguments:
lsof
The first column represents the process while the last column contains the file name. For details on all the columns, head to
the command's man page .
2. How to list files opened by processes belonging to a specific user
The tool also allows you to list files opened by processes belonging to a specific user. This feature can be accessed by using
the -u command-line option.
lsof -u [user-name]
For example:
lsof -u administrator
3. How to list files based on their Internet address
The tool lets you list files based on their Internet address. This can be done using the -i command-line option. For example,
if you want, you can have IPv4 and IPv6 files displayed separately. For IPv4, run the following command:
lsof -i 4
...
4. How to list all files by application name
The -c command-line option allows you to get all files opened by program name.
$ lsof -c apache
You do not have to use the full program name as all programs that start with the word 'apache' are shown. So in our case, it will
list all processes of the 'apache2' application.
The -c option is basically just a shortcut for the two commands:
$ lsof | grep apache
5. How to list files specific to a process
The tool also lets you display opened files based on process identification (PID) numbers. This can be done by using the -p
command-line option.
lsof -p [PID]
For example:
lsof -p 856
Moving on, you can also exclude specific PIDs in the output by adding the ^ symbol before them. To exclude a specific PID, you
can run the following command:
lsof -p [^PID]
For example:
lsof -p ^1
With this command, the process with ID 1 is excluded from the list.
6. How to list IDs of processes that have opened a particular file
The tool allows you to list IDs of processes that have opened a particular file. This can be done by using the -t command
line option.
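For example, to print just the PIDs of processes that have a given file open (the path here is only an illustration):
$ lsof -t /var/log/syslog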
7. How to list all open files under a specific directory
If you want, you can also make lsof search for all open instances of a directory (including all the files and directories it contains).
This feature can be accessed using the +D command-line option.
$ lsof +D [directory-path]
For example:
$ lsof +D /usr/lib/locale
8. How to list all Internet and x.25 (HP-UX) network files
This is possible by using the -i command-line option we described earlier. Just that you have to use it without any arguments.
$ lsof -i
9. Find out which program is using a port
The -i switch of the command allows you to find a process or application which listens to a specific port number. In the example
below, I checked which program is using port 80.
$ lsof -i :80
Instead of the port number, you can use the service name as listed in the /etc/services file. Example to check which app
listens on the HTTPS (443) port:
$ lsof -i :https
... ... ...
The above examples will check both TCP and UDP. If you like to check for TCP or UDP only, prepend the word 'tcp' or 'udp'. For
example, which application is using port 25 TCP:
$ lsof -i tcp:25
or which app uses UDP port 53:
$ lsof -i udp:53
10. How to list open files based on port range
The utility also allows you to list open files based on a specific port or port range. For example, to display open files for
port 1-1024, use the following command:
$ lsof -i :1-1024
11. How to list open files based on the type of connection (TCP or UDP)
The tool allows you to list files based on the type of connection. For example, for UDP specific files, use the following command:
$ lsof -i udp
Similarly, you can make lsof display TCP-specific files.
12. How to make lsof list Parent PID of processes
There's also an option that forces lsof to list the Parent Process IDentification (PPID) number in the output. The option in question
is -R .
$ lsof -R
To get PPID info for a specific PID, you can run the following command:
$ lsof -p [PID] -R
For example:
$ lsof -p 3 -R
13. How to find network activity by user
By using a combination of the -i and -u command-line options, we can search for all network connections of a Linux user. This
can be helpful if you inspect a system that might have been hacked. In this example, we check all network activity of the user www-data:
$ lsof -a -i -u www-data
14. List all memory-mapped files
This command lists all memory-mapped files on Linux.
$ lsof -d mem
15. List all NFS files
The -N option shows you a list of all NFS (Network File System) files.
$ lsof -N
Conclusion
Although lsof offers a plethora of options, the ones we've discussed here should be enough to get you started. Once you're done
practicing with these, head to the tool's man page to learn more about
it. Oh, and in case you have any doubts and queries, drop in a comment below.
Himanshu Arora has been working on Linux since 2007. He carries professional experience in system-level programming, networking
protocols, and the command line. In addition to HowtoForge, Himanshu's work has also been featured in some of the world's other leading publications,
including Computerworld, IBM DeveloperWorks, and Linux Journal.
Great article! Another useful one is "lsof -i tcp:PORT_NUMBER" to list processes happening on a specific port, useful for node.js
when you need to kill a process.
Ex: lsof -i tcp:3000
then say you want to kill the process 5393 (PID) running on port 3000, you would run "kill -9 5393"
Most (if not all) Linux distributions come with an editor that allows you to perform hexadecimal and binary manipulation. One
of those tools is the command-line tool xxd, which is most commonly used to make a hex dump of a given file or standard input.
It can also convert a hex dump back to its original binary form.
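A minimal round-trip sketch with xxd (the file names are just placeholders):
$ xxd data.bin > data.hex # create a hex dump of data.bin
$ xxd -r data.hex > restored.bin # convert the hex dump back to binary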
Hexedit Hex Editor
Hexedit is another hexadecimal command-line editor that might already be preinstalled on your OS.
Images removed. See the original for the full text.
Notable quotes:
"... You might also mention !? It finds the last command with its' string argument. For example, if" ..."
"... I didn't see a mention of historical context in the article, so I'll give some here in the comments. This form of history command substitution originated with the C Shell (csh), created by Bill Joy for the BSD flavor of UNIX back in the late 70's. It was later carried into tcsh, and bash (Bourne-Again SHell). ..."
The '!' symbol or operator in Linux can be used as the logical negation operator, as well as to fetch commands from history
with tweaks, or to run a previously run command with modifications. All the commands below have been checked explicitly in the bash shell.
Though I have not checked them elsewhere, many of these will not run in other shells. Here we go into the amazing and mysterious uses of the '!'
symbol or operator in Linux commands.
4. How to handle two or more arguments using (!)
Let's say I created a text file 1.txt on the Desktop.
$ touch /home/avi/Desktop/1.txt
and then copy it to " /home/avi/Downloads " using complete path on either side with cp command.
$ cp /home/avi/Desktop/1.txt /home/avi/downloads
Now we have passed two arguments with the cp command. The first is "/home/avi/Desktop/1.txt" and the second is "/home/avi/Downloads";
let's handle them differently. Just execute echo [arguments] to print the two arguments separately.
$ echo "1st Argument is : !^"
$ echo "2nd Argument is : !cp:2"
Note that the 1st argument can be printed as "!^" and the rest of the arguments can be printed by executing "![Name_of_Command]:[Number_of_argument]".
In the above example the first command was cp and its 2nd argument needed to be printed, hence "!cp:2". If some
command, say xyz, is run with 5 arguments and you need to get the 4th argument, you may use "!xyz:4" and use it as you
like. All the arguments can be accessed with "!*".
5. Execute last command on the basis of keywords
We can execute the last executed command on the basis of keywords. We can understand it as follows:
$ ls /home > /dev/null [Command 1]
$ ls -l /home/avi/Desktop > /dev/null [Command 2]
$ ls -la /home/avi/Downloads > /dev/null [Command 3]
$ ls -lA /usr/bin > /dev/null [Command 4]
Here we have used the same command (ls) but with different switches and for different folders. Moreover, we have sent the output of
each command to /dev/null, as we are not going to deal with the output of the commands; this also keeps the console clean.
Now execute the last run command on the basis of keywords.
$ ! ls [Command 1]
$ ! ls -l [Command 2]
$ ! ls -la [Command 3]
$ ! ls -lA [Command 4]
Check the output and you will be astonished that you are running already executed commands just by ls keywords.
6. The power of !! Operator
You can run/alter your last run command using (!!) . It will call the last run command with alter/tweak in the current
command. Lets show you the scenario
The other day I ran a one-liner script to get my private IP.
Then suddenly I figured out that I needed to redirect the output of the above script to a file ip.txt, so what should I do? Should
I retype the whole command again and redirect the output to a file? An easy solution is to press the UP navigation key
and add '> ip.txt' to redirect the output to a file.
As soon as I ran the script, the bash prompt returned an error with the message "bash: ifconfig: command not found".
It was not difficult to guess that I had run this command as a regular user where it should be run as root.
So what's the solution? It is awkward to log in as root and then type the whole command again! Also, the UP navigation key from the
last example doesn't come to the rescue here. So? We need to call "!!" (without quotes), which will call the last command
for that user.
$ su -c "!!" root
Here su switches to the user root, -c runs the specific command as that user, and the most important part,
!!, will be replaced by the last run command, which is substituted here. Yes, you need to provide the root password.
I make use of !! mostly in the following scenarios.
1. When I run an apt-get command as a normal user, I usually get an error saying I don't have permission to execute it:
$ apt-get upgrade && apt-get dist-upgrade
Oops, an error -- don't worry, execute the command below to get it to succeed:
$ su -c "!!"
Same way I do for,
$ service apache2 start
or
$ /etc/init.d/apache2 start
or
$ systemctl start apache2
OOPS User not authorized to carry such task, so I run..
$ su -c 'service apache2 start'
or
$ su -c '/etc/init.d/apache2 start'
or
$ su -c 'systemctl start apache2'
7. Run a command that affects all files except ![FILE_NAME]
The ! (logical NOT) can be used to run a command on all files/extensions except the one named after '!'.
A. Remove all the files from a directory except the one named 2.txt:
$ rm !(2.txt)
B. Remove all file types from the folder except the ones whose extension is pdf.
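Assuming extended globbing is enabled in bash (shopt -s extglob), the corresponding command would look like this:
$ rm !(*.pdf)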
I didn't see a mention of historical context in the article, so I'll give some here in the comments. This form of history command
substitution originated with the C Shell (csh), created by Bill Joy for the BSD flavor of UNIX back in the late 70's. It was later
carried into tcsh, and bash (Bourne-Again SHell).
Personally, I've always preferred the C-shell history substitution mechanism, and never really took to the fc command (that
I first encountered in the Korne shell).
4th command. You can access it much more simply. These are actually like regular-expression anchors:
^ -- refers to the beginning (the first argument)
$ -- refers to the end (the last argument)
:number -- refers to the argument with that number
Examples:
touch a.txt b.txt c.txt
echo !^ -> displays the first parameter
echo !:1 -> also displays the first parameter
echo !:2 -> displays the second parameter
echo !:3 -> displays the third parameter
echo !$ -> displays the last (in our case 3rd) parameter
echo !* -> displays all parameters
I think (5) works differently than you pointed out, and the redirection to /dev/null hides it, but zsh still prints the command.
When you invoke "! ls", it always picks the last ls command you executed and just appends your switches at the end (after /dev/null).
One extra cool thing is the !# operator, which picks arguments from current line. Particularly good if you need to retype long
path names you already typed in current line. Just say, for example
cp /some/long/path/to/file.abc !#:1
And press tab. It's going to replace last argument with entire path and file name.
For the first part of your feedback: it doesn't just pick the last command executed, and to prove this we have used 4 different
switches for the same command ($ ! ls, $ ! ls -l, $ ! ls -la, $ ! ls -lA). Now you may check it by entering the keywords in any
order; in each case it will output the same result.
As far as it not working in zsh as expected: I have already mentioned that I have tested this on bash, and most of these
won't work in other shells.
The majority of Linux
distributions have adopted systemd, and with it comes the systemd-timesyncd daemon. That
means you have an NTP client already preinstalled, and there is no need to run the full-fledged
ntpd daemon anymore. The built-in systemd-timesyncd can do the basic time synchronization job
just fine.
To check the current status of time and time configuration via timedatectl and timesyncd,
run the following command.
timedatectl status
Local time: Thu 2021-05-13 15:44:11 UTC
Universal time: Thu 2021-05-13 15:44:11 UTC
RTC time: Thu 2021-05-13 15:44:10
Time zone: Etc/UTC (UTC, +0000)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
If you see NTP service: active in the output, then your computer clock is
automatically periodically adjusted through NTP.
If you see NTP service: inactive , run the following command to enable NTP time
synchronization.
timedatectl set-ntp true
That's all you have to do. Once that's done, everything should be in place and time should
be kept correctly.
In addition, timesyncd itself is still a normal service, so you can also check its status
in more detail via:
systemctl status systemd-timesyncd
systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/usr/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2021-05-13 18:55:18 EEST; 3min 23s ago
...
If it is disabled, you can start and enable the systemd-timesyncd service like this:
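A sketch of the usual way to do that with systemctl (starts the service now and enables it at boot):
systemctl enable --now systemd-timesyncd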
Before changing your time zone, start by using timedatectl to find out the
currently set time zone.
timedatectl
Local time: Thu 2021-05-13 16:59:32 UTC
Universal time: Thu 2021-05-13 16:59:32 UTC
RTC time: Thu 2021-05-13 16:59:31
Time zone: Etc/UTC (UTC, +0000)
System clock synchronized: yes
NTP service: inactive
RTC in local TZ: no
Now let's list all the available time zones, so you know the exact name of the time zone
you'll use on your system.
timedatectl list-timezones
The list of time zones is quite large. You do need to know the official time-zone name for
your location. Say you want to change the time zone to New York.
timedatectl set-timezone America/New_York
This command creates a symbolic link for the time zone you choose from
/usr/share/zoneinfo/ to /etc/localtime .
In addition, you can skip the command shown above, create this symbolic link manually and
achieve the same result.
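A sketch of that manual equivalent for the New York example above:
ln -sf /usr/share/zoneinfo/America/New_York /etc/localtime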
Cloud pricing can be quite obscure, especially for users who have not spent significant
time understanding the cost that each cloud service attracts.
Pricing models from major cloud providers
such as AWS and Microsoft Azure are not as straightforward as on-premises costs. You
simply won't get a clear mapping of exactly what you will pay for the infrastructure.
Posted by msmash on Monday May 17, 2021 @12:02PM from the how-about-that dept.
Microsoft is launching the personal version of Microsoft Teams today. After previewing the service nearly
a year ago, Microsoft Teams is now
available for free personal use amongst friends and families. From a report:
The service itself is almost identical to the Microsoft Teams that businesses use, and
it will allow people to chat, video call, and share calendars, locations, and files easily.
Microsoft is also continuing to offer everyone free 24-hour video calls that it introduced in
the preview version in November.
You'll be able to meet up with up to 300 people in video calls that can last for 24
hours. Microsoft will eventually enforce limits of 60 minutes for group calls of up to 100
people after the pandemic, but keep 24 hours for 1:1 calls.
While the preview initially launched on iOS and Android, Microsoft Teams for personal
use now works across the web, mobile, and desktop apps. Microsoft is also allowing Teams
personal users to enable its Together mode -- a feature that uses AI to segment your face and
shoulders and place you together with other people in a virtual space. Skype got this same
feature back in December.
If you have to delete the fourth line from the file, then you substitute N=4.
$ sed '4d' testfile.txt
How to Delete the First and Last Line from a File
You can delete the first line from a file using the same syntax as described in the previous example. Put N=1, which
will remove the first line.
$ sed '1d' testfile.txt
To delete the last line from a file, use the below command with the $ sign, which denotes the last line of a file.
$ sed '$d' testfile.txt
How to Delete a Range of Lines from a File
You can delete a range of lines from a file. Let's say you want to delete lines 3 to 5; here M is the starting line number
and N is the ending line number.
$ sed 'M,Nd' testfile.txt
To actually delete that range, run:
$ sed '3,5d' testfile.txt
You can use the ! symbol to negate the delete operation. This will delete all lines except the given range (3-5).
$ sed '3,5!d' testfile.txt
How to Delete Blank Lines from a File
To delete all blank lines from a file, run the following command. An important point to note is that with this command, empty lines
that contain spaces will not be deleted. I have added empty lines and empty lines with spaces to my test file.
$ cat testfile.txt
First line
second line
Third line
Fourth line
Fifth line
Sixth line
SIXTH LINE
$ sed '/^$/d' testfile.txt
From the output you can see the empty lines are deleted, but lines that contain only spaces are not. To delete all blank lines including
those with spaces, you can run the following command.
$ sed '/^[[:space:]]*$/d' testfile.txt
How to Delete Lines Starting or Ending with a Word in a File
To delete a line that starts with a certain word, run the following command with the ^ symbol, which marks the start of the line,
followed by the actual word.
$ sed '/^First/d' testfile.txt
To delete a line that ends with a certain word, put the word followed by the $ symbol, which marks the end of the line.
$ sed '/LINE$/d' testfile.txt
How to Make Changes Directly in a File
To make the changes directly in the file using sed, you have to pass the -i flag, which edits the file in place.
$ sed -i '/^[[:space:]]*$/d' testfile.txt
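With GNU sed you can also give -i a suffix, which keeps a backup copy of the original file before editing it in place (an aside beyond the original walkthrough):
$ sed -i.bak '/^[[:space:]]*$/d' testfile.txt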
We have come to the end of the article. The sed command will play a major part when you are working on manipulating any files.
When combined with other Linux utilities like awk and grep, you can do even more things with sed.
Gangs have been operating by registering accounts on selected platforms, signing up for
a free tier, and running a cryptocurrency mining app on the provider's free tier
infrastructure.
After trial periods or free credits reach their limits, the groups register a new
account and start from the first step, keeping the provider's servers at their upper usage
limit and slowing down their normal operations...
The list of services that have been abused this way includes the likes of GitHub,
GitLab, Microsoft Azure, TravisCI, LayerCI, CircleCI, Render, CloudBees CodeShip, Sourcehut,
and Okteto.
GitLab and
Sourcehut have
published blog posts detailing their efforts to curtail the problem, with Sourcehut
complaining cryptocurrency miners are "deliberately circumventing our abuse detection," which
"exhausts our resources and leads to long build queues for normal users."
In the article an engineer at CodeShip acknowledges "Our team has been swamped with
dealing with this kind of stuff."
You can achieve the same result by replacing the backticks with the $ parens, like in the example below:
$ echo "There are $(ls | wc -l) files in this directory"
There are 3 files in this directory
Here's another example, still very simple but a little more realistic. I need to troubleshoot something in my network connections,
so I decide to show my total and waiting connections minute by minute.
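The exact commands from the original post are not reproduced here, but the loop looked roughly like this (a sketch; the netstat filters and the one-minute interval are illustrative), first with backticks and then with the $ parens:
$ while true; do echo "`date +%R` total: `netstat -an | grep -c ESTABLISHED` waiting: `netstat -an | grep -c TIME_WAIT`"; sleep 60; done
$ while true; do echo "$(date +%R) total: $(netstat -an | grep -c ESTABLISHED) waiting: $(netstat -an | grep -c TIME_WAIT)"; sleep 60; done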
It doesn't seem like a huge difference, right? I just had to adjust the syntax. Well, there are some implications involving
the two approaches. If you are like me, who automatically uses the backticks without even blinking, keep reading.
Deprecation and recommendations
Deprecation sounds like a bad word, and in many cases, it might really be bad.
When I was researching the explanations for the backtick operator, I found some discussions about "are the backtick operators
deprecated?"
The short answer is: Not in the sense of "on the verge of becoming unsupported and stop working." However, backticks should be
avoided and replaced by the $ parens syntax.
The main reasons for that are (in no particular order):
1. Backtick operators can become messy if the internal commands also use backticks.
You will need to escape the internal backticks, and if you have single quotes as part of the commands or part of the results,
reading and troubleshooting the script can become difficult.
If you start thinking about nesting backtick operators inside other backtick operators, things will not work as expected
or not work at all. Don't bother.
2. The $ parens operator is safer and more predictable.
What you code inside the $ parens operator is treated as a shell script. Syntactically it is the same thing as
having that code in a text file, so you can expect that everything you would code in an isolated shell script would work here.
Here are some examples of the behavioral differences between backticks and $ parens:
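The original comparison table is not reproduced in this copy; a couple of quick illustrations (behavior as observed in bash) make the point. Backticks do an extra round of backslash processing, and nesting them requires escaping, while the $ parens do not:
$ echo "`echo \\a`"
a
$ echo "$(echo \\a)"
\a
$ echo `echo \`echo nested\``
nested
$ echo $(echo $(echo nested))
nested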
If you compare the two approaches, it seems logical to think that you should always/only use the $ parens approach.
And you might think that the backtick operators are only used by
sysadmins from an older era .
Well, that might be true, as sometimes I use things that I learned long ago, and in simple situations, my "muscle memory" just
codes it for me. For those ad-hoc commands that you know that do not contain any nasty characters, you might be OK using backticks.
But for anything that is more perennial or more complex/sophisticated, please go with the $ parens approach.
7. Sort the contents of file ' lsl.txt ' on the basis of 2nd column (which represents number
of symbolic links).
$ sort -nk2 lsl.txt
Note: The ' -n ' option in the above example sorts the contents numerically. The ' -n ' option
must be used when you want to sort a file on the basis of a column that contains numerical
values.
8. Sort the contents of file ' lsl.txt ' on the basis of 9th column (which is the name of
the files and folders and is non-numeric).
$ sort -k9 lsl.txt
9. It is not always essential to run the sort command on a file. We can pipe the output of
another command to it directly on the terminal.
$ ls -l /home/$USER | sort -nk5
10. Sort and remove duplicates from the text file tecmint.txt . Check if the duplicate has
been removed or not.
$ cat tecmint.txt
$ sort -u tecmint.txt
Rules so far (what we have observed):
Lines starting with numbers are preferred in the list and lie at the top until otherwise
specified ( -r ).
Lines starting with lowercase letters are preferred in the list and lie at the top until
otherwise specified ( -r ).
Contents are listed on the basis of the occurrence of alphabets in the dictionary until otherwise
specified ( -r ).
The sort command by default treats each line as a string and then sorts it depending upon the
dictionary occurrence of alphabets (numeric preferred; see rule 1) until otherwise
specified. A quick illustration follows.
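A quick illustration of rule 4 and of the -n and -r options discussed above (this uses an ad-hoc list rather than the original test files):
$ printf '3\n10\n2\n' | sort
10
2
3
$ printf '3\n10\n2\n' | sort -n
2
3
10
$ printf '3\n10\n2\n' | sort -nr
10
3
2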
11. Create a third file ' lsla.txt ' at the current location and populate it with the output
of ' ls -lA ' command.
$ ls -lA /home/$USER > /home/$USER/Desktop/tecmint/lsla.txt
$ cat lsla.txt
Those having an understanding of the ' ls ' command know that ' ls -lA' = 'ls -l ' + hidden files.
So most of the contents of these two files would be the same.
12. Sort the contents of two files on standard output in one go.
$ sort lsl.txt lsla.txt
Notice the repetition of files and folders.
13. Now we can see how to sort, merge and remove duplicates from these two files.
$ sort -u lsl.txt lsla.txt
Notice that duplicates have been omitted from the output. Also, you can write the result to a
new file by redirecting the output.
14. We may also sort the contents of a file or the output based upon more than one column.
Sort the output of the ' ls -l ' command on the basis of fields 2 and 5 (numeric) and 9
(non-numeric).
$ ls -l /home/$USER | sort -k2,2n -k5,5n -k9
That's all for now. In the next article we will cover a few more examples of the ' sort '
command in detail.
The ability for a Bash script to handle command line options such as -h to
display help gives you some powerful capabilities to direct the program and modify what it
does. In the case of your -h option, you want the program to print the help text
to the terminal session and then quit without running the rest of the program. The ability to
process options entered at the command line can be added to the Bash script using the
while command in conjunction with the getopts and case
commands.
The getopts command reads any and all options specified at the command line and
creates a list of those options. The while command loops through the list of
options by setting the variable $option for each in the code below. The case
statement is used to evaluate each option in turn and execute the statements in the
corresponding stanza. The while statement will continue to assess the list of
options until they have all been processed or an exit statement is encountered, which
terminates the program.
Be sure to delete the help function call just before the echo "Hello world!" statement so
that the main body of the program now looks like this.
############################################################
############################################################
# Main program #
############################################################
############################################################
############################################################
# Process the input options. Add options as needed. #
############################################################
# Get the options
while getopts ":h" option; do
case $option in
h) # display Help
Help
exit;;
esac
done
echo "Hello world!"
Notice the double semicolon at the end of the exit statement in the case stanza for
-h . A double semicolon is required at the end of each option's stanza to delineate the
end of that option's statements.
Testing is now a little more complex. You need to test your program with several different
options -- and no options -- to see how it responds. First, check to ensure that with no
options it prints "Hello world!" as it should.
[student@testvm1 ~]$ hello.sh
Hello world!
That works, so now test the logic that displays the help text.
[student@testvm1 ~]$ hello.sh -h
Add a description of the script functions here.
Syntax: scriptTemplate [-g|h|t|v|V]
options:
g Print the GPL license notification.
h Print this Help.
v Verbose mode.
V Print software version and exit.
That works as expected, so now try some testing to see what happens when you enter some
unexpected options.
[student@testvm1 ~]$ hello.sh -x
Hello world!
[student@testvm1 ~]$ hello.sh -q
Hello world!
[student@testvm1 ~]$ hello.sh -lkjsahdf
Add a description of the script functions here.
Syntax: scriptTemplate [-g|h|t|v|V]
options:
g Print the GPL license notification.
h Print this Help.
v Verbose mode.
V Print software version and exit.
[student@testvm1 ~]$
Handling invalid options
The program simply ignores any option for which you haven't created a specific response,
without generating any errors. In the last entry, with the -lkjsahdf
options, the program did recognize the "h" in the list and printed the help
text. Testing has shown that one thing that is missing is the ability to handle incorrect input
and terminate the program if any is detected.
You can add another case stanza to the case statement that will match any option for which
there is no explicit match. This general case will match anything you haven't provided a
specific match for. The case statement now looks like this.
while getopts ":h" option; do
case $option in
h) # display Help
Help
exit;;
\?) # Invalid option
echo "Error: Invalid option"
exit;;
esac
done
This bit of code deserves an explanation about how it works. It seems complex but is fairly
easy to understand. The while – done structure defines a loop that executes once for each
option in the getopts – option structure. The ":h" string lists the possible input options that
will be evaluated by the case – esac structure; the leading colon puts getopts into silent
error-reporting mode, so an unrecognized option simply sets the variable to ? instead of
triggering getopts' own error message.
Each option listed must have a corresponding stanza in the case statement. In this case, there
are two. One is the h) stanza which calls the Help procedure. After the Help procedure
completes, execution returns to the next program statement, exit;; which exits from the program
without executing any more code even if some exists. The option processing loop is also
terminated, so no additional options would be checked.
Notice the catch-all match of \? as the last stanza in the case statement. If any options
are entered that are not recognized, this stanza prints a short error message and exits from
the program.
Any additional specific cases must precede the final catch-all. I like to place the case
stanzas in alphabetical order, but there will be circumstances where you want to ensure that a
particular case is processed before certain other ones. The case statement is sequence
sensitive, so be aware of that when you construct yours.
The last statement of each stanza in the case construct must end with the double semicolon (
;; ), which is used to mark the end of each stanza explicitly. This allows those
programmers who like to use explicit semicolons for the end of each statement instead of
implicit ones to continue to do so for each statement within each case stanza.
Test the program again using the same options as before and see how this works now.
The Bash script now looks like this.
#!/bin/bash
############################################################
# Help #
############################################################
Help()
{
# Display Help
echo "Add description of the script functions here."
echo
echo "Syntax: scriptTemplate [-g|h|v|V]"
echo "options:"
echo "g Print the GPL license notification."
echo "h Print this Help."
echo "v Verbose mode."
echo "V Print software version and exit."
echo
}
############################################################
############################################################
# Main program #
############################################################
############################################################
############################################################
# Process the input options. Add options as needed. #
############################################################
# Get the options
while getopts ":h" option; do
case $option in
h) # display Help
Help
exit;;
\?) # Invalid option
echo "Error: Invalid option"
exit;;
esac
done
echo "hello world!"
Be sure to test this version of your program very thoroughly. Use random input and see what
happens. You should also try testing valid and invalid options without using the dash (
- ) in front.
Using options to enter data
First, add a variable and initialize it. Add the two lines -- the "Set variables" comment and the Name assignment -- shown in the segment of
the program below. This initializes the $Name variable to "world" as the default.
<snip>
############################################################
############################################################
# Main program #
############################################################
############################################################
# Set variables
Name="world"
############################################################
# Process the input options. Add options as needed. #
<snip>
Change the last line of the program, the echo command, to this.
echo "hello $Name!"
You will add the logic to input a name in a moment, but first test the program again; the result
should be exactly the same as before. Then add an n: option to the getopts string and a new case
stanza to handle it, so that the option-processing loop looks like this.
# Get the options
while getopts ":hn:" option; do
case $option in
h) # display Help
Help
exit;;
n) # Enter a name
Name=$OPTARG;;
\?) # Invalid option
echo "Error: Invalid option"
exit;;
esac
done
$OPTARG is always the variable name used for each new option argument, no matter how many
there are. You must assign the value in $OPTARG to a variable name that will be used in the
rest of the program. This new stanza does not have an exit statement. This changes the program
flow so that after processing all valid options in the case statement, execution moves on to
the next statement after the case construct.
#!/bin/bash
############################################################
# Help #
############################################################
Help()
{
# Display Help
echo "Add description of the script functions here."
echo
echo "Syntax: scriptTemplate [-g|h|v|V]"
echo "options:"
echo "g Print the GPL license notification."
echo "h Print this Help."
echo "v Verbose mode."
echo "V Print software version and exit."
echo
}
############################################################
############################################################
# Main program #
############################################################
############################################################
# Set variables
Name="world"
############################################################
# Process the input options. Add options as needed. #
############################################################
# Get the options
while getopts ":hn:" option; do
case $option in
h) # display Help
Help
exit;;
n) # Enter a name
Name=$OPTARG;;
\?) # Invalid option
echo "Error: Invalid option"
exit;;
esac
done
echo "hello $Name!"
Be sure to test the help facility and how the program reacts to invalid input to verify that
its ability to process those has not been compromised. If that all works as it should, then you
have successfully learned how to use options and option arguments.
The Bash String Operators -- Kevin Sookocheff, December 11, 2014
A common task in bash programming is to manipulate portions of a string and return the result. bash provides rich
support for these manipulations via string operators. The syntax is not always intuitive so I wanted to use this blog post to serve
as a permanent reminder of the operators.
The string operators are signified with the ${} notation. The operations can be grouped into a few classes. Each
heading in this article describes a class of operation.
Substring Extraction
Extract from a position
${string:position}
Extraction returns a substring of string starting at position and ending at the end of string .
string is treated as an array of characters starting at 0.
> string="hello world"
> echo ${string:1}
ello world
> echo ${string:6}
world
Extract from a position with a length
${string:position:length}
Adding a length returns a substring only as long as the length parameter.
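For example, with string set to "hello world" as above:
> echo ${string:1:3}
ell
> echo ${string:6:3}
wor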
Substring Replacement
Replace first occurrence of word
${variable/pattern/string}
Find the first occurrence of pattern in variable and replace it with string . If
string is null, pattern is deleted from variable . If pattern starts with #
, the match must occur at the beginning of variable . If pattern starts with % , the match
must occur at the end of the variable .
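A few quick examples (using the same string variable as above; these show standard bash behavior and are not taken from the original post):
> string="hello world"
> echo ${string/world/bash}
hello bash
> echo ${string/l/}
helo world
> echo ${string/#hello/goodbye}
goodbye world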
Tilde is a text editor for the console/terminal, which provides an intuitive interface for
people accustomed to GUI environments such as Gnome, KDE and Windows. For example, the
short-cut to copy the current selection is Control-C, and to paste the previously copied text
the short-cut Control-V can be used. As another example, the File menu can be accessed by
pressing Meta-F.
However, being a terminal-based program there are limitations. Not all terminals provide
sufficient information to the client programs to make Tilde behave in the most intuitive way.
When this is the case, Tilde provides work-arounds which should be easy to work with.
The main audience for Tilde is users who normally work in GUI environments, but sometimes
require an editor for a console/terminal environment. This may be because the computer in
question is a server which does not provide a GUI, or is accessed remotely over SSH. Tilde
allows these users to edit files without having to learn a completely new interface, such as vi
or Emacs. A result of this choice is that Tilde will not provide all the fancy features that
Vim or Emacs provide, but only the most used features.
News
Tilde version 1.1.2 released
This release fixes a bug where Tilde would discard lines read before an invalid character
when requested to continue reading.
23-May-2020
Tilde version 1.1.1 released
This release fixes a build failure on C++14 and later compilers.
When you need to split a string in bash, you can use bash's built-in read
command. This command reads a single line of string from stdin, and splits the string on a
delimiter. The split elements are then stored in either an array or separate variables supplied
with the read command. The default delimiter is whitespace characters (' ', '\t',
'\r', '\n'). If you want to split a string on a custom delimiter, you can specify the delimiter
in IFS variable before calling read .
# strings to split
var1="Harry Samantha Bart Amy"
var2="green:orange:black:purple"
# split a string by one or more whitespaces, and store the result in an array
read -a my_array <<< $var1
# iterate the array to access individual split words
for elem in "${my_array[@]}"; do
echo $elem
done
echo "----------"
# split a string by a custom delimiter
IFS=':' read -a my_array2 <<< $var2
for elem in "${my_array2[@]}"; do
echo $elem
done
Harry
Samantha
Bart
Amy
----------
green
orange
black
purple
Remove a Trailing Newline Character from a String in Bash
If you want to remove a trailing newline or carriage return character from a string, you can
use the bash's parameter expansion in the following form.
${string%$var}
This expression implies that if the "string" contains a trailing character stored in "var",
the result of the expression will become the "string" without the character. For example:
# input string with a trailing newline character
input_line=$'This is my example line\n'
# define a trailing character. For carriage return, replace it with $'\r'
character=$'\n'
echo -e "($input_line)"
# remove a trailing newline character
input_line=${input_line%$character}
echo -e "($input_line)"
(This is my example line
)
(This is my example line)
Trim Leading/Trailing Whitespaces from a String in Bash
If you want to remove whitespaces at the beginning or at the end of a string (also known as
leading/trailing whitespaces) from a string, you can use sed command.
my_str=" This is my example string "
# original string with leading/trailing whitespaces
echo -e "($my_str)"
# trim leading whitespaces in a string
my_str=$(echo "$my_str" | sed -e "s/^[[:space:]]*//")
echo -e "($my_str)"
# trim trailing whitespaces in a string
my_str=$(echo "$my_str" | sed -e "s/[[:space:]]*$//")
echo -e "($my_str)"
( This is my example string )
(This is my example string ) ← leading whitespaces removed
(This is my example string) ← trailing whitespaces removed
If you want to stick with bash's built-in mechanisms, the following bash function can get
the job done.
trim() {
local var="$*"
# remove leading whitespace characters
var="${var#"${var%%[![:space:]]*}"}"
# remove trailing whitespace characters
var="${var%"${var##*[![:space:]]}"}"
echo "$var"
}
my_str=" This is my example string "
echo "($my_str)"
my_str=$(trim "$my_str")
echo "($my_str)"
${varname:-word}
If varname exists and isn't null, return its value; otherwise return
word .
Purpose :
Returning a default value if the variable is undefined.
Example :
${count:-0} evaluates to 0 if count is undefined.
${varname:=word}
If varname exists and isn't null, return its value; otherwise set it to
word and then return its value. Positional and special parameters cannot be
assigned this way.
Purpose :
Setting a variable to a default value if it is undefined.
Example :
${count:=0} sets count to 0 if it is undefined.
${varname:?message}
If varname exists and isn't null, return its value; otherwise print
varname : followed by message , and abort the current command or script
(non-interactive shells only). Omitting message produces the default message
parameter null or not set .
Purpose :
Catching errors that result from variables being undefined.
Example :
${count:?"undefined!"} prints "count: undefined!" and exits if count is
undefined.
${varname:+word}
If varname exists and isn't null, return word ; otherwise return
null.
Purpose :
Testing for the existence of a variable.
Example :
${count:+1} returns 1 (which could mean "true") if count is defined.
${varname:offset}
${varname:offset:length}
Performs substring expansion. It returns the substring of $varname starting at
offset and up to length characters. The first character in $varname is position 0.
If length is omitted, the substring starts at offset and continues to the end of
$varname . If offset is less than 0 then the position is taken from the end of
$varname . If varname is @ , the length is the number of positional parameters starting
at parameter offset .
Purpose :
Returning parts of a string (substrings or slices ).
Example :
If count is set to frogfootman , ${count:4} returns footman .
${count:4:4} returns foot .
${variable#pattern}
If the pattern matches the beginning of the variable's value, delete the shortest
part that matches and return the rest.
${variable##pattern}
If the pattern matches the beginning of the variable's value, delete the longest
part that matches and return the rest.
${variable%pattern}
If the pattern matches the end of the variable's value, delete the shortest part
that matches and return the rest.
${variable%%pattern}
If the pattern matches the end of the variable's value, delete the longest part that
matches and return the rest.
${variable/pattern/string}
${variable//pattern/string}
The longest match to pattern in variable is replaced by string
. In the first form, only the first match is replaced. In the second form, all matches
are replaced. If the pattern begins with a # , it must match at the start of the
variable. If it begins with a % , it must match at the end of the variable. If
string is null, the matches are deleted. If variable is @ or * , the
operation is applied to each positional parameter in turn and the expansion is the
resultant list.
The curly-bracket syntax allows for the shell's string operators . String operators
allow you to manipulate values of variables in various useful ways without having to write
full-blown programs or resort to external UNIX utilities. You can do a lot with string-handling
operators even if you haven't yet mastered the programming features we'll see in later
chapters.
In particular, string operators let you do the following:
Ensure that variables exist (i.e., are defined and have non-null values)
Set default values for variables
Catch errors that result from variables not being set
Remove portions of variables' values that match patterns
The basic idea behind the syntax of string operators is that special characters that denote
operations are inserted between the variable's name and the right curly brackets. Any argument
that the operator may need is inserted to the operator's right.
The first group of string-handling operators tests for the existence of variables and allows
substitutions of default values under certain conditions. These are listed in Table
4.1 . [6]
[6] The colon ( : ) in each of these operators is actually optional. If the
colon is omitted, then change "exists and isn't null" to "exists" in each definition, i.e.,
the operator tests for existence only.
${varname:-word}
If varname exists and isn't null, return its value; otherwise return word .
Purpose :
Returning a default value if the variable is undefined.
Example :
${count:-0} evaluates to 0 if count is undefined.
${varname:=word}
If varname exists and isn't null, return its value; otherwise set it to
word and then return its value.[7]
Purpose :
Setting a variable to a default value if it is undefined.
Example :
${count:=0} sets count to 0 if it is undefined.
${varname:?message}
If varname exists and isn't null, return its value; otherwise print
varname: followed by message , and abort the current command or
script. Omitting message produces the default message parameter null or not
set .
Purpose :
Catching errors that result from variables being undefined.
Example :
${count:?"undefined!"} prints
"count: undefined!" and exits if count is undefined.
${varname:+word}
If varname exists and isn't null, return word ; otherwise return
null.
Purpose :
Testing for the existence of a variable.
Example :
${count:+1} returns 1 (which could mean "true") if count is defined.
[7] Pascal, Modula, and Ada programmers may find it helpful to recognize the similarity of
this to the assignment operators in those languages.
The first two of these operators are ideal for setting defaults for command-line arguments
in case the user omits them. We'll use the first one in our first programming task.
Task
4.1
You have a large album collection, and you want to write some software to keep track of
it. Assume that you have a file of data on how many albums you have by each artist. Lines in
the file look like this:
14 Bach, J.S.
1 Balachander, S.
21 Beatles
6 Blakey, Art
Write a program that prints the N highest lines, i.e., the N artists by whom
you have the most albums. The default for N should be 10. The program should take one
argument for the name of the input file and an optional second argument for how many lines to
print.
By far the best approach to this type of script is to use built-in UNIX utilities, combining
them with I/O redirectors and pipes. This is the classic "building-block" philosophy of UNIX
that is another reason for its great popularity with programmers. The building-block technique
lets us write a first version of the script that is only one line long:
sort -nr $1 | head -${2:-10}
Here is how this works: the sort (1) program sorts the data in the file whose name is
given as the first argument ( $1 ). The -n option tells sort to interpret
the first word on each line as a number (instead of as a character string); the -r tells
it to reverse the comparisons, so as to sort in descending order.
The output of sort is piped into the head (1) utility, which, when given the
argument -N , prints the first N lines of its input on the standard
output. The expression -${2:-10} evaluates to a dash ( - ) followed by the second
argument if it is given, or to -10 if it's not; notice that the variable in this expression is
2 , which is the second positional parameter.
Assume the script we want to write is called highest . Then if the user types
highest myfile , the line that actually runs is:
sort -nr myfile | head -10
Or if the user types highest myfile 22 , the line that runs is:
sort -nr myfile | head -22
Make sure you understand how the :- string operator provides a default value.
This is a perfectly good, runnable script-but it has a few problems. First, its one line is
a bit cryptic. While this isn't much of a problem for such a tiny script, it's not wise to
write long, elaborate scripts in this manner. A few minor changes will make the code more
readable.
First, we can add comments to the code; anything between # and the end of a line is a
comment. At a minimum, the script should start with a few comment lines that indicate what the
script does and what arguments it accepts. Second, we can improve the variable names by
assigning the values of the positional parameters to regular variables with mnemonic names.
Finally, we can add blank lines to space things out; blank lines, like comments, are ignored.
Here is a more readable version:
#
# highest filename [howmany]
#
# Print howmany highest-numbered lines in file filename.
# The input file is assumed to have lines that start with
# numbers. Default for howmany is 10.
#
filename=$1
howmany=${2:-10}
sort -nr $filename | head -$howmany
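Using the sample album data shown earlier, and assuming it has been saved in a file called albums (a name chosen here for illustration), a run of the script would look like this:
$ highest albums 3
21 Beatles
14 Bach, J.S.
6 Blakey, Art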
The square brackets around howmany in the comments adhere to the convention in UNIX
documentation that square brackets denote optional arguments.
The changes we just made improve the code's readability but not how it runs. What if the
user were to invoke the script without any arguments? Remember that positional parameters
default to null if they aren't defined. If there are no arguments, then $1 and $2
are both null. The variable howmany ( $2 ) is set up to default to 10, but there
is no default for filename ( $1 ). The result would be that this command
runs:
sort -nr | head -10
As it happens, if sort is called without a filename argument, it expects input to
come from standard input, e.g., a pipe (|) or a user's terminal. Since it doesn't have the
pipe, it will expect the terminal. This means that the script will appear to hang! Although you
could always type [CTRL-D] or [CTRL-C] to get out of the script, a naive
user might not know this.
Therefore we need to make sure that the user supplies at least one argument. There are a few
ways of doing this; one of them involves another string operator. We'll replace the line:
filename=$1
with:
filename=${1:?"filename missing."}
This will cause two things to happen if a user invokes the script without any arguments:
first the shell will print the somewhat unfortunate message:
highest: 1: filename missing.
to the standard error output. Second, the script will exit without running the remaining
code.
With a somewhat "kludgy" modification, we can get a slightly better error message. Consider
this code:
filename=$1
filename=${filename:?"missing."}
This results in the message:
highest: filename: missing.
(Make sure you understand why.) Of course, there are ways of printing whatever message is
desired; we'll find out how in Chapter 5 .
Before we move on, we'll look more closely at the two remaining operators in Table
4.1 and see how we can incorporate them into our task solution. The := operator does
roughly the same thing as :- , except that it has the "side effect" of setting the value
of the variable to the given word if the variable doesn't exist.
Therefore we would like to use := in our script in place of :- , but we can't;
we'd be trying to set the value of a positional parameter, which is not allowed. But if we
replaced:
howmany=${2:-10}
with just:
howmany=$2
and moved the substitution down to the actual command line (as we did at the start), then we
could use the := operator:
sort -nr $filename | head -${howmany:=10}
Using := has the added benefit of setting the value of howmany to 10 in case
we need it afterwards in later versions of the script.
The final substitution operator is :+ . Here is how we can use it in our example:
Let's say we want to give the user the option of adding a header line to the script's output.
If he or she types the option -h , then the output will be preceded by the line:
ALBUMS ARTIST
Assume further that this option ends up in the variable header , i.e., $header
is -h if the option is set or null if not. (Later we will see how to do this without
disturbing the other positional parameters.)
The expression:
${header:+"ALBUMS ARTIST\n"}
yields null if the variable header is null, or ALBUMS  ARTIST\n if
it is non-null. This means that we can put the line:
print -n ${header:+"ALBUMS ARTIST\n"}
right before the command line that does the actual work. The -n option to
print causes it not to print a LINEFEED after printing its arguments. Therefore
this print statement will print nothing-not even a blank line-if header is null;
otherwise it will print the header line and a LINEFEED (\n).
We'll continue refining our solution to Task 4-1 later in this chapter. The next type of
string operator is used to match portions of a variable's string value against patterns
. Patterns, as we saw in Chapter 1 are strings that can
contain wildcard characters ( * , ? , and [] for character
sets and ranges).
Wildcards have been standard features of all UNIX shells going back (at least) to the
Version 6 Bourne shell. But the Korn shell is the first shell to add to their capabilities. It
adds a set of operators, called regular expression (or regexp for short)
operators, that give it much of the string-matching power of advanced UNIX utilities like
awk (1), egrep (1) (extended grep (1)) and the emacs editor, albeit
with a different syntax. These capabilities go beyond those that you may be used to in other
UNIX utilities like grep , sed (1) and vi (1).
Advanced UNIX users will find the Korn shell's regular expression capabilities occasionally
useful for script writing, although they border on overkill. (Part of the problem is the
inevitable syntactic clash with the shell's myriad other special characters.) Therefore we
won't go into great detail about regular expressions here. For more comprehensive information,
the "last word" on practical regular expressions in UNIX is sed & awk , an O'Reilly
Nutshell Handbook by Dale Dougherty. If you are already comfortable with awk or
egrep , you may want to skip the following introductory section and go to "Korn Shell
Versus awk/egrep Regular Expressions" below, where we explain the shell's regular expression
mechanism by comparing it with the syntax used in those two utilities. Otherwise, read
on.
Think of regular expressions as strings that match patterns more powerfully than the
standard shell wildcard schema. Regular expressions began as an idea in theoretical computer
science, but they have found their way into many nooks and crannies of everyday, practical
computing. The syntax used to represent them may vary, but the concepts are very much the
same.
A shell regular expression can contain regular characters, standard wildcard characters, and
additional operators that are more powerful than wildcards. Each such operator has the form
x(exp) , where x is the particular operator and exp is
any regular expression (often simply a regular string). The operator determines how many
occurrences of exp a string that matches the pattern can contain. See Table 4.2 and
Table 4.3 .
Regular expressions are extremely useful when dealing with arbitrary text, as you already
know if you have used grep or the regular-expression capabilities of any UNIX editor.
They aren't nearly as useful for matching filenames and other simple types of information with
which shell users typically work. Furthermore, most things you can do with the shell's regular
expression operators can also be done (though possibly with more keystrokes and less
efficiency) by piping the output of a shell command through grep or egrep .
Nevertheless, here are a few examples of how shell regular expressions can solve
filename-listing problems. Some of these will come in handy in later chapters as pieces of
solutions to larger tasks.
The emacs editor supports customization files whose names end in .el (for
Emacs LISP) or .elc (for Emacs LISP Compiled). List all emacs customization
files in the current directory.
In a directory of C source code, list all files that are not necessary. Assume that
"necessary" files end in .c or .h , or are named Makefile or
README .
Filenames in the VAX/VMS operating system end in a semicolon followed by a version
number, e.g., fred.bob;23 . List all VAX/VMS-style filenames in the current
directory.
Here are the solutions:
In the first of these, we are looking for files that end in .el with an optional
c . The expression that matches this is *.el?(c) .
The second example depends on the four standard subexpressions *.c ,
*.h , Makefile , and README . The entire expression is
!(*.c|*.h|Makefile|README) , which matches
anything that does not match any of the four possibilities.
The solution to the third example starts with *\; :
the shell wildcard * followed by a backslash-escaped semicolon. Then, we could
use the regular expression +([0-9]) , which matches one or more characters in the
range [0-9] , i.e., one or more digits. This is almost correct (and probably close
enough), but it doesn't take into account that the first digit cannot be 0. Therefore the
correct expression is *\;[1-9]*([0-9]) , which
matches anything that ends with a semicolon, a digit from 1 to 9, and zero or more
digits from 0 to 9.
Regular expression operators are an interesting addition to the Korn shell's features, but
you can get along well without them-even if you intend to do a substantial amount of shell
programming.
In our opinion, the shell's authors missed an opportunity to build into the wildcard
mechanism the ability to match files by type (regular, directory, executable, etc., as
in some of the conditional tests we will see in Chapter 5 ) as well as by name
component. We feel that shell programmers would have found this more useful than arcane regular
expression operators.
The following section compares Korn shell regular expressions to analogous features in
awk and egrep . If you aren't familiar with these, skip to the section entitled
"Pattern-matching Operators."
These equivalents are close but not quite exact. Actually, an exp within any of the
Korn shell operators can be a series of exp1 | exp2 |... alternates. But because
the shell would interpret an expression like dave|fred|bob as a pipeline of commands,
you must use @(dave|fred|bob) for alternates by themselves.
For example:
@(dave|fred|bob) matches dave , fred , or bob .
*(dave|fred|bob) means, "0 or more occurrences of dave ,
fred , or bob ". This expression matches strings like the null string,
dave , davedave , fred , bobfred , bobbobdavefredbobfred ,
etc.
+(dave|fred|bob) matches any of the above except the null string.
?(dave|fred|bob) matches the null string, dave , fred , or
bob .
!(dave|fred|bob) matches anything except dave , fred , or bob
.
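As a side note not found in the original Korn shell text: bash provides these same alternation operators once the extglob shell option is enabled, so the earlier filename-listing examples also work there. For example:
$ shopt -s extglob
$ ls *.el?(c)
$ ls !(*.c|*.h|Makefile|README)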
It is worth re-emphasizing that shell regular expressions can still contain standard shell
wildcards. Thus, the shell wildcard ? (match any single character) is the equivalent to
. in egrep or awk , and the shell's character set operator [ ...
] is the same as in those utilities. [9] For example, the expression +([0-9])
matches a number, i.e., one or more digits. The shell wildcard character * is
equivalent to the shell regular expression *(?) .
[9] And, for that matter, the same as in grep , sed , ed , vi
, etc.
A few egrep and awk regexp operators do not have equivalents in the Korn
shell. These include:
The beginning- and end-of-line operators ^ and $ .
The beginning- and end-of-word operators \< and \> .
Repeat factors like \{N\} and \{M,N\} .
The first two pairs are hardly necessary, since the Korn shell doesn't normally operate on
text files and does parse strings into words itself.
${variable#pattern}
If the pattern matches the beginning of the variable's value, delete the shortest part
that matches and return the rest.
${variable##pattern}
If the pattern matches the beginning of the variable's value, delete the longest part
that matches and return the rest.
${variable%pattern}
If the pattern matches the end of the variable's value, delete the shortest part that
matches and return the rest.
${variable%%pattern}
If the pattern matches the end of the variable's value, delete the longest part that
matches and return the rest.
These can be hard to remember, so here's a handy mnemonic device: # matches the front
because number signs precede numbers; % matches the rear because percent signs
follow numbers.
The classic use for pattern-matching operators is in stripping off components of pathnames,
such as directory prefixes and filename suffixes. With that in mind, here is an example that
shows how all of the operators work. Assume that the variable path has the value
/home/billr/mem/long.file.name ; then:
The two patterns used here are /*/ , which matches anything between two
slashes, and .* , which matches a dot followed by anything.
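The table of results that originally accompanied this example is missing from this copy; reconstructed from the operator definitions above, the expansions are:
${path#/*/}     billr/mem/long.file.name
${path##/*/}    long.file.name
${path%.*}      /home/billr/mem/long.file
${path%%.*}     /home/billr/mem/long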
We will incorporate one of these operators into our next programming task.
Task
4.2
You are writing a C compiler, and you want to use the Korn shell for your
front-end.[10]
[10] Don't laugh-many UNIX compilers have shell scripts as front-ends.
Think of a C compiler as a pipeline of data processing components. C source code is input to
the beginning of the pipeline, and object code comes out of the end; there are several steps in
between. The shell script's task, among many other things, is to control the flow of data
through the components and to designate output files.
You need to write the part of the script that takes the name of the input C source file and
creates from it the name of the output object code file. That is, you must take a filename
ending in .c and create a filename that is similar except that it ends in .o
.
The task at hand is to strip the .c off the filename and append .o . A single
shell statement will do it:
objname=${filename%.c}.o
This tells the shell to look at the end of filename for .c . If there is a
match, return $filename with the match deleted. So if filename had the value
fred.c , the expression ${filename%.c} would return fred . The .o
is appended to make the desired fred.o , which is stored in the variable objname
.
If filename had an inappropriate value (without .c ) such as fred.a ,
the above expression would evaluate to fred.a.o : since there was no match, nothing is
deleted from the value of filename , and .o is appended anyway. And, if
filename contained more than one dot-e.g., if it were the y.tab.c that is so
infamous among compiler writers-the expression would still produce the desired y.tab.o .
Notice that this would not be true if we used %% in the expression instead of % .
The former operator uses the longest match instead of the shortest, so it would match
.tab.o and evaluate to y.o rather than y.tab.o . So the single % is
correct in this case.
A longest-match deletion would be preferable, however, in the following task.
Task
4.3
You are implementing a filter that prepares a text file for printer output. You want to
put the file's name-without any directory prefix-on the "banner" page. Assume that, in your
script, you have the pathname of the file to be printed stored in the variable
pathname .
Clearly the objective is to remove the directory prefix from the pathname. The following
line will do it:
bannername=${pathname##*/}
This solution is similar to the first line in the examples shown before. If pathname
were just a filename, the pattern */ (anything followed by a slash) would
not match and the value of the expression would be pathname untouched. If
pathname were something like fred/bob , the prefix fred/ would match the
pattern and be deleted, leaving just bob as the expression's value. The same thing would
happen if pathname were something like /dave/pete/fred/bob : since the ##
deletes the longest match, it deletes the entire /dave/pete/fred/ .
If we used #*/ instead of ##*/ ,
the expression would have the incorrect value dave/pete/fred/bob , because the shortest
instance of "anything followed by a slash" at the beginning of the string is just a slash (
/ ).
The construct ${variable##*/} is actually
equivalent to the UNIX utility basename (1). basename takes a pathname as
argument and returns the filename only; it is meant to be used with the shell's command
substitution mechanism (see below). basename is less efficient than
${variable##*/} because it runs in its own separate process
rather than within the shell. Another utility, dirname (1), does essentially the
opposite of basename : it returns the directory prefix only. It is equivalent to the
Korn shell expression ${variable%/*} and is less
efficient for the same reason.
There are two remaining operators on variables. One is ${#varname} , which
returns the length of the value of the variable as a character string. (In Chapter 6 we will see how to
treat this and similar values as actual numbers so they can be used in arithmetic expressions.)
For example, if filename has the value fred.c , then ${#filename} would
have the value 6 . The other operator ( ${#array[*]} ) has to do with array variables, which are also discussed in Chapter 6 .
IBM is notorious for destroying useful information . This article is no longer available from IBM.
Jul 20, 2008
Originally from: IBM DeveloperWorks
How to be a more productive Linux systems administrator
Learn these 10 tricks and you'll be the most powerful Linux systems administrator
in the universe...well, maybe not the universe, but you will need these tips
to play in the big leagues. Learn about SSH tunnels, VNC, password recovery,
console spying, and more. Examples accompany each trick, so you can duplicate
them on your own systems.
The best systems administrators are set apart by their efficiency. And if an
efficient systems administrator can do a task in 10 minutes that would take another
mortal two hours to complete, then the efficient systems administrator should be
rewarded (paid more) because the company is saving time, and time is money, right?
The trick is to prove your efficiency to management. While I won't attempt to
cover that trick in this article, I will give you 10 essential gems from
the lazy admin's bag of tricks. These tips will save you time-and even if you don't
get paid more money to be more efficient, you'll at least have more time to play
Halo.
The newbie states that when he pushes the Eject button on the DVD drive of a
server running a certain Redmond-based operating system, it will eject immediately.
He then complains that, in most enterprise Linux servers, if a process is running
in that directory, then the ejection won't happen. For too long as a Linux administrator,
I would reboot the machine and get my disk on the bounce if I couldn't figure out
what was running and why it wouldn't release the DVD drive. But this is ineffective.
Here's how you find the process that holds your DVD drive and eject it to your
heart's content: First, simulate it. Stick a disk in your DVD drive, open up a terminal,
and mount the DVD drive:
# mount /media/cdrom
# cd /media/cdrom
# while [ 1 ]; do echo "All your drives are belong to us!"; sleep 30; done
Now open up a second terminal and try to eject the DVD drive:
# eject
You'll get a message like:
umount: /media/cdrom: device is busy
Before you free it, let's find out who is using it.
# fuser /media/cdrom
You see the process that was running and, indeed, it is our own fault that we cannot eject
the disk.
Now, if you are root, you can exercise your godlike powers and kill processes:
# fuser -k /media/cdrom
Boom! Just like that, freedom. Now solemnly unmount the drive:
# umount /media/cdrom
Behold! Your terminal looks like garbage. Everything you type looks like you're
looking into the Matrix. What do you do?
You type reset. But wait, you say, typing reset is too
close to typing reboot or shutdown. Your palms start to
sweat-especially if you are doing this on a production machine.
Rest assured: You can do it with the confidence that no machine will be rebooted.
Go ahead, do it:
# reset
Now your screen is back to normal. This is much better than closing the window
and then logging in again, especially if you just went through five machines to
SSH to this machine.
David, the high-maintenance user from product engineering, calls: "I need you
to help me understand why I can't compile supercode.c on these new machines you
deployed."
"Fine," you say. "What machine are you on?"
David responds: " Posh." (Yes, this fictional company has named its five production
servers in honor of the Spice Girls.) OK, you say. You exercise your godlike root
powers and on another machine become David:
# su - david
Then you go over to posh:
# ssh posh
Once you are there, you run:
# screen -S foo
Then you holler at David:
"Hey David, run the following command on your terminal: # screen -x foo."
This will cause your and David's sessions to be joined together in the holy Linux
shell. You can type or he can type, but you'll both see what the other is doing.
This saves you from walking to the other floor and lets you both have equal control.
The benefit is that David can watch your troubleshooting skills and see exactly
how you solve problems.
At last you both see what the problem is: David's compile script hard-coded an
old directory that does not exist on this new server. You mount it, recompile, solve
the problem, and David goes back to work. You then go back to whatever lazy activity
you were doing before.
The one caveat to this trick is that you both need to be logged in as the same
user. Other cool things you can do with the screen command include
having multiple windows and split screens. Read the man pages for more on that.
But I'll give you one last tip while you're in your screen session.
To detach from it and leave it open, type: Ctrl-A D . (I mean, hold
down the Ctrl key and strike the A key. Then push the D key.)
You can then reattach by running the screen -x foo command again.
You forgot your root password. Nice work. Now you'll just have to reinstall the
entire machine. Sadly enough, I've seen more than a few people do this. But it's
surprisingly easy to get on the machine and change the password. This doesn't work
in all cases (like if you made a GRUB password and forgot that too), but here's
how you do it in a normal case with a Cent OS Linux example.
First reboot the system. When it reboots you'll come to the GRUB screen as shown
in Figure 1. Move the arrow key so that you stay on this screen instead of proceeding
all the way to a normal boot.
Use the arrow key again to highlight the line that begins with
kernel,
and press E to edit the kernel parameters. On the edit screen, simply append the number 1 to the
list of arguments, then boot that entry. The system comes up in single-user mode with a root
shell, from which you can run passwd to set a new root password and then reboot normally.
Many times I'll be at a site where I need remote support from someone who is
blocked on the outside by a company firewall. Few people realize that if you can
get out to the world through a firewall, then it is relatively easy to open a hole
so that the world can come into you.
In its crudest form, this is called "poking a hole in the firewall." I'll call
it an SSH back door. To use it, you'll need a machine on the Internet that
you can use as an intermediary.
In our example, we'll call our machine blackbox.example.com. The machine behind
the company firewall is called ginger. Finally, the machine that technical support
is on will be called tech. Figure 4 explains how this is set up.
Check that what you're doing is allowed, but make sure you ask the right
people. Most people will cringe that you're opening the firewall, but what they
don't understand is that it is completely encrypted. Furthermore, someone would
need to hack your outside machine before getting into your company. Instead,
you may belong to the school of "ask-for-forgiveness-instead-of-permission."
Either way, use your judgment and don't blame me if this doesn't go your way.
SSH from ginger to blackbox.example.com with the -R flag. I'll
assume that you're the root user on ginger and that tech will need the root
user ID to help you with the system. With the -R flag, you'll forward
instructions of port 2222 on blackbox to port 22 on ginger. This is how you
set up an SSH tunnel. Note that only SSH traffic can come into ginger: You're
not putting ginger out on the Internet naked.
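The actual command is not reproduced in this copy of the article. Run from ginger, it would look something like this (thedude is the account on blackbox used later in the VNC example; the port numbers follow the description above):
root@ginger:~# ssh -R 2222:localhost:22 thedude@blackbox.example.com
While that session stays open, anyone logged in to blackbox (such as tech, after first SSHing to blackbox) can reach ginger's SSH daemon with ssh -p 2222 root@localhost.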
VNC or virtual network computing has been around a long time. I typically find
myself needing to use it when the remote server has some type of graphical program
that is only available on that server.
For example, suppose in Trick 5, ginger
is a storage server. Many storage devices come with a GUI program to manage the
storage controllers. Often these GUI management tools need a direct connection to
the storage through a network that is at times kept in a private subnet. Therefore,
the only way to access this GUI is to do it from ginger.
You can try SSH'ing to ginger with the -X option and launch it that
way, but many times the bandwidth required is too much and you'll get frustrated
waiting. VNC is a much more network-friendly tool and is readily available for nearly
all operating systems.
Let's assume that the setup is the same as in Trick 5, but you want tech to be
able to get VNC access instead of SSH. In this case, you'll do something similar
but forward VNC ports instead. Here's what you do:
Start a VNC server session on ginger. This is done by running something
like:
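The command itself is missing from this copy; reconstructed from the description that follows, it would be something like:
root@ginger:~# vncserver :99 -geometry 1024x768 -depth 24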
The options tell the VNC server to start up with a resolution of 1024x768
and a pixel depth of 24 bits per pixel. If you are using a really slow connection,
setting the depth to 8 may be a better option. Using :99 specifies the display number, and thus the port,
the VNC server will be accessible from. The VNC protocol starts at 5900 so specifying
:99 means the server is accessible from port 5999.
When you start the session, you'll be asked to specify a password. The user
ID will be the same user that you launched the VNC server from. (In our case,
this is root.)
SSH from ginger to blackbox.example.com forwarding the port 5999 on blackbox
to ginger. This is done from ginger by running the command:
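The command is missing from this copy; following the same pattern as the SSH back door in Trick 5, it would be something like:
root@ginger:~# ssh -R 5999:localhost:5999 thedude@blackbox.example.com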
Once you run this command, you'll need to keep this SSH session open in order
to keep the port forwarded to ginger. At this point if you were on blackbox,
you could now access the VNC session on ginger by just running:
thedude@blackbox:~$ vncviewer localhost:99
That would forward the port through SSH to ginger. But we're interested in
letting tech get VNC access to ginger. To accomplish this, you'll need another
tunnel.
From tech, you open a tunnel via SSH to forward your port 5999 to port 5999
on blackbox. This would be done by running:
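Again the command itself is missing here; a reconstruction consistent with the description that follows would be:
root@tech:~# ssh -L 5999:localhost:5999 thedude@blackbox.example.com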
This time the SSH flag we used was -L, which instead of pushing
5999 to blackbox, pulled from it. Once you are in on blackbox, you'll need to
leave this session open. Now you're ready to VNC from tech!
From tech, VNC to ginger by running the command:
root@tech:~# vncviewer localhost:99
Tech will now have a VNC session directly to ginger.
While the effort might seem like a bit much to set up, it beats flying across
the country to fix the storage arrays. Also, if you practice this a few times, it
becomes quite easy.
Let me add a trick to this trick: If tech were running the Windows operating
system and didn't have a command-line SSH client, then tech could run PuTTY. PuTTY
can be set to forward SSH ports by looking in the options in its sidebar. If the
port were 5902 instead of our example of 5999, then you would enter the corresponding
values in that port-forwarding dialog (shown as Figure 5 in the original article).
Imagine this: Company A has a storage server named ginger and it is being NFS-mounted
by a client node named beckham. Company A has decided they really want to get more
bandwidth out of ginger because they have lots of nodes they want to have NFS mount
ginger's shared filesystem.
The most common and cheapest way to do this is to bond two Gigabit ethernet NICs
together. This is cheapest because usually you have an extra on-board NIC and an
extra port on your switch somewhere.
So they do this. But now the question is: How much bandwidth do they really have?
Gigabit Ethernet has a theoretical limit of 128MBps. Where does that number come
from? Well, 1Gb is 1024Mb, and 1024Mb divided by 8 bits per byte is 128MB, so 128MBps is the wire's ceiling.
A simple way to measure what you actually get is the iperf tool. You'll need to install it on a shared filesystem that both ginger and beckham
can see, or compile and install it on both nodes. I'll compile it in the home directory
of the bob user that is viewable on both nodes:
tar zxvf iperf*gz
cd iperf-2.0.2
./configure -prefix=/home/bob/perf
make
make install
On ginger, run:
# /home/bob/perf/bin/iperf -s -f M
This machine will act as the server and print out performance speeds in MBps.
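The matching client-side step is missing from this copy; on beckham you would then run the standard iperf client against ginger, something like:
# /home/bob/perf/bin/iperf -c ginger -f M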
You'll see output in both screens telling you what the speed is. On a normal
server with a Gigabit Ethernet adapter, you will probably see about 112MBps. This
is normal as bandwidth is lost in the TCP stack and physical cables. By connecting
two servers back-to-back, each with two bonded Ethernet cards, I got about 220MBps.
In reality, what you see with NFS on bonded networks is around 150-160MBps. Still,
this gives you a good indication that your bandwidth is going to be about what you'd
expect. If you see something much less, then you should check for a problem.
I recently ran into a case in which the bonding driver was used to bond two NICs
that used different drivers. The performance was extremely poor, leading to about
20MBps in bandwidth, less than they would have gotten had they not bonded the Ethernet
cards together!
A Linux systems administrator becomes more efficient by using command-line scripting
with authority. This includes crafting loops and knowing how to parse data using
utilities like awk, grep, and sed. There
are many cases where doing so takes fewer keystrokes and lessens the likelihood
of user errors.
For example, suppose you need to generate a new /etc/hosts file for a Linux cluster
that you are about to install. The long way would be to add IP addresses in vi or
your favorite text editor. However, it can be done by taking the already existing
/etc/hosts file and appending the following to it by running this on the command
line:
# P=1; for i in $(seq -w 200); do echo "192.168.99.$P n$i"; P=$(expr $P + 1); done >>/etc/hosts
Two hundred host names, n001 through n200, will then be created with IP addresses
192.168.99.1 through 192.168.99.200. Populating a file like this by hand runs the
risk of inadvertently creating duplicate IP addresses or host names, so this is
a good example of using the built-in command line to eliminate user errors. Please
note that this is done in the bash shell, the default in most Linux distributions.
As another example, let's suppose you want to check that the memory size is the
same in each of the compute nodes in the Linux cluster. In most cases of this sort,
having a distributed or parallel shell would be the best practice, but for the sake
of illustration, here's a way to do this using SSH.
Assume the SSH is set up to authenticate without a password. Then run:
# for num in $(seq -w 200); do ssh n$num free -tm | grep Mem | awk '{print $2}'; done | sort | uniq
A command line like this looks pretty terse. (It can be worse if you put regular
expressions in it.) Let's pick it apart and uncover the mystery.
First you're doing a loop through 001-200. This padding with 0s in the front
is done with the -w option to the seq command. Then you
substitute the num variable to create the host you're going to SSH
to. Once you have the target host, give the command to it. In this case, it's:
free -m | grep Mem | awk '{print $2}'
That command says to:
Use the free command to get the memory size in megabytes.
Take the output of that command and use grep to get the line
that has the string Mem in it.
Take that line and use awk to print the second field, which
is the total memory in the node.
This operation is performed on every node.
Once you have performed the command on every node, the entire output of all 200
nodes is piped (|'d) to the sort command so that all the
memory values are sorted.
Finally, you eliminate duplicates with the uniq command. This command
will result in one of the following cases:
If all the nodes, n001-n200, have the same memory size, then only one number
will be displayed. This is the size of memory as seen by each operating system.
If node memory size is different, you will see several memory size values.
Finally, if the SSH failed on a certain node, then you may see some error
messages.
This command isn't perfect. If you find that a value of memory is different than
what you expect, you won't know on which node it was or how many nodes there were.
Another command may need to be issued for that.
What this trick does give you, though, is a fast way to check for something and
quickly learn if something is wrong. This is its real value: speed to do a quick-and-dirty
check.
Some software prints error messages to the console that may not necessarily show
up on your SSH session. Using the vcs devices can let you examine these. From within
an SSH session, run the following command on a remote server: # cat /dev/vcs1.
This will show you what is on the first console. You can also look at the other
virtual terminals using 2, 3, etc. If a user is typing on the remote system, you'll
be able to see what he typed.
In most data farms, using a remote terminal server, KVM, or even Serial Over
LAN is the best way to view this information; it also provides the additional benefit
of out-of-band viewing capabilities. Using the vcs device provides a fast in-band
method that may be able to save you some time from going to the machine room and
looking at the console.
In Trick 8, you saw an example of using
the command line to get information about the total memory in the system. In this
trick, I'll offer up a few other methods to collect important information from the
system you may need to verify, troubleshoot, or give to remote support.
First, let's gather information about the processor. This is easily done as follows:
# cat /proc/cpuinfo
This command gives you information on the processor speed, quantity, and model.
Using grep in many cases can give you the desired value.
A check that I do quite often is to ascertain the quantity of processors on the
system. So, if I have purchased a dual processor quad-core server, I can run:
# cat /proc/cpuinfo | grep processor | wc -l
I would then expect to see 8 as the value. If I don't, I call up the vendor and
tell them to send me another processor.
Another piece of information I may require is disk information. This can be gotten
with the df command. I usually add the -h flag so that
I can see the output in gigabytes or megabytes. # df -h also shows
how the disk was partitioned.
And to end the list, here's a way to look at the firmware of your system -- a method to get the BIOS level and the firmware on the NIC.
To check the BIOS version, you can run the dmidecode command. Unfortunately, you can't easily grep for the information, so piping it through less is the most practical way to browse it. On my Lenovo T61 laptop, the output looks like this:
# dmidecode | less
...
BIOS Information
    Vendor: LENOVO
    Version: 7LET52WW (1.22 )
    Release Date: 08/27/2007
...
This is much more efficient than rebooting your machine and looking at the POST
output.
To examine the driver and firmware versions of your Ethernet adapter, run
ethtool:
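(The example command and its output are omitted here; ethtool's -i option is the one that reports driver and firmware details, so for a hypothetical interface named eth0 it would be:)
# ethtool -i eth0
The output includes driver, version, and firmware-version lines for the adapter.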
There are thousands of tricks you can learn from someone who's an expert at the command line. The best ways to learn are to:
Work with others. Share screen sessions and watch how others work -- you'll see new approaches to doing things. You may need to swallow your pride and let other people drive, but often you can learn a lot.
Read the man pages. Seriously; reading man pages, even on commands you know
like the back of your hand, can provide amazing insights. For example, did you
know you can do network programming with awk?
Solve problems. As the system administrator, you are always solving problems
whether they are created by you or by others. This is called experience, and
experience makes you better and more efficient.
I hope at least one of these tricks helped you learn something you didn't know.
Essential tricks like these make you more efficient and add to your experience,
but most importantly, tricks give you more free time to do more interesting things,
like playing video games. And the best administrators are lazy because they don't
like to work. They find the fastest way to do a task and finish it quickly so they
can continue in their lazy pursuits.
Vallard Benincosa is a lazy Linux Certified IT professional
working for the IBM Linux Clusters team. He lives in Portland, OR, with
his wife and two kids.
The slogan of the Bropages utility is just get to the point . It is true! The bropages are just like man pages, but they display examples only. As the slogan says, they skip all the explanatory text and give you concise examples for command-line programs. The bropages can be easily installed using gem , so you need Ruby 1.8.7+ installed on your machine for this to work. To install Ruby on Rails in CentOS and Ubuntu, refer to the following guide. After installing gem, all you have to do to install bro pages is:
$ gem install bropages
... The usage is incredibly easy! ...just type:
$ bro find
... The good thing is you can upvote or downvote the examples.
As you see in the above screenshot, we can upvote the first command by entering the following command:
$ bro thanks
You will be asked to enter your Email ID. Enter a valid Email to receive the verification code. Then copy/paste the verification code at the prompt and hit ENTER to submit your upvote. The highest upvoted examples will be shown at the top.
Bropages.org requires an email address verification to do this
What's your email address?
[email protected]
Great! We're sending an email to [email protected]
Please enter the verification code: apHelH13ocC7OxTyB7Mo9p
Great! You're verified! FYI, your email and code are stored locally in ~/.bro
You just gave thanks to an entry for find!
You rock!
Cheat is another useful alternative to man pages to learn Unix commands. It
allows you to create and view interactive Linux/Unix commands cheatsheets on the command-line.
The recommended way to install Cheat is using the Pip package manager.
... ... ...
Cheat usage is trivial.
$ cheat find
You will be presented with the list of available examples of find command:
... ... ...
To view the help section, run:
$ cheat -h
For more details, see the project's GitHub repository.
TLDR is a collection of simplified and community-driven man pages.
Unlike man pages, TLDR pages focus only on practical examples. TLDR can be installed using npm , so you need NodeJS installed on your machine for this to work.
To install NodeJS in Linux, refer to the following guide.
After installing npm, run the following command to install tldr.
$ npm install -g tldr
TLDR clients are also available for Android. Install any one of the below apps from Google Play Store and access the TLDR pages from your Android devices.
There are many TLDR clients available. You can view them all
here
3.1. Usage
To display the documentation of any command, for example find , run:
$ tldr find
You will see the list of available examples of find command.
... To view the list of all commands in the cache, run:
$ tldr --list-all
... To update the local cache, run:
$ tldr -u
Or,
$ tldr --update
To display the help section, run:
Tldr++ is yet another client to access the TLDR pages. Unlike the other
Tldr clients, it is fully interactive .
5. Tealdeer
Tealdeer is a fast, un-official tldr client that allows you to access and
display Linux commands cheatsheets in your Terminal. The developer of Tealdeer claims it is very
fast compared to the official tldr client and other community-supported tldr clients.
6. tldr.jsx web client
The tldr.jsx is a reactive web client for tldr-pages. If you don't want to install anything on your system, you can try this client online from any Internet-enabled device, such as a desktop, laptop, tablet, or smart phone. All you need is a web browser. Open a web browser and navigate to the https://tldr.ostera.io/ page.
7. Navi interactive commandline cheatsheet tool
Navi is an interactive commandline cheatsheet tool written in Rust . Just like the Bro pages, Cheat, and Tldr tools, Navi also provides a list of examples for a given command, skipping all other comprehensive text parts. For more details, check the following link.
I came across this utility recently and I thought that it would be a worthy addition to this list. Say hello to Manly , a complement to man pages. Manly is written in Python , so you can install it using the Pip package manager.
Manly is slightly different from the above utilities. It will not display any examples, and you also need to mention the flags or options along with the commands. For example, the following won't work:
$ manly dpkg
But, if you mention any flag/option of a command, you will get a small description of the given command and its
options.
$ manly dpkg -i -R
$ manly --help
Also take a look at the project's GitHub page.
... one of the most popular (and reliable) ways to back up your data is with Clonezilla. This tool lets you clone your Linux install. With it, you can load a live USB and easily "clone" hard drives, operating systems and more.
Downloading Clonezilla
Clonezilla is available only as a live operating system, and there are multiple versions of the live disk. That being said, we recommend just downloading the ISO file. The stable version of the software is available at Clonezilla.org. On the download page, select your CPU architecture from the dropdown menu (32 bit or 64 bit). Then, click "filetype" and click ISO. After all of that, click the download button.
Making The Live Disk
Regardless of the operating system, the fastest and easiest way to make a Linux live-disk is with the Etcher USB imaging tool. Head over to this page to download it. Follow the instructions on the page, as it will explain the three-step process it takes to make a live disk.
Note: Clonezilla ISO is under 300 MiB in size. As a result, any flash drive with at least 512 MiB of space will work.
Device To Image Cloning
Backing up a Linux installation directly to an image file with Clonezilla is a simple process. To start
off, select the "device-image" option in the Clonezilla menu. On the next page, the software gives a whole lot of different ways
to create the backup.
The hard drive image can be saved to a Samba server, an SSH server, NFS, etc. If you're savvy with any of these, select it.
If you're a beginner, connect a USB hard drive (or mount a second hard drive connected to the PC) and select the "local_dev" option.
Selecting "local_dev" prompts Clonezilla to ask the user to set up a hard drive as the destination for the hard drive menu. Look
through the listing and select the hard drive you'd like to use. Additionally, use the menu selector to choose what directory on
the drive the hard drive image will save to.
With the storage location set up, the process can begin. Clonezilla asks to run the backup wizard. There are two options: "Beginner"
and "Expert". Select "Beginner" to start the process.
On the next page, tell Clonezilla how to save the hard drive. Select "savedisk" to copy the entire hard drive to one file. Select
"saveparts" to backup the drive into separate partition images.
Restoring Backup Images
To restore an image, load Clonezilla and select the "device-image" option. Next, select "local_dev". Use
the menu to select the hard drive previously used to save the hard drive image. In the directory browser, select the same options
you used to create the image.
The /etc/gshadow file contains encrypted, or 'shadowed', passwords for group accounts and, for security reasons, cannot be accessed by regular users. It's only readable by the root user and users with sudo privileges.
$ sudo cat /etc/gshadow
tecmint:!::
From the far left, the file contains the following fields: the group name, the encrypted group password (a ! or * means no password is set), a comma-separated list of group administrators, and a comma-separated list of group members.
I keep happening on these mentions of manufacturing jobs succumbing to automation, and I
can't think of where these people are getting their information.
I work in manufacturing. Production manufacturing, in fact, involving hundreds, thousands,
tens of thousands of parts produced per week. Automation has come a long way, but it also
hasn't. A layman might marvel at the technologies while taking a tour of the factory, but
upon closer inspection, the returns are greatly diminished in the last two decades. Advances
have afforded greater precision, cheaper technologies, but the only reason China is a giant
of manufacturing is because labor is cheap. They automate less than Western factories, not
more, because humans cost next to nothing, but machines are expensive.
Is your server or servers getting old? Have you pushed it to the end of its lifespan? Have you reached that stage where it's time to do something about it? Join the crowd. You're now at that decision point that so many other business people are finding themselves at this year. And the decision is this: do you replace that old server with a new server, or do you go to the cloud?
Everyone's talking about the cloud nowadays so you've got to consider it, right? This could be a great new thing for your company!
You've been told that the cloud enables companies like yours to be more flexible and save on their IT costs. It allows free and
easy access to data for employees from wherever they are, using whatever devices they want to use. Maybe you've seen the recent survey by accounting software maker MYOB that found that small businesses that adopt cloud technologies enjoy higher revenues. Or perhaps you've stumbled on this analysis that said that small businesses are losing money as a result of ineffective IT management that could be much improved by the use of cloud based services. Or the poll of more than 1,200 small businesses by technology reseller CDW which discovered that "cloud users cite cost savings, increased efficiency and greater innovation as key benefits" and that "across all industries, storage and conferencing and collaboration are the top cloud services and applications."
So it's time to chuck that old piece of junk and take your company to the cloud, right? Well just hold on.
There's no question that if you're a startup or a very small company or a company that is virtual or whose employees are distributed
around the world, a cloud based environment is the way to go. Or maybe you've got high internal IT costs or require more computing
power. But maybe that's not you. Maybe your company sells pharmaceutical supplies, provides landscaping services, fixes roofs,
ships industrial cleaning agents, manufactures packaging materials or distributes gaskets. You are not featured in Fast Company and you have not been invited to present at the next Disrupt conference. But you know you represent the very core of small business in America. I know this too. You are just like one of my company's 600 clients. And what are these companies
doing this year when it comes time to replace their servers?
These very smart owners and managers of small and medium sized businesses who have existing applications running on old servers are
not going to the cloud. Instead, they've been buying new servers.
Wait, buying new servers? What about the cloud?
At no less than six of my clients in the past 90 days it was time to replace servers. They had all waited as long as possible,
conserving cash in a slow economy, hoping to get the most out of their existing machines. Sound familiar? But the servers were
showing their age, applications were running slower and now as the companies found themselves growing their infrastructure their old
machines were reaching their limit. Things were getting to a breaking point, and all six of my clients decided it was time for a
change. So they all moved to cloud, right?
Nope. None of them did. None of them chose the cloud. Why? Because all six of these small business owners and managers came to
the same conclusion: it was just too expensive. Sorry media. Sorry tech world. But this is the truth. This is what's happening
in the world of established companies.
Consider the options. All of my clients evaluated cloud based hosting services from Amazon, Microsoft and Rackspace.
They also interviewed a handful of cloud based IT management firms who promised to move their existing applications (Office,
accounting, CRM, databases) to their servers and manage them offsite. All of these popular options are viable and make sense, as
evidenced by their growth in recent years. But when all the smoke cleared, all of these services came in at about the same price:
approximately $100 per month per user. This is what it costs for an existing company to move their existing infrastructure to a
cloud based infrastructure in 2013. We've got the proposals and we've done the analysis.
You're going through the same thought process, so now put yourself in their shoes. Suppose you have maybe 20 people in your company
who need computer access. Suppose you are satisfied with your existing applications and don't want to go through the agony and
enormous expense of migrating to a new cloud based application. Suppose you don't employ a full time IT guy, but have a service
contract with a reliable local IT firm.
Now do the numbers: $100 per month x 20 users is $2,000 per month or $24,000 PER YEAR for a cloud based service. How many servers
can you buy for that amount? Imagine putting that proposal out to an experienced, battle-hardened, profit generating small business
owner who, like all the smart business owners I know, looks hard at the return on investment decision before parting with their cash.
For all six of these clients the decision was a no-brainer: they all bought new servers and had their IT guy install them. But
can't the cloud bring down their IT costs? All six of these guys use their IT guy for maybe half a day a month to support their
servers (sure he could be doing more, but small business owners always try to get away with the minimum). His rate is $150 per
hour. That's still way below using a cloud service.
No one could make the numbers work. No one could justify the return on investment. The cloud, at least for established businesses
who don't want to change their existing applications, is still just too expensive.
Please know that these companies are, in fact, using some cloud-based applications. They all have virtual private networks set up
and their people access their systems over the cloud using remote desktop technologies. Like the respondents in the above surveys,
they subscribe to online backup services, share files on DropBox and Microsoft's file storage, make their calls over Skype, take advantage of Gmail and use collaboration tools like Google Docs or Box. Many of their employees have iPhones and Droids and like to use mobile apps which rely on cloud data to make them more
productive. These applications didn't exist a few years ago and their growth and benefits cannot be denied.
Paul-Henri Ferrand, President of Dell North America, doesn't see this trend continuing. "Many smaller but growing businesses are looking and/or moving to the cloud," he told
me. "There will be some (small businesses) that will continue to buy hardware but I see the trend is clearly toward the cloud. As
more business applications become more available for the cloud, the more likely the trend will continue."
He's right. Over the next few years the costs will come down. Your beloved internal application will become out of date and your
only option will be to migrate to a cloud based application (hopefully provided by the same vendor to ease the transition). Your
technology partners will help you and the process will be easier, and less expensive than today. But for now, you may find it makes
more sense to just buy a new server. It's OK. You're not alone.
GNU Screen's basic usage is simple. Launch it with the screen command, and
you're placed into the zeroeth window in a Screen session. You may hardly notice anything's
changed until you decide you need a new prompt.
When one terminal window is occupied with an activity (for instance, you've launched a text
editor like Vim or Jove ,
or you're processing video or audio, or running a batch job), you can just open a new one. To
open a new window, press Ctrl+A , release, and then press c . This creates a new window on top
of your existing window.
You'll know you're in a new window because your terminal appears to be clear of anything
aside from its default prompt. Your other terminal still exists, of course; it's just hiding
behind the new one. To traverse through your open windows, press Ctrl+A , release, and then n
for next or p for previous . With just two windows open, n and p functionally do
the same thing, but you can always open more windows ( Ctrl+A then c ) and walk through
them.
Split screen
GNU Screen's default behavior is more like a mobile device screen than a desktop: you can
only see one window at a time. If you're using GNU Screen because you love to multitask, being
able to focus on only one window may seem like a step backward. Luckily, GNU Screen lets you
split your terminal into windows within windows.
To create a horizontal split, press Ctrl+A and then s . This places one window above
another, just like window panes. The split space is, however, left unpurposed until you tell it
what to display. So after creating a split, you can move into the split pane with Ctrl+A and
then Tab . Once there, use Ctrl+A then n to navigate through all your available windows until
the content you want to be displayed is in the split pane.
You can also create vertical splits with Ctrl+A then | (that's a pipe character, or the
Shift option of the \ key on most keyboards).
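If you find yourself creating and switching windows often, a few lines in ~/.screenrc make them easier to track. This is only a minimal sketch of common settings, not something from the original article; adjust to taste:
startup_message off              # skip the license splash on launch
defscrollback 10000              # keep more scrollback history per window
hardstatus alwayslastline "%w"   # show the window list on the bottom line
With the window list visible, Ctrl+A then a window number jumps straight to it instead of cycling with n and p.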
The GitHub page of TLDR pages for Linux/Unix describes it as a collection of simplified and
community-driven man pages. It's an effort to make the experience of using man pages simpler
with the help of practical examples. For those who don't know, TLDR is taken from common
internet slang Too Long Didn't Read .
In case you wish to compare, let's take the example of tar command. The usual man page
extends over 1,000 lines. It's an archiving utility that's often combined with a compression
method like bzip or gzip. Take a look at its man page:
On the other hand, TLDR pages lets you simply take a glance at the
command and see how it works. Tar's TLDR page simply looks like this and comes with some handy
examples of the most common tasks you can complete with this utility:
Let's take another example and show you what TLDR pages has to
offer when it comes to apt:
Having shown you how TLDR works and makes your life easier, let's
tell you how to install it on your Linux-based operating system.
How to install and use
TLDR pages on Linux?
The most mature TLDR client is based on Node.js and you can install it easily using NPM
package manager. In case Node and NPM are not available on your system, run the following
command:
sudo apt-get install nodejs
sudo apt-get install npm
In case you're using an OS other than Debian, Ubuntu, or Ubuntu's derivatives, you can use
yum, dnf, or pacman package manager as per your convenience.
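(The article's next step is not shown in this excerpt; once node and npm are in place, installing the client itself would presumably be the same one-liner used elsewhere on this page:)
$ sudo npm install -g tldr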
When we need help in Linux command line, man is usually the first friend we
check for more information. But it became my second line support after I met other
alternatives, e.g. tldr , cheat and eg .
tldr
tldr stands for too long didn't read ; it is a set of simplified and community-driven man pages. Maybe we forget the arguments to a command, or are just not patient enough to read the long man document; here tldr comes in, providing concise information with examples. I even contributed a couple of lines of code myself to help a little bit with the project on Github. It is very easy to install: npm install -g tldr , and there are many clients available for accessing the tldr pages, e.g. install the Python client with pip install tldr .
To display help information, run tldr -h or tldr tldr .
Take curl as an example
tldr++
tldr++ is an interactive tldr client written in Go; I just stole the gif from its official site.
cheat
Similarly, cheat allows you to
create and view interactive cheatsheets on the command-line. It was designed to help remind
*nix system administrators of options for commands that they use frequently, but not frequently
enough to remember. It is written in Golang, so just download the binary and add it into your
PATH .
eg
eg provides useful examples with
explanations on the command line.
So I consult tldr , cheat or eg before I ask
man and Google.
In our daily use of Linux/Unix systems, we use many command-line tools to complete our work
and to understand and manage our systems -- tools like du to monitor disk
utilization and top to show system resources. Some of these tools have existed for
a long time. For example, top was first released in 1984, while du 's
first release dates to 1971.
Over the years, these tools have been modernized and ported to different systems, but, in
general, they still follow their original idea, look, and feel.
These are great tools and essential to many system administrators' workflows. However, in
recent years, the open source community has developed alternative tools that offer additional
benefits. Some are just eye candy, but others greatly improve usability, making them a great
choice to use on modern systems. These include the following five alternatives to the standard
Linux command-line tools.
1. ncdu as a replacement for du
The NCurses Disk Usage ( ncdu ) tool provides similar results to
du but in a curses-based, interactive interface that focuses on the directories
that consume most of your disk space. ncdu spends some time analyzing the disk,
then displays the results sorted by your most used directories or files, like this:
ncdu 1.14.2 ~ Use the arrow keys to navigate, press ? for help
--- /home/rgerardi ------------------------------------------------------------
96.7 GiB [##########] /libvirt
33.9 GiB [### ] /.crc
...
Total disk usage: 159.4 GiB Apparent size: 280.8 GiB Items: 561540
Navigate to each entry by using the arrow keys. If you press Enter on a directory entry,
ncdu displays the contents of that directory:
You can use that to drill down into the directories and find which files are consuming the
most disk space. Return to the previous directory by using the Left arrow key. By default, you
can delete files with ncdu by pressing the d key, and it asks for confirmation
before deleting a file. If you want to disable this behavior to prevent accidents, use the
-r option for read-only access: ncdu -r .
ncdu is available for many platforms and Linux distributions. For example, you
can use dnf to install it on Fedora directly from the official repositories:
$ sudo dnf install ncdu
You can find more information about this tool on the ncdu web page .
2. htop as a replacement for top
htop is an interactive process viewer similar to top but that
provides a nicer user experience out of the box. By default, htop displays the
same metrics as top in a pleasant and colorful display.
In addition, htop provides system overview information at the top and a command
bar at the bottom to trigger commands using the function keys, and you can customize it by
pressing F2 to enter the setup screen. In setup, you can change its colors, add or remove
metrics, or change display options for the overview bar.
While you can configure recent versions of top to achieve similar results,
htop provides saner default configurations, which makes it a nice and easy to use
process viewer.
To learn more about this project, check the htop home page .
3. tldr as a replacement for man
The tldr command-line tool displays simplified command utilization information,
mostly including examples. It works as a client for the community tldr pages project .
This tool is not a replacement for man . The man pages are still the canonical
and complete source of information for many tools. However, in some cases, man is
too much. Sometimes you don't need all that information about a command; you're just trying to
remember the basic options. For example, the man page for the curl command has
almost 3,000 lines. In contrast, the tldr for curl is 40 lines long
and looks like this:
$ tldr curl
# curl
Transfers data from or to a server.
Supports most protocols, including HTTP, FTP, and POP3.
More information: <https://curl.haxx.se>.
- Download the contents of an URL to a file:
curl http://example.com -o filename
- Download a file, saving the output under the filename indicated by the URL:
curl -O http://example.com/filename
- Download a file, following [L]ocation redirects, and automatically [C]ontinuing (resuming) a previous file transfer:
curl -O -L -C - http://example.com/filename
- Send form-encoded data (POST request of type `application/x-www-form-urlencoded`):
curl -d 'name=bob' http://example.com/form
- Send a request with an extra header, using a custom HTTP method:
curl -H 'X-My-Header: 123' -X PUT http://example.com
- Send data in JSON format, specifying the appropriate content-type header:
TLDR stands for "too long; didn't read," which is internet slang for a summary of long text.
The name is appropriate for this tool because man pages, while useful, are sometimes just too
long.
In Fedora, the tldr client was written in Python. You can install it using
dnf . For other client options, consult the tldr pages project .
In general, the tldr tool requires access to the internet to consult the tldr
pages. The Python client in Fedora allows you to download and cache these pages for offline
access.
For more information on tldr , you can use tldr tldr .
4. jq as a replacement for sed/grep for JSON
jq is a command-line JSON processor. It's like sed or
grep but specifically designed to deal with JSON data. If you're a developer or
system administrator who uses JSON in your daily tasks, this is an essential tool in your
toolbox.
The main benefit of jq over generic text-processing tools like
grep and sed is that it understands the JSON data structure, allowing
you to create complex queries with a single expression.
To illustrate, imagine you're trying to find the name of the containers in this JSON
file:
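(The original article's JSON listing and grep command are not reproduced in this excerpt; for illustration, assume a hypothetical file named pod.json shaped like this, and a plain grep for "name" against it:)
{
  "spec": {
    "containers": [
      { "name": "web", "image": "nginx" },
      { "name": "sidecar", "image": "busybox" }
    ]
  }
}
$ grep name pod.json
      { "name": "web", "image": "nginx" },
      { "name": "sidecar", "image": "busybox" }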
grep returned all lines that contain the word name . You can add a
few more options to grep to restrict it and, with some regular-expression
manipulation, you can find the names of the containers. To obtain the result you want with
jq , use an expression that simulates navigating down the data structure, like
this:
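(A jq expression of that shape, run against the hypothetical pod.json above, would be:)
$ jq '.spec.containers[].name' pod.json
"web"
"sidecar"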
This command gives you the name of both containers. If you're looking for only the name of
the second container, add the array element index to the expression:
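(Continuing with the same hypothetical file, indexing the array selects just one container:)
$ jq '.spec.containers[1].name' pod.json
"sidecar"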
Because jq is aware of the data structure, it provides the same results even if
the file format changes slightly. grep and sed may provide different
results with small changes to the format.
jq has many features, and covering them all would require another article. For
more information, consult the jq project page , the man pages, or
tldr jq .
5. fd as a replacement for find
fd is a simple and fast alternative to the find command. It does
not aim to replace the complete functionality find provides; instead, it provides
some sane defaults that help a lot in certain scenarios.
For example, when searching for source-code files in a directory that contains a Git
repository, fd automatically excludes hidden files and directories, including the
.git directory, as well as ignoring patterns from the .gitignore
file. In general, it provides faster searches with more relevant results on the first try.
By default, fd runs a case-insensitive pattern search in the current directory
with colored output. The same search using find requires you to provide additional
command-line parameters. For example, to search all markdown files ( .md or
.MD ) in the current directory, the find command is this:
$ find . -iname "*.md"
Here is the same search with fd :
$ fd .md
In some cases, fd requires additional options; for example, if you want to
include hidden files and directories, you must use the option -H , while this is
not required in find .
fd is available for many Linux distributions. Install it in Fedora using the
standard repositories:
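(The command itself is cut off in this excerpt; on Fedora the package is named fd-find, so presumably:)
$ sudo dnf install fd-find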
Another (fancy looking) alternative for ls is lsd.
Miguel Perez on 25 Jun 2020
Bat instead of cat, ripgrep instead of grep, httpie instead of curl, bashtop instead of htop, autojump instead of cd...
Drto on 25 Jun 2020
ack instead of grep for files. Million times faster.
Gordon Harris on 25 Jun 2020
The yq command line utility is useful too. It's just like jq, except for yaml files and has the ability to convert yaml into json.
Matt howard on 26 Jun 2020
Glances is a great top replacement too
Paul M on 26 Jun 2020
Try "mtr" instead of traceroute
Try "hping2" instead of ping
Try "pigz" instead of gzip
jmtd on 28 Jun 2020
You run a separate "duc index" command to capture disk space usage in a database file and
then can explore the data very quickly with "duc ui" ncurses ui. There's also GUI and web
front-ends that give you a nice graphical pie chart interface.
In my experience the index stage is faster than plain du. You can choose to re-index only
certain folders if you want to update some data quickly without rescanning everything.
wurn on 29 Jun 2020
Imho, jq uses a syntax that's ok for simple queries but quickly becomes horrible when you
need more complex queries. Pjy is a sensible replacement for jq, having an (improved) python
syntax which is familiar to many people and much more readable: https://github.com/hydrargyrum/pjy
Jack Orenstein on 29 Jun 2020
Also along the lines of command-line alternatives, take a look at marcel, which is a modern
shell: https://marceltheshell.org .
The basic idea is to pipe Python values instead of strings, between commands. It integrates
smoothly with host commands (and, presumably, the alternatives discussed here), and also
integrates remote access and database access.
Ricardo Fraile on 05 Jul 2020
"tuptime" instead of "uptime".
It tracks the history of the system, not only the current one.
The Cube on 07 Jul 2020
One downside of all of this is that there are even more things to remember. I learned find,
diff, cat, vi (and ed), grep and a few others starting in 1976 on 6th edition. They have been
enhanced some, over the years (for which I use man when I need to remember), and learned top
and other things as I needed them, but things I did back then still work great now. KISS is
still a "thing". Especially in scripts one is going to use on a wide variety of distributions
or for a long time. These kind of tweaks are fun and all, but add complexity and reduce one's
inter-system mobility. (And don't get me started on systemd 8P).
The replace utility program changes strings in place in files or
on the standard input.
Invoke replace in one of the following ways:
shell> replace from to [from to] ... -- file_name [file_name] ...
shell> replace from to [from to] ... < file_name
from represents a string to look for and to represents its
replacement. There can be one or more pairs of strings.
Use the -- option to indicate where the string-replacement list
ends and the file names begin. In this case, any file named on
the command line is modified in place, so you may want to make a
copy of the original before converting it. replace prints a
message indicating which of the input files it actually modifies.
If the -- option is not given, replace reads the standard input
and writes to the standard output.
replace uses a finite state machine to match longer strings
first. It can be used to swap strings. For example, the following
command swaps a and b in the given files, file1 and file2:
shell> replace a b b a -- file1 file2 ...
The replace program is used by msql2mysql. See msql2mysql(1).
replace supports the following options.
• -?, -I
Display a help message and exit.
• -#debug_options
Enable debugging.
• -s
Silent mode. Print less information about what the program does.
• -v
Verbose mode. Print more information about what the program
does.
• -V
Display version information and exit.
Copyright 2007-2008 MySQL AB, 2008-2010 Sun Microsystems, Inc.,
2010-2015 MariaDB Foundation
This documentation is free software; you can redistribute it
and/or modify it only under the terms of the GNU General Public
License as published by the Free Software Foundation; version 2
of the License.
This documentation is distributed in the hope that it will be
useful, but WITHOUT ANY WARRANTY; without even the implied
warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with the program; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
02110-1335 USA or see http://www.gnu.org/licenses/.
This page is part of the MariaDB (MariaDB database server)
project. Information about the project can be found at
⟨http://mariadb.org/⟩. If you have a bug report for this manual
page, see ⟨https://mariadb.com/kb/en/mariadb/reporting-bugs/⟩.
Eg is a free, open source program written in the Python language, and the code is freely available on GitHub. For those wondering, eg comes from the Latin phrase "exempli gratia", which literally means "for the sake of example" in English. Exempli gratia is known by its abbreviation, e.g., in English-speaking countries.
Install Eg in Linux
Eg can be installed using Pip package manager. If Pip is not available in your system, install it as described in the below link.
After installing Pip, run the following command to install eg on your Linux system:
$ pip install eg
Display Linux commands cheatsheets using Eg
Let us start by displaying the help section of eg program. To do so, run eg without any options:
$ eg
Sample output:
usage: eg [-h] [-v] [-f CONFIG_FILE] [-e] [--examples-dir EXAMPLES_DIR]
[-c CUSTOM_DIR] [-p PAGER_CMD] [-l] [--color] [-s] [--no-color]
[program]
eg provides examples of common command usage.
positional arguments:
program The program for which to display examples.
optional arguments:
-h, --help show this help message and exit
-v, --version Display version information about eg
-f CONFIG_FILE, --config-file CONFIG_FILE
Path to the .egrc file, if it is not in the default
location.
-e, --edit Edit the custom examples for the given command. If
editor-cmd is not set in your .egrc and $VISUAL and
$EDITOR are not set, prints a message and does
nothing.
--examples-dir EXAMPLES_DIR
The location to the examples/ dir that ships with eg
-c CUSTOM_DIR, --custom-dir CUSTOM_DIR
Path to a directory containing user-defined examples.
-p PAGER_CMD, --pager-cmd PAGER_CMD
String literal that will be invoked to page output.
-l, --list Show all the programs with eg entries.
--color Colorize output.
-s, --squeeze Show fewer blank lines in output.
--no-color Do not colorize output.
You can also bring the help section using this command too:
$ eg --help
Now let us see how to view example commands usage.
To display cheatsheet of a Linux command, for example grep , run:
$ eg grep
Sample output:
grep
print all lines containing foo in input.txt
grep "foo" input.txt
print all lines matching the regex "^start" in input.txt
grep -e "^start" input.txt
print all lines containing bar by recursively searching a directory
grep -r "bar" directory
print all lines containing bar ignoring case
grep -i "bAr" input.txt
print 3 lines of context before and after each line matching "foo"
grep -C 3 "foo" input.txt
Basic Usage
Search each line in input_file for a match against pattern and print
matching lines:
grep "<pattern>" <input_file>
[...]
The 109-year-old firm is preparing to split itself into two public companies, with the
namesake firm narrowing its focus on the so-called hybrid cloud, where it sees a $1 trillion
market opportunity.
Before using the locate command you should check if it is installed on your machine. The locate command comes with the GNU findutils or GNU mlocate packages. You can simply run the following command to check if locate is installed or not.
$ which locate
If locate is not installed by default, you can run the following commands to install it.
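(The distribution-specific commands are not included in this excerpt; locate is typically provided by the mlocate package, so depending on your package manager it would be something like:)
$ sudo apt install mlocate        # Debian/Ubuntu
$ sudo yum install mlocate        # RHEL/CentOS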
Once the installation is completed, you need to run the following command to update the locate database. A fresh database is what makes your results come back quickly when you use the locate command to find files in Linux.
$ sudo updatedb
The mlocate db file is located at /var/lib/mlocate/mlocate.db.
$ ls -l /var/lib/mlocate/mlocate.db
A good place to start and get to know the locate command is its man page.
$ man locate
How to Use locate Command to Find Files Faster in Linux
To search for any file, simply pass the file name as an argument to the locate command.
$ locate .bashrc
If you wish to see how many items matched instead of printing their locations, you can pass the -c flag.
$ sudo locate -c .bashrc
By default the locate command is case sensitive. You can make the search case insensitive by using the -i flag.
$ sudo locate -i file1.sh
You can limit the number of search results by using the -n flag.
$ sudo locate -n 3 .bashrc
When you delete a file and did not update the mlocate database, the deleted file will still show up in the output. You now have two options: either update the mlocate db periodically, or use the -e flag, which skips files that no longer exist.
$ locate -i -e file1.sh
You can check the statistics of the mlocate database by running the following command.
$ locate -S
If your db file is in a different location, you may want to use the -d flag followed by the mlocate db path and the filename to be searched for.
$ locate -d [ DB PATH ] [ FILENAME ]
Sometimes you may encounter an error; you can suppress the error messages by running the command with the -q flag.
$ locate -q [ FILENAME ]
That's it for this article. We have shown you all the basic operations you can do with the locate command. It will be a handy tool for you when working on the command line.
Brian Sozzi
Editor-at-Large
Mon, April 12, 2021, 12:54 PM
West Virginia is opening up its arms -- and importantly its wallet -- to lure in those likely to be working from home for some time after the COVID-19 pandemic.
The state announced on Monday it would give people $12,000 cash with no strings attached to move to its confines. Also
included is one year of free recreation at the state's various public lands, which it values at $2,500. Once all the
particulars of the plan are added up, West Virginia says the total value to a person is $20,000.
The initiative is being made possible after a $25 million donation from Intuit's executive chairman (and former long-time CEO) Brad D. Smith and his wife Alys.
"I have the opportunity to spend a lot of time speaking with my peers in the industry in Silicon Valley as well as across the
world. Most are looking at a hybrid model, but many of them -- if not all of them -- have expanded the percentage of their
workforce that can work full-time remotely," Smith told Yahoo Finance Live about the plan.
Smith earned his bachelor's degree in business administration from Marshall University in West Virginia.
Added Smith, "I think we have seen the pendulum swing all the way to the right when everyone had to come to the office and
then all the way to left when everyone was forced to shelter in place. And somewhere in the middle, we'll all be experimenting
in the next year or so to see where is that sweet-spot. But I do know employees now have gotten a taste for what it's like to
be able to live in a new area with less commute time, less access to outdoor amenities like West Virginia has to offer. I
think that's absolutely going to become part of the consideration set in this war for talent."
That war for talent post-pandemic could be about to heat up within corporate America, and perhaps spur states to follow West
Virginia's lead.
The likes of Facebook, Twitter and Apple are among those big companies poised to have hybrid workforces for years after the
pandemic. That has some employees considering moves to lower cost states and those that offer better overall qualities of
life.
A recent study out of Gartner found that 82% of respondents intend to permit remote working some of the time as employees return to the workplace. Meanwhile, 47% plan to let employees work remotely permanently.
Xargs, along with the find command, can also be used to copy or move a set of files from one directory to another. For example, to move all the text files that are more than 10 minutes old from the current directory to the previous directory, use the following command:
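(The command is not shown in this excerpt; a sketch that matches the description, using the -I option explained next, would be something like:)
$ find . -name "*.txt" -mmin +10 | xargs -I '{}' mv '{}' ../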
The -I command line option is used by the xargs command to define a replace-string which gets replaced with names read from the output of the find command. Here the replace-string is {} , but it could be anything. For example, you can use "file" as a replace-string.
Suppose you want to list the details of all the .txt files present in the current directory. As already explained, it can be easily
done using the following command:
find . -name "*.txt" | xargs ls -l
But there is one problem: the xargs command will execute the ls command even if the find command fails to find any .txt file. The following is an example.
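(The example itself is missing from this excerpt; you can see the problem, and the usual GNU xargs fix, with something like the following:)
$ find . -name "*.txt" | xargs ls -l       # if find matches nothing, ls -l still runs and lists the current directory
$ find . -name "*.txt" | xargs -r ls -l    # -r (--no-run-if-empty) makes xargs skip the command when there is no input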
When you are writing a bash script, there are situations where you need to generate a
sequence of numbers or strings . One common use of such sequence data is for loop iteration.
When you iterate over a range of numbers, the range may be defined in many different ways
(e.g., [0, 1, 2,..., 99, 100], [50, 55, 60,..., 75, 80], [10, 9, 8,..., 1, 0], etc). Loop
iteration may not be just over a range of numbers. You may need to iterate over a sequence of
strings with particular patterns (e.g., incrementing filenames; img001.jpg, img002.jpg,
img003.jpg). For this type of loop control, you need to be able to generate a sequence of
numbers and/or strings flexibly.
While you can use a dedicated tool like seq to generate a range of numbers, it
is really not necessary to add such external dependency in your bash script when bash itself
provides a powerful built-in range function called brace expansion . In this tutorial, let's
find out how to generate a sequence of data in bash using brace expansion and what are useful
brace expansion examples .
Brace Expansion
Bash's built-in range function is realized by so-called brace expansion . In a nutshell,
brace expansion allows you to generate a sequence of strings based on supplied string and
numeric input data. The syntax of brace expansion is the following.
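(The syntax summary itself is not reproduced in this excerpt; the forms discussed below are, roughly:)
{string1,string2,...,stringN}    # a literal, comma-separated list
{START..END}                     # a numeric or single-character range
{START..END..INCREMENT}          # a range with a step size (bash 4.0 or later)
Combined expressions such as {A..Z}{0..9} expand into every possible combination of the parts.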
All these sequence expressions are iterable, meaning you can use them for while/for loops . In the rest
of the tutorial, let's go over each of these expressions to clarify their use
cases.
The first use case of brace expansion is a simple string list, which is a comma-separated
list of string literals within the braces. Here we are not generating a sequence of data, but
simply list a pre-defined sequence of string data.
{<string1>,<string2>,...,<stringN>}
You can use this brace expansion to iterate over the string list as follows.
for fruit in {apple,orange,lemon}; do
echo $fruit
done
apple
orange
lemon
This expression is also useful to invoke a particular command multiple times with different
parameters.
For example, you can create multiple subdirectories in one shot with:
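(The command is omitted here; a typical one-shot example, with a hypothetical /tmp/project path, would be:)
$ mkdir -p /tmp/project/{src,doc,tests,build}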
The most common use case of brace expansion is to define a range of numbers for loop
iteration. For that, you can use the following expressions, where you specify the start/end of
the range, as well as an optional increment value.
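(The expressions themselves are not reproduced in this excerpt; they are the {START..END} and {START..END..INCREMENT} forms from the syntax above, for example:)
$ echo {1..5}
1 2 3 4 5
$ echo {0..20..5}
0 5 10 15 20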
Finally, it's possible to combine multiple brace expansions, in which case the
combined expressions will generate all possible combinations of sequence data produced by each
expression.
For example, we have the following script that prints all possible combinations of
two-character alphabet strings using double-loop iteration.
for char1 in {A..Z}; do
for char2 in {A..Z}; do
echo "${char1}${char2}"
done
done
By combining two brace expansions, the following single loop can produce the same output as
above.
for str in {A..Z}{A..Z}; do
echo $str
done
Conclusion
In this tutorial, I described a bash's built-in mechanism called brace expansion, which
allows you to easily generate a sequence of arbitrary strings in a single command line. Brace
expansion is useful not just for a bash script, but also in your command line environment
(e.g., when you need to run the same command multiple times with different arguments). If you
know any useful brace expansion tips and use cases, feel free to share them in the comments.
In an ideal world, things always work as expected, but you know that's hardly the case. The
same goes in the world of bash scripting. Writing a robust, bug-free bash script is always
challenging even for a seasoned system administrator. Even if you write a perfect bash script,
the script may still go awry due to external factors such as invalid input or network problems.
While you cannot prevent all errors in your bash script, at least you should try to handle
possible error conditions in a more predictable and controlled fashion.
That is easier said than done, especially since error handling in bash is notoriously
difficult. The bash shell does not have any fancy exception swallowing mechanism like try/catch
constructs. Some bash errors may be silently ignored but may have consequences down the line.
The bash shell does not even have a proper debugger.
In this tutorial, I'll introduce basic tips to catch and handle errors in bash . Although
the presented error handling techniques are not as fancy as those available in other
programming languages, hopefully by adopting the practice, you may be able to handle potential
bash errors more gracefully.
As the first line of defense, it is always recommended to check the exit status of a
command, as a non-zero exit status typically indicates some type of error. For example:
if ! some_command; then
echo "some_command returned an error"
fi
Another (more compact) way to trigger error handling based on an exit status is to use an OR
list:
<command1> || <command2>
With this OR statement, <command2> is executed if and only if <command1> returns
a non-zero exit status. So you can replace <command2> with your own error handling
routine. For example:
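(The example command is not shown in this excerpt; a common pattern, sketched with a hypothetical error_exit helper function, looks like this:)
error_exit() {
    echo "Error: $1" >&2    # print the message to stderr
    exit 1
}

mkdir -p /var/data/backup || error_exit "could not create the backup directory"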
Bash provides a built-in variable called $? , which tells you the exit status
of the last executed command. Note that when a bash function is called, $? reads
the exit status of the last command called inside the function. Since some non-zero exit codes
have special
meanings , you can handle them selectively. For example:
# run some command
status=$?
if [ $status -eq 1 ]; then
echo "General error"
elif [ $status -eq 2 ]; then
echo "Misuse of shell builtins"
elif [ $status -eq 126 ]; then
echo "Command invoked cannot execute"
elif [ $status -eq 128 ]; then
echo "Invalid argument"
fi
Bash Error Handling Tip #2: Exit on Errors in Bash
When you encounter an error in a bash script, by default it throws an error message to stderr, but continues its execution in the rest of the script. In fact, you see the same behavior in a terminal window; even if you type a wrong command by accident, it will not kill your terminal. You will just see the "command not found" error, but your terminal/bash session will still remain.
This default shell behavior may not be desirable for some bash scripts. For example, if your script contains a critical code block where no error is allowed, you want your script to exit immediately upon encountering any error inside that code block. To activate this "exit-on-error" behavior in bash, you can use the set command as follows.
set -e
#
# some critical code block where no error is allowed
#
set +e
Once called with the -e option, the set command causes the bash shell to exit immediately if any subsequent command exits with a non-zero status (caused by an error condition). The +e option turns the shell back to the default mode. set -e is equivalent to set -o errexit. Likewise, set +e is a shorthand command for set +o errexit.
However, one special error condition not captured by set -e is when an error
occurs somewhere inside a pipeline of commands. This is because a pipeline returns a
non-zero status only if the last command in the pipeline fails. Any error produced by
previous command(s) in the pipeline is not visible outside the pipeline, and so does not kill a
bash script. For example:
set -e
true | false | true
echo "This will be printed" # "false" inside the pipeline not detected
If you want any failure in pipelines to also exit a bash script, you need to add the -o pipefail option. For example:
set -o pipefail -e
true | false | true # "false" inside the pipeline detected correctly
echo "This will not be printed"
Therefore, to protect a critical code block against any type of command errors or pipeline
errors, use the following pair of set commands.
set -o pipefail -e
#
# some critical code block where no error or pipeline error is allowed
#
set +o pipefail +e
Bash Error Handling Tip #3: Try and Catch Statements in Bash
Although the set command allows you to terminate a bash script upon any error
that you deem critical, this mechanism is often not sufficient in more complex bash scripts
where different types of errors could happen.
To be able to detect and handle different types of errors/exceptions more flexibly, you will
need try/catch statements, which however are missing in bash. At least we can mimic the
behaviors of try/catch as shown in this trycatch.sh script:
function try()
{
    # remember whether errexit (-e) was on, then turn it off inside the try block
    [[ $- = *e* ]]; SAVED_OPT_E=$?
    set +e
}

function throw()
{
    # raise an "exception" by exiting the try sub-shell with the given code
    exit $1
}

function catch()
{
    # capture the code raised by throw() and restore errexit if it was previously on
    export exception_code=$?
    (( SAVED_OPT_E )) || set -e
    return $exception_code
}
Here we define several custom bash functions to mimic the semantics of try and catch statements. The throw() function is supposed to raise a custom (non-zero) exception. We need set +e, so that the non-zero status returned by throw() will not terminate the bash script. Inside catch(), we store the value of the exception raised by throw() in a bash variable exception_code, so that we can handle the exception in a user-defined fashion.
Perhaps an example bash script will make it clear how trycatch.sh works. See
the example below that utilizes trycatch.sh .
# Include trycatch.sh as a library
source ./trycatch.sh
# Define custom exception types
export ERR_BAD=100
export ERR_WORSE=101
export ERR_CRITICAL=102
try
(
    echo "Start of the try block"

    # When a command returns a non-zero, a custom exception is raised.
    run-command || throw $ERR_BAD
    run-command2 || throw $ERR_WORSE
    run-command3 || throw $ERR_CRITICAL

    # This statement is not reached if there is any exception raised
    # inside the try block.
    echo "End of the try block"
)
catch || {
    case $exception_code in
        $ERR_BAD)
            echo "This error is bad"
        ;;
        $ERR_WORSE)
            echo "This error is worse"
        ;;
        $ERR_CRITICAL)
            echo "This error is critical"
        ;;
        *)
            echo "Unknown error: $exception_code"
            throw $exception_code # re-throw an unhandled exception
        ;;
    esac
}
In this example script, we define three types of custom exceptions. We can choose to raise any of these exceptions depending on a given error condition. The OR list <command> || throw <exception> allows us to invoke the throw() function with a chosen <exception> value as a parameter if <command> returns a non-zero exit status. If <command> completes successfully, the throw() function is skipped. Once an exception is raised, it can be handled accordingly inside the subsequent catch block. As you can see, this provides a more flexible way of handling different types of error conditions.
Granted, this is not a full-blown try/catch construct. One limitation of this approach is that the try block is executed in a sub-shell. As you may know, any variables defined in a sub-shell are not visible to its parent shell. Also, you cannot modify the variables that are defined in the parent shell inside the try block, as the parent shell and the sub-shell have separate scopes for variables.
Conclusion
In this bash tutorial, I presented basic error handling tips that may come in handy when you want to write a more robust bash script. As expected, these tips are not as sophisticated as the error handling constructs available in other programming languages. If the bash script you are writing requires more advanced error handling than this, perhaps bash is not the right language for your task. You probably want to turn to other languages such as Python.
Let me conclude the tutorial by mentioning one essential tool that every shell script writer should be familiar with. ShellCheck is a static analysis tool for shell scripts. It can detect and point out syntax errors, bad coding practices and possible semantic issues in a shell script with much clarity. Definitely check it out if you haven't tried it.
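If you have ShellCheck installed, running it is as simple as pointing it at a script (the filename below is just a placeholder):
shellcheck myscript.sh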
All politics about fake news aside (PLEASE!), I've heard a growing number of reports, sighs
and cries about Fake Agile. It's frustrating when people just don't get it, especially when
they think they do. We can point fingers and vilify those who think differently -- or we can
try to understand why this "us vs them" mindset is splintering the Agile
community....
...Now, let us edit these two files at a time using Vim editor. To do so, run:
$ vim file1.txt file2.txt
Vim will display the contents of the files in order. The first file's contents will be shown first, then the second file, and so on.
Switch between files
To move to the next file, type:
:n
To go back to previous file, type:
:N
Here, N is capital (type SHIFT+n).
Start editing the files the way you normally do with Vim. Press 'i' to switch to insert mode and modify the contents as you like. Once done, press ESC to go back to normal mode.
Vim won't allow you to move to the next file if there are any unsaved changes. To save the
changes in the current file, type:
ZZ
Please note that it is double capital letters ZZ (SHIFT+zz).
To abandon the changes and move to the previous file, type:
:N!
To view the files which are being currently edited, type:
:buffers
You will see the list of loaded files at the bottom.
To switch to the next file, type :buffer followed by the buffer number. For example, to
switch to the first file, type:
:buffer 1
Or, just do:
:b 1
Just remember these commands to easily switch between buffers:
:bf # Go to first file.
:bl # Go to last file
:bn # Go to next file.
:bp # Go to previous file.
:b number # Go to n'th file (E.g :b 2)
:bw # Close current file.
Opening additional files for editing
We are currently editing two files namely file1.txt, file2.txt. You might want to open
another file named file3.txt for editing. What will you do? It's easy! Just type :e followed by
the file name like below.
:e file3.txt
Now you can edit file3.txt.
To view how many files are being edited currently, type:
:buffers
Please note that you cannot switch between files opened with :e using :n or :N. To switch to another file, type :buffer followed by the file's buffer number.
Copying contents
of one file into another
You know how to open and edit multiple files at the same time. Sometimes, you might want to
copy the contents of one file into another. It is possible too. Switch to a file of your
choice. For example, let us say you want to copy the contents of file1.txt into file2.txt.
To do so, first switch to file1.txt:
:buffer 1
Place the cursor on the line you want to copy and type yy to yank (copy) it. Then, move to file2.txt:
:buffer 2
Place the cursor on the line after which you want to paste the copied line from file1.txt and type p. For example, to paste the copied line between line2 and line3, put the cursor on line2 and type p.
Sample output:
line1
line2
ostechnix
line3
line4
line5
To save the changes made in the current file, type:
ZZ
Again, please note that this is double capital ZZ (SHIFT+zz).
To save the changes in all files and exit the vim editor, type:
:wqa
Similarly, you can copy any line from any file to other files.
Copying entire file
contents into another
We know how to copy a single line. What about the entire file contents? That's also
possible. Let us say, you want to copy the entire contents of file1.txt into file2.txt.
To do so, open the file2.txt first:
$ vim file2.txt
If the files are already loaded, you can switch to file2.txt by typing:
:buffer 2
Move the cursor to the place where you wanted to copy the contents of file1.txt. I want to
copy the contents of file1.txt after line5 in file2.txt, so I moved the cursor to line 5. Then,
type the following command and hit ENTER key:
:r file1.txt
Here, r means read .
Now you will see the contents of file1.txt is pasted after line5 in file2.txt.
line1
line2
line3
line4
line5
ostechnix
open source
technology
linux
unix
To save the changes in the current file, type:
ZZ
To save all changes in all loaded files and exit the vim editor, type:
:wqa
Method 2
Another method to open multiple files at once is to use either the -o or -O flag.
To open multiple files in horizontal windows, run:
$ vim -o file1.txt file2.txt
To switch between windows, press CTRL-w w (i.e Press CTRL+w and again press w ). Or, use the
following shortcuts to move between windows.
CTRL-w k - top window
CTRL-w j - bottom window
To open multiple files in vertical windows, run:
$ vim -O file1.txt file2.txt file3.txt
To switch between windows, press CTRL-w w (i.e Press CTRL+w and again press w ). Or, use the
following shortcuts to move between windows.
CTRL-w h - left window
CTRL-w l - right window
Everything else is the same as described in method 1.
For example, to list currently loaded files, run:
:buffers
To switch between files:
:buffer 1
To open an additional file, type:
:e file3.txt
To copy entire contents of a file into another:
:r file1.txt
The only difference in method 2 is that once you save the changes in the current file using ZZ, the window for that file closes itself, and you need to close the files one by one. With method 1, typing :wqa saves the changes in all files and closes them all at once.
In this case, we are commenting out the lines from 1 to 3. After running the command, lines 1 to 3 are commented out.
To uncomment those lines, run:
:1,3s/^#//
Once you're done, unset the line numbers.
:set nonumber
Let us go ahead and see the third method.
Method 3:
This one is the same as above but slightly different.
Open the file in vim editor.
$ vim ostechnix.txt
Set line numbers:
:set number
Then, type the following command to comment out the lines.
:1,4s/^/# /
The above command will comment out lines from 1 to 4.
Finally, unset the line numbers by typing the following.
:set nonumber
Method 4:
This method is suggested by one of our readers, Mr. Anand Nande, in the comment section below.
Open file in vim editor:
$ vim ostechnix.txt
Press Ctrl+V to enter into 'Visual block' mode and press DOWN arrow to select all the lines
in your file.
Then, press Shift+i to enter INSERT mode (this will place your cursor on the first line).
Press Shift+3 which will insert '#' before your first line.
Finally, press ESC key, and you can now see all lines are commented out.
Method 5:
This method is suggested by one of our Twitter followers and friend, Mr. Tim Chase.
We can even target lines to comment out by regex. Open the file in vim editor.
$ vim ostechnix.txt
And type the following:
:g/Linux/s/^/# /
The above command will comment out all lines that contain the word "Linux".
And, that's all for now. I hope this helps. If you know any easier method than the ones given here, please let me know in the comment section below. I will check and add them to the guide. Also, have a look at the comment section below. One of our visitors has shared a good guide about Vim usage.
NUNY3 November 23, 2017 - 8:46 pm
If you want to be productive in Vim you need to talk to Vim with the *language* Vim is using. Every solution that gets out of "normal mode" is most probably not the most effective.
METHOD 1
Using "normal mode". For example, comment the first three lines with: I#<Esc>j.j.
This is strange, isn't it, but:
I –> capital I jumps to the beginning of the row and enters insert mode
# –> type the actual comment character
<Esc> –> exit insert mode and get back to normal mode
j –> move down a line
. –> repeat the last command. The last command was: I#<Esc>
j –> move down a line
. –> repeat the last command. The last command was: I#<Esc>
You get it: after you execute the command once, you just repeat the j. combination for the lines you would like to comment out.
METHOD 2
There is a "command-line mode" command to execute a "normal mode" command.
Example: :%norm I#
Explanation:
% –> whole file (you can also use a range if you like: 1,3 to do it only for the first three lines).
norm –> (short for normal)
I –> the normal-mode command I, that is, jump to the first character in the line and enter insert mode
# –> insert the actual character
You get it: for the range you select, the normal mode command is executed for each of the lines.
METHOD 3
This is the method I love the most, because it uses Vim in the "I am talking to Vim" with Vim
language principle.
This is by using extension (plug-in, add-in): https://github.com/tomtom/tcomment_vim
extension.
How to use it? In NORMAL MODE of course to be efficient. Use: gc+action.
Examples:
gcap –> comment a paragraph
gcj –> comment the current line and the line below
gc3j –> comment the current line and 3 lines below
gcgg –> comment the current line and all the lines up to and including the first line in the file
gcG –> comment the current line and all the lines down to and including the last line in the file
gcc –> shortcut for commenting the current line
You name it, it has all sorts of combinations. Remember, you have to talk with Vim to use it properly and efficiently.
Yes, sure, it also works with "visual mode": press V, select the lines you would like to mark and execute: gc
You see, if I want to impress a friend I use the gc+action combination, because I always get: What? How did you do it? My answer is: it is Vim, you need to talk with the text editor, not drive it with the mouse and repeated actions.
NOTE: Please stop telling people to use the DOWN arrow key. Start using the h, j, k and l keys to move around. These keys are on the home row of a typist. The DOWN, UP, LEFT and RIGHT keys are a bad habit used by beginners. It is very inefficient: you have to move your hand from the home row to the arrow keys.
VERY IMPORTANT: Do you want a one-million-dollar tip for using Vim? Start using Vim like it was designed to be used: in normal mode. Use its language: verbs, nouns, adverbs and adjectives. Interested in what I am talking about? You should be, if you are serious about using Vim. Read this one-million-dollar answer on the forum:
https://stackoverflow.com/questions/1218390/what-is-your-most-productive-shortcut-with-vim/1220118#1220118
MDEBUSK November 26, 2019 - 7:07 am
I've tried the "boxes" utility with vim and it can be a lot of fun.
The default configuration should just work fine. All you need to do is define the backup directories and backup intervals.
First, let us set up the root backup directory, i.e., we need to choose the directory where we want to store the filesystem backups. In our case, I will store the backups in the /rsnapbackup/ directory.
# All snapshots will be stored under this root directory.
#
snapshot_root /rsnapbackup/
Again, you should use the TAB key between the snapshot_root element and your backup directory.
Scroll down a bit, and make sure the following lines are uncommented:
[...]
#################################
# EXTERNAL PROGRAM DEPENDENCIES #
#################################
# LINUX USERS: Be sure to uncomment "cmd_cp". This gives you extra features.
# EVERYONE ELSE: Leave "cmd_cp" commented out for compatibility.
#
# See the README file or the man page for more details.
#
cmd_cp /usr/bin/cp
# uncomment this to use the rm program instead of the built-in perl routine.
#
cmd_rm /usr/bin/rm
# rsync must be enabled for anything to work. This is the only command that
# must be enabled.
#
cmd_rsync /usr/bin/rsync
# Uncomment this to enable remote ssh backups over rsync.
#
cmd_ssh /usr/bin/ssh
# Comment this out to disable syslog support.
#
cmd_logger /usr/bin/logger
# Uncomment this to specify the path to "du" for disk usage checks.
# If you have an older version of "du", you may also want to check the
# "du_args" parameter below.
#
cmd_du /usr/bin/du
[...]
Next, we need to define the backup intervals:
#########################################
# BACKUP LEVELS / INTERVALS #
# Must be unique and in ascending order #
# e.g. alpha, beta, gamma, etc. #
#########################################
retain alpha 6
retain beta 7
retain gamma 4
#retain delta 3
Here, retain alpha 6 means that every time rsnapshot alpha is run, it will make a new snapshot, rotate the old ones, and retain the most recent six (alpha.0 - alpha.5). You can define your own intervals. For more details, refer to the rsnapshot man pages.
Here, I am going to back up the contents of the /root/ostechnix/ directory and save them in the /rsnapbackup/server/ directory. Please note that I didn't specify the full path (/rsnapbackup/server/) in the above configuration, because we already defined the root backup directory earlier.
Likewise, define your remote client system's backup location.
Here, I am going to back up the contents of my remote client system's /home/sk/test/ directory and save them in the /rsnapbackup/client/ directory on my backup server. Again, please note that I didn't specify the full path (/rsnapbackup/client/) in the above configuration, because we already defined the root backup directory before.
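The backup point lines referenced above are not reproduced in this excerpt; with the paths described, they would look roughly like this (the client address is a hypothetical placeholder, and remember that rsnapshot requires the fields to be separated with TABs, not spaces):
backup    /root/ostechnix/    server/
backup    sk@<client-ip>:/home/sk/test/    client/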
Save and close the /etc/rsnapshot.conf file.
Once you have made all your changes, run the following command to verify that the config
file is syntactically valid.
rsnapshot configtest
If all is well, you will see the following output.
Syntax OK
Testing backups
Run the following command to test backups.
rsnapshot alpha
This takes a few minutes depending upon the size of the backups.
Verifying backups
Check whether the backups are really stored in the root backup directory on the backup server.
ls /rsnapbackup/
You will see the following output:
alpha.0
Check the alpha.0 directory:
ls /rsnapbackup/alpha.0/
You will see there are two directories automatically created, one for local backup (server),
and another one for remote systems (client).
client/ server/
Check the client system back ups:
ls /rsnapbackup/alpha.0/client
Check the server system(local system) back ups:
ls /rsnapbackup/alpha.0/server
Automate backups
You don't want to (and can't) run the rsnapshot command by hand every time a backup is due. Define a cron job and automate the backup job.
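The crontab entries being described below are not reproduced in this excerpt; based on the description, they would look something like this (added to root's crontab with crontab -e):
# six alpha snapshots per day, at 0, 4, 8, 12, 16 and 20 hours
0 */4 * * *    /usr/bin/rsnapshot alpha
# beta snapshot every night at 11:50pm
50 23 * * *    /usr/bin/rsnapshot beta
# delta snapshot at 10pm on the first day of each month
0 22 1 * *     /usr/bin/rsnapshot delta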
The first line indicates that there will be six alpha snapshots taken each day (at
0,4,8,12,16, and 20 hours), beta snapshots taken every night at 11:50pm, and delta snapshots
will be taken at 10pm on the first day of each month. You can adjust timing as per your wish.
Save and close the file.
Done! Rsnapshot will automatically take backups at the times defined in the cron job. For more details, refer to the man pages.
man rsnapshot
That's all for now. Hope this helps. I will be back soon with another interesting guide. If you find this guide useful, please share it on your social and professional networks and support OSTechNix.
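The rsync command being described below is not reproduced in this excerpt; a typical invocation matching the description would be along these lines (adjust the exclude list and destination to your own layout):
sudo rsync -aAXv / --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} /mnt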
This command will back up the entire root (/) directory, excluding the /dev, /proc, /sys, /tmp, /run, /mnt, /media, and /lost+found directories, and save the data in the /mnt folder.
CYA, which stands for Cover Your Assets, is a free, open source system snapshot and restore utility for any Unix-like operating system that uses the BASH shell. Cya is portable and supports many popular filesystems such as EXT2/3/4, XFS, UFS, GPFS, ReiserFS, JFS, BtrFS, and ZFS. Please note that Cya will not back up the actual user data. It only backs up and restores the operating system itself; Cya is really a system restore utility. By default, it will back up all key directories like /bin/, /lib/, /usr/, /var/ and several others. You can, however, define your own directories and file paths to include in the backup, so Cya will pick those up as well. Also, it is possible to define some directories/files to skip from the backup. For example, you can skip /var/log/ if you don't need the log files. Cya actually uses the rsync backup method under the hood. However, Cya is a little bit easier than rsync when creating rolling backups.
When restoring your operating system, Cya will roll back the OS using the backup profile which you created earlier. You can either restore the entire system or any specific directories only. You can also easily access the backup files even without a complete rollback, using your terminal or file manager. Another notable feature is that we can generate a custom recovery script to automate the mounting of your system partition(s) when you restore off a live CD, USB, or network image. In a nutshell, CYA can help you restore your system to a previous state when you end up with a broken system caused by a software update, configuration changes, or intrusions/hacks.
... ... ...
Conclusion
Unlike Systemback and other system restore utilities, Cya is not a distribution-specific restore utility. It supports many Linux operating systems that use BASH. It is one of the must-have applications in your arsenal. Install it right away and create snapshots. You won't regret it when you accidentally crash your Linux system.
The idea was that sharing this would inspire others to improve their bashrc savviness. Take
a look at what our Sudoers group shared and, please, borrow anything you like to make your
sysadmin life easier.
# Require confirmation before overwriting target files. This setting keeps me from deleting things I didn't expect to, etc
alias cp='cp -i'
alias mv='mv -i'
alias rm='rm -i'
# Add color, formatting, etc to ls without re-typing a bunch of options every time
alias ll='ls -alhF'
alias ls="ls --color"
# So I don't need to remember the options to tar every time
alias untar='tar xzvf'
alias tarup='tar czvf'
# Changing the default editor, I'm sure a bunch of people have this so they don't get dropped into vi instead of vim, etc. A lot of distributions have system default overrides for these, but I don't like relying on that being around
alias vim='nvim'
alias vi='nvim'
# Easy copy the content of a file without using cat / selecting it etc. It requires xclip to be installed
# Example: _cp /etc/dnsmasq.conf
_cp()
{
    local file="$1"
    local st=1

    if [[ -f $file ]]; then
        cat "$file" | xclip -selection clipboard
        st=$?
    else
        printf '%s\n' "Make sure you are copying the content of a file" >&2
    fi
    return $st
}
# This is the function to paste the content. The content is now in your buffer.
# Example: _paste
_paste()
{
    xclip -selection clipboard -o
}
# Generate a random password without installing any external tooling
genpw()
{
alphanum=( {a..z} {A..Z} {0..9} ); for((i=0;i<=${#alphanum[@]};i++)); do printf '%s' "${alphanum[@]:$((RANDOM%${#alphanum[@]})):1}"; done; echo
}
# See what command you are using the most (this parses the history command)
cm() {
history | awk ' { a[$4]++ } END { for ( i in a ) print a[i], i | "sort -rn | head -n10"}' | awk '$1 > max{ max=$1} { bar=""; i=s=10*$1/max;while(i-->0)bar=bar"#"; printf "%25s %15d %s %s", $2, $1,bar, "\n"; }'
}
alias vim='nvim'
alias l='ls -CF --color=always'
alias cd='cd -P' # follow symlinks
alias gits='git status'
alias gitu='git remote update'
alias gitum='git reset --hard upstream/master'
I don't know who I need to thank for this, some awesome woman on Twitter whose name I no
longer remember, but it's changed the organization of my bash aliases and commands
completely.
I have Ansible drop individual <something>.bashrc files into ~/.bashrc.d/
with any alias or command or shortcut I want, related to any particular technology or Ansible
role, and can manage them all separately per host. It's been the best single trick I've learned
for .bashrc files ever.
Git stuff gets a ~/.bashrc.d/git.bashrc , Kubernetes goes in
~/.bashrc.d/kube.bashrc .
if [ -d "${HOME}/.bashrc.d" ]
then
    for file in ~/.bashrc.d/*.bashrc
    do
        source "${file}"
    done
fi
These aren't bashrc aliases, but I use them all the time. I wrote a little script named
clean for getting rid of excess lines in files. For example, here's
nsswitch.conf with lots of comments and blank lines:
[pgervase@pgervase etc]$ head authselect/nsswitch.conf
# Generated by authselect on Sun Dec 6 22:12:26 2020
# Do not modify this file manually.
# If you want to make changes to nsswitch.conf please modify
# /etc/authselect/user-nsswitch.conf and run 'authselect apply-changes'.
#
# Note that your changes may not be applied as they may be
# overwritten by selected profile. Maps set in the authselect
# profile always take precedence and overwrites the same maps
# set in the user file. Only maps that are not set by the profile
[pgervase@pgervase etc]$ wc -l authselect/nsswitch.conf
80 authselect/nsswitch.conf
[pgervase@pgervase etc]$ clean authselect/nsswitch.conf
passwd: sss files systemd
group: sss files systemd
netgroup: sss files
automount: sss files
services: sss files
shadow: files sss
hosts: files dns myhostname
bootparams: files
ethers: files
netmasks: files
networks: files
protocols: files
rpc: files
publickey: files
aliases: files
[pgervase@pgervase etc]$ cat `which clean`
#! /bin/bash
#
/bin/cat $1 | /bin/sed 's/^[ \t]*//' | /bin/grep -v -e "^#" -e "^;" -e "^[[:space:]]*$" -e "^[ \t]+"
In the last week of April, Zoom reported that the number of daily users on its platform grew to more than 300 million, up from 10 million at the end of 2019.
Wayne Kurtzman, a research director at International Data Corp., said the crisis has accelerated the adoption of videoconferencing
and other collaboration tools by roughly five years.
It has also driven innovation. New features expected in the year ahead include the use of artificial intelligence to enable
real-time transcription and translation, informing people when they were mentioned in a meeting and why, and creating a short
"greatest hits" version of meetings they may have missed, Mr. Kurtzman said.
Many businesses also ramped up their use of software bots, among other forms of automation, to handle routine workplace tasks like data entry and invoice processing.
The attention focused on keeping operations running saw many companies pull back on some long-running IT modernization efforts, or plans to build out ambitious data analytics and business intelligence systems.
Bob Parker, a senior vice president for industry research at IDC, said many companies were simply channeling funds to more urgent
needs. But another key obstacle was an inability to access on-site resources to continue pre-Covid initiatives, he said, "especially
for projects requiring significant process re-engineering," such as enterprise resource planning implementations and upgrades.
In this case, you're running the loop with a true condition, which means it will run forever or until you hit CTRL-C. Therefore, you need to keep an eye on it (otherwise, it will keep consuming the system's resources).
Note : If you use a loop like this, you need to include a command like sleep to
give the system some time to breathe between executions. Running anything non-stop could become
a performance issue, especially if the commands inside the loop involve I/O
operations.
2. Waiting for a condition to become true
There are variations of this scenario. For example, you know that at some point, the process
will create a directory, and you are just waiting for that moment to perform other
validations.
You can have a while loop to keep checking for that directory's existence and
only write a message while the directory does not exist.
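A minimal sketch of that idea, where /tmp/mydir is a hypothetical directory some other process is expected to create:
while [ ! -d /tmp/mydir ]; do
    echo "Waiting for /tmp/mydir to be created..."
    sleep 10
done
echo "/tmp/mydir is now available, continuing"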
Another useful application of a while loop is to combine it with the
read command to have access to columns (or fields) quickly from a text file and
perform some actions on them.
In the following example, you are simply picking the columns from a text file with a
predictable format and printing the values that you want to use to populate an
/etc/hosts file.
Here the assumption is that the file has columns delimited by spaces or tabs and that there are no spaces in the content of the columns. Spaces inside a column would shift the content of the fields and not give you what you need.
Notice that you're just doing a simple operation to extract and manipulate information and are not concerned about the command's reusability. I would classify this as one of those "quick and dirty tricks."
Of course, if this were something that you would repeatedly do, you should run it from a script, use proper names for the variables, and all those good practices (including turning the filename into an argument and defining where to send the output, but today, the topic is while loops).
#!/bin/bash
cat servers.txt | grep -v CPU | while read servername cpu ram ip
do
    echo $ip $servername
done
7zip is a wildly popular Windows program that is used to create archives. By default it uses the 7z format, which it claims is 30-70% better than the normal zip format. It also claims to compress to the regular zip format 2-10% more effectively than other zip-compatible programs. It supports a wide variety of archive formats including (but not limited to) zip, gzip, bzip2, tar, and rar. Linux has had p7zip for a long time. However, this is the first time the 7-Zip developers have provided native Linux support.
If navigating a network through IP addresses and hostnames is confusing, or if you don't like the idea of opening a folder for sharing and forgetting that it's open for perusal, then you might prefer Snapdrop. This is an open source project that you can run yourself or use the demonstration instance on the internet to connect computers through WebRTC. WebRTC enables peer-to-peer connections through a web browser, meaning that two users on the same network can find each other by navigating to Snapdrop and then communicate with each other directly, without going through an external server.
Once two or more clients have contacted a Snapdrop service, users can trade files and chat
messages back and forth, right over the local network. The transfer is fast, and your data
stays local.
When you call date with the +%s option, it shows the current system clock in seconds since 1970-01-01 00:00:00 UTC. Thus, with this option, you can easily calculate the time difference in seconds between two clock measurements.
start_time=$(date +%s)
# perform a task
end_time=$(date +%s)
# elapsed time with second resolution
elapsed=$(( end_time - start_time ))
Another (preferred) way to measure elapsed time in seconds in bash is to use a built-in bash
variable called SECONDS . When you access SECONDS variable in a bash
shell, it returns the number of seconds that have passed so far since the current shell was
launched. Since this method does not require running the external date command in
a subshell, it is a more elegant solution.
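A minimal sketch of the SECONDS approach (the sleep simply stands in for the real task):
SECONDS=0
# perform a task
sleep 3
elapsed=$SECONDS
echo "Elapsed time: $elapsed seconds"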
This will display elapsed time in terms of the number of seconds. If you want a more
human-readable format, you can convert $elapsed output as follows.
eval "echo Elapsed time: $(date -ud "@$elapsed" +'$((%s/3600/24)) days %H hr %M min %S sec')"
In Ansible architecture, you have a controller node and managed nodes. Ansible is installed
on only the controller node. It's an agentless tool and doesn't need to be installed on the
managed nodes. Controller and managed nodes are connected using the SSH protocol. All tasks are
written into a "playbook" using the YAML language. Each playbook can contain multiple
plays, which contain tasks , and tasks contain modules . Modules are
reusable standalone scripts that manage some aspect of a system's behavior. Ansible modules are
also known as task plugins or library plugins.
Playbooks for complex tasks can become lengthy and therefore difficult to read and understand. The solution to this problem is Ansible roles. Using roles, you can break long playbooks into multiple files, making each playbook simple to read and understand. Roles are a collection of templates, files, variables, modules, and tasks. The primary purpose behind roles is to reuse Ansible code. DevOps engineers and sysadmins should always try to reuse their code. An Ansible role can contain multiple playbooks, and you can easily reuse code written by anyone else if the role suits a given case. For example, you could write a playbook for Apache hosting and then reuse this code by changing the content of index.html to alter options for some other application or service.
The following is an overview of the Ansible role structure. It consists of many
subdirectories, such as:
Initially, all of these files are created empty by the ansible-galaxy command. So, depending on the task, you can use these directories. For example, the vars directory stores variables. In the tasks directory, you have main.yml, which is the main playbook. The templates directory is for storing Jinja templates. The handlers directory is for storing handlers.
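For illustration, scaffolding a hypothetical role named apache with ansible-galaxy and listing its skeleton might look roughly like this:
$ ansible-galaxy init apache
- Role apache was created successfully
$ ls apache
defaults  files  handlers  meta  README.md  tasks  templates  tests  vars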
Advantages of Ansible roles:
Allow for content reusability
Make large projects manageable
Ansible roles are structured directories containing sub-directories.
But did you know that Red Hat Enterprise Linux also provides some Ansible System Roles to manage operating system
tasks?
System roles
The rhel-system-roles package is available in the Extras (EPEL) channel. The
rhel-system-roles package is used to configure RHEL hosts. There are seven default
rhel-system-roles available:
rhel-system-roles.kdump - This role configures the kdump crash recovery service. Kdump is
a feature of the Linux kernel and is useful when analyzing the cause of a kernel crash.
rhel-system-roles.network - This role is dedicated to network interfaces. This helps to configure network interfaces in Linux systems.
rhel-system-roles.selinux - This role manages SELinux. This helps to configure the
SELinux mode, files, port-context, etc.
rhel-system-roles.timesync - This role is used to configure NTP or PTP on your Linux
system.
rhel-system-roles.postfix - This role is dedicated to managing the Postfix mail transfer
agent.
rhel-system-roles.firewall - As the name suggests, this role is all about managing the
host system's firewall configuration.
rhel-system-roles.tuned - Tuned is a system tuning service in Linux that monitors connected devices. So this role configures the tuned service for system performance.
The rhel-system-roles package is derived from the open source Linux system-roles, which are available on Ansible Galaxy. The rhel-system-roles package is supported by Red Hat, so you can think of rhel-system-roles as downstream of the Linux system-roles. To install rhel-system-roles on your machine, use:
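The installation command is not shown in this excerpt; on a RHEL machine with the appropriate repository enabled it is typically something like:
sudo yum install rhel-system-roles
The roles are then placed under /usr/share/ansible/roles/.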
This is the default path, so whenever you use playbooks to reference these roles, you don't
need to explicitly include the absolute path. You can also refer to the documentation for using
Ansible roles. The path for the documentation is
/usr/share/doc/rhel-system-roles
The documentation directory for each role has detailed information about that role. For example, each role's README.md file contains an example of using that role. The documentation is self-explanatory.
The following is an example of a role.
Example
If you want to change the SELinux mode of the localhost machine or any host machine, then
use the system roles. For this task, use rhel-system-roles.selinux
For this task the ansible-playbook looks like this:
---
- name: a playbook for SELinux mode
  hosts: localhost
  roles:
    - rhel-system-roles.selinux
  vars:
    - selinux_state: disabled
After running the playbook, you can verify whether the SELinux mode changed or not.
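To verify, you might run the playbook and then check the reported mode; the playbook filename below is a hypothetical placeholder, and note that a full switch to disabled generally only takes effect after a reboot:
$ ansible-playbook selinux-playbook.yml
$ getenforce
$ sestatus | grep -i mode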
... Edge
computing is a model of infrastructure design that places many "compute nodes" (a fancy
word for a server ) geographically closer to people who use them most frequently. It can
be part of the open hybrid-cloud model, in which a centralized data center exists to do all the
heavy lifting but is bolstered by smaller regional servers to perform high frequency -- but
usually less demanding -- tasks...
Historically, a computer was a room-sized device hidden away in the bowels of a university
or corporate head office. Client terminals in labs would connect to the computer and make
requests for processing. It was a centralized system with access points scattered around the
premises. As modern networked computing has evolved, this model has been mirrored unexpectedly.
There are centralized data centers to provide serious processing power, with client computers
scattered around so that users can connect. However, the centralized model makes less and less
sense as demands for processing power and speed are ramping up, so the data centers are being
augmented with distributed servers placed on the "edge" of the network, closer to the users who
need them.
The "edge" of a network is partly an imaginary place because network boundaries don't
exactly map to physical space. However, servers can be strategically placed within the
topography of a network to reduce the latency of connecting with them and serve as a buffer to
help mitigate overloading a data center.
... ... ...
While it's not exclusive to Linux, container technology is an
important part of cloud and edge computing. Getting to know Linux and Linux containers
helps you learn to install, modify, and maintain "serverless" applications. As processing
demands increase, it's more important to understand containers, Kubernetes
and KubeEdge , pods,
and other tools that are key to load balancing and reliability.
... ... ...
The cloud is largely a Linux platform. While there are great layers of abstraction, such as
Kubernetes and OpenShift, when you need to understand the underlying technology, you benefit
from a healthy dose of Linux knowledge. The best way to learn it is to use it, and Linux is remarkably easy to
try . Get the edge on Linux so you can get Linux on the edge.
Rather than trying to limit yourself to just one session or remembering what is running on
which screen, you can set a name for the session by using the -S argument:
[root@rhel7dev ~]# screen -S "db upgrade"
[detached from 25778.db upgrade]
[root@rhel7dev ~]# screen -ls
There are screens on:
25778.db upgrade (Detached)
25706.pts-0.rhel7dev (Detached)
25693.pts-0.rhel7dev (Detached)
25665.pts-0.rhel7dev (Detached)
4 Sockets in /var/run/screen/S-root.
[root@rhel7dev ~]# screen -x "db upgrade"
[detached from 25778.db upgrade]
[root@rhel7dev ~]#
To exit a screen session, you can type exit; to detach from it and leave it running, hit Ctrl+A and then D.
Now that you know how to start, stop, and label screen sessions let's get a
little more in-depth. To split your screen session in half vertically hit Ctrl+A and then the |
key ( Shift+Backslash ). At this point, you'll have your screen session with the prompt on the
left:
To switch to your screen on the right, hit Ctrl+A and then the Tab key. Your cursor is now
in the right session, but there's no prompt. To get a prompt hit Ctrl+A and then C . I can do
this multiple times to get multiple vertical splits to the screen:
You can now toggle back and forth between the two screen panes by using Ctrl+A+Tab .
What happens when you cat out a file that's larger than your console can
display and so some content scrolls past? To scroll back in the buffer, hit Ctrl+A and then Esc
. You'll now be able to use the cursor keys to move around the screen and go back in the
buffer.
There are other options for screen , so to see them, hit Ctrl , then A , then
the question mark :
Further reading can be found in the man page for screen . This article is a
quick introduction to using the screen command so that a disconnected remote
session does not end up killing a process accidentally. Another program that is similar to
screen is tmux and you can read about tmux in this article .
The /var directory has filled up and you are left with no free disk space available. This is a typical scenario which can be easily fixed by mounting your /var directory on a different partition. Let's get started by attaching new storage, partitioning it and creating the desired file system. The exact steps may vary and are not part of this config article. Once ready, obtain the partition UUID of your new var partition, e.g. /dev/sdc1:
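The commands shown at this point in the original article are missing from this excerpt; a minimal sketch of the remaining steps, assuming the new partition is /dev/sdc1 and is formatted as ext4:
# get the UUID of the new partition
blkid /dev/sdc1
# mount it temporarily and copy the current /var content over
mkdir /mnt/newvar
mount /dev/sdc1 /mnt/newvar
rsync -aAXv /var/ /mnt/newvar/
# add an entry to /etc/fstab using the UUID reported by blkid, for example:
# UUID=<uuid-of-sdc1>  /var  ext4  defaults  0  2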
Reboot your system and you are done. Confirm that everything is working correctly, and optionally remove the old var directory's contents by booting into some live Linux system.
I have two drives on my computer that have the following configuration:
Drive 1: 160GB, /home
Drive 2: 40GB, /boot and /
Unfortunately, drive 2 seems to be dying, because trying to write to it is giving me
errors, and checking out the SMART settings shows a sad state of affairs.
I have plenty of space on Drive 1, so what I'd like to do is move the / and /boot
partitions to it, remove Drive 2 from the system, replace Drive 2 with a new drive, then
reverse the process.
I imagine I need to do some updating to grub, and I need to move some things around, but
I'm pretty baffled how to exactly go about this. Since this is my main computer, I want to be
careful not to mess things up so I can't boot. – asked by mlissner, Sep 1 '10
You'll need to boot from a live cd. Add partitions for them to disk 1, copy all the
contents over, and then use sudo blkid to get the UUID of each partition. On
disk 1's new /, edit the /etc/fstab to use the new UUIDs you just looked up.
Updating GRUB depends on whether it's GRUB1 or GRUB2. If GRUB1, you need to edit
/boot/grub/device.map
If GRUB2, I think you need to mount your partitions as they would be in a real situation.
For example:
sudo mkdir /media/root
sudo mount /dev/sda1 /media/root
sudo mount /dev/sda2 /media/root/boot
sudo mount /dev/sda3 /media/root/home
(Filling in whatever the actual partitions are that you copied things to, of course)
Then bind mount /proc and /dev in the /media/root:
sudo mount -B /proc /media/root/proc
sudo mount -B /dev /media/root/dev
sudo mount -B /sys /media/root/sys
Now chroot into the drive so you can force GRUB to update itself according to the new
layout:
sudo chroot /media/root
sudo update-grub
The second command will make one complaint (I forget what it is though...), but that's ok
to ignore.
Test it by removing the bad drive. If it doesn't work, the bad drive should still be able
to boot the system, but I believe these are all the necessary steps. – answered by maco, Sep 1 '10
FYI to anyone viewing this these days, this does not apply to EFI setups. You need to mount
/media/root/boot/efi , among other things. – wjandrea Sep 10 '16 at 7:54
sBlatt:
If you replace the drive right away you can use dd (tried it on my server
some months ago, and it worked like a charm).
You'll need a boot-CD for this as well.
Start boot-CD
Only mount Drive 1
Run dd if=/dev/sdb1 of=/media/drive1/backuproot.img - sdb1 being your root
( / ) partition. This will save the whole partition in a file.
same for /boot
Power off, replace disk, power on
Run dd if=/media/drive1/backuproot.img of=/dev/sdb1 - write it back.
same for /boot
The above will create 2 partitions with the exact same size as they had before. You might need to adjust grub (check maco's post above).
If you want to resize your partitions (as i did):
Create 2 Partitions on the new drive (for / and /boot ; size
whatever you want)
Mount the backup-image: mount /media/drive1/backuproot.img
/media/backuproot/
Mount the empty / partition: mount /dev/sdb1
/media/sdb1/
Copy its contents to the new partition (I'm unsure about this command; it's really important to preserve ownership, and a plain cp -R won't do it!): cp -R --preserve=all /media/backuproot/* /media/sdb1
It turns out that the new "40GB" drive I'm trying to install is smaller than my current
"40GB" drive. I have both of them connected, and I'm booted into a liveCD. Is there an easy
way to just dd from the old one to the new one, and call it a done deal? – mlissner Sep 4 '10 at 3:02
mlissner:
My final solution to this was a combination of a number of techniques:
I connected the dying drive and its replacement to the computer simultaneously.
The new drive was smaller than the old, so I shrank the partitions on the old using
GParted.
After doing that, I copied the partitions on the old drive, and pasted them on the new
(also using GParted).
Next, I added the boot flag to the correct partition on the new drive, so it was
effectively a mirror of the old drive.
This all worked well, but I needed to update grub2 per the instructions here .
Finally, this solved it for me. I had a Virtualbox disk (vdi file) that I needed to move to a
smaller disk. However Virtualbox does not support shrinking a vdi file, so I had to create a
new virtual disk and copy over the linux installation onto this new disk. I've spent two days
trying to get it to boot. – j.karlsson Dec 19 '19 at 9:48
This document (7018639) is provided subject to the disclaimer at the end of
this document.
Environment
SLE 11
SLE 12
Situation
The root filesystem needs to be moved to a new disk or partition.
Resolution
1. Use the media to go into rescue mode on the system. This is the safest way
to copy data from the root disk so that it's not changing while we are copying from it. Make
sure the new disk is available.
2. Copy data at the block(a) or filesystem(b) level, depending on preference, from the old disk to the new disk.
NOTE: If the dd command is not being used to copy data from an entire disk to an entire disk, the partition(s) will need to be created prior to this step on the new disk so that the data can be copied from partition to partition.
a. Here is a dd command for copying at the block level (the disks do not need to be
mounted):
# dd if=/dev/<old root disk> of=/dev/<new root disk> bs=64k conv=noerror,sync
The dd command is not verbose and depending on the size of the disk could take some time to
complete. While it is running the command will look like it is just hanging. If needed, to
verify it is still running, use the ps command on another terminal window to find the dd
command's process ID and use strace to follow that PID and make sure there is activity.
# ps aux | grep dd
# strace -p<process id>
After confirming activity, hit CTRL + c to end the strace command. Once the dd command is
complete the terminal prompt will return allowing for new commands to be run.
b. As an alternative to dd, mount the disks and then use an rsync command for copying at the filesystem level:
# mount /dev/<old root disk> /mnt
# mkdir /mnt2
(If the new disk's root partition doesn't have a filesystem yet, create it now.)
# mount /dev/<new root disk> /mnt2
# rsync -zahP /mnt/ /mnt2/
This command is much more verbose than dd and there shouldn't be any issues telling that it
is working. This does generally take longer than the dd command.
3. Setting up the partition boot label with either fdisk(a) or parted(b)
NOTE: This step can be skipped if the boot partition is separate from the root partition and has not changed. Also, if dd was used on an entire disk to an entire disk in section "a" of step 2 you can still skip this step, since the partition table will have been copied to the new disk (if the partitions are not showing as available yet on the new disk, run "partprobe" or enter fdisk and save no changes). This exception does not include using dd on only a partition.
a. Using fdisk to label the new root partition (which contains boot) as bootable.
# fdisk /dev/<new root disk>
From the fdisk shell type 'p' to list and verify the root partition is there.
Command (m for help): p
If the "Boot" column of the root partition does not have an "*" symbol then it needs to be activated. Type 'a' to toggle the bootable partition flag:
Command (m for help): a
Partition number (1-4): <number from output p for root partition>
After that use the 'p' command to verify the bootable flag is now enabled. Finally, save changes:
Command (m for help): w
b. Alternatively to fdisk, use parted to label the new root partition (which contains boot)
as bootable.
# parted /dev/sda
From the parted shell type "print" to list and verify the root partition is there.
(parted) print
If the "Flags" column of the root partition doesn't include "boot" then it will need to be enabled.
(parted) set <root partition number> boot on
After that use the "print" command again to verify the flag is now listed for the root partition. Then exit parted to save the changes:
(parted) quit
4. Updating Legacy GRUB(a) on SLE11 or GRUB2(b) on SLE12.
NOTE: Steps 4 through 6 will need to be done in a chroot environment on the new root disk. TID7018126 covers how to chroot in rescue mode:
https://www.suse.com/support/kb/doc?id=7018126
a. Updating Legacy GRUB on SLE11
# vim /boot/grub/menu.lst
There are two changes that may need to occur in the menu.lst file. 1. If the contents of
/boot are in the root partition which is being changed, we'll need to update the line "root
(hd#,#)" which points to the disk with the contents of /boot.
Since the sd[a-z] device names are not persistent it's recommended to find the equivalent /dev/disk/by-id/ or /dev/disk/by-path/ disk name and to use that instead. Also, the device name might be different in chroot than it was before chroot. Run this command to verify the disk name in chroot:
# mount
For this line Grub uses "hd[0-9]" rather than "sd[a-z]" so sda would be hd0 and sdb would be
hd1, and so on. Match to the disk as shown in the mount command within chroot. The partition
number in Legacy Grub also starts at 0. So if it were sda1 it would be hd0,0 and if it were
sdb2 it would be hd1,1. Update that line accordingly.
2. In the line starting with the word "kernel" (generally just below the root line we just went over) there should be a root=/dev/<old root disk> parameter. That will need to be updated to match the path and device name of the new root partition:
root=/dev/disk/by-id/<new root partition>
Also, if the swap partition was changed to the new disk you'll need to reflect that with the resume= parameter.
Save and exit after making the above changes as needed.
Next, run this command:
# yast2 bootloader
(You may get a warning message about the boot loader. This can be ignored.)
Go to the "Boot Loader Installation" tab with ALT + a. Verify it is set to boot from the correct partition. For example, if the content of /boot is in the root partition then make sure it is set to boot from the root partition. Lastly hit ALT + o so that it will save the configuration. While the YaST2 module is exiting it should also install the boot loader.
b. Updating GRUB2 on SLE12
# vim /etc/default/grub
The parameter to update is the GRUB_CMDLINE_LINUX_DEFAULT. If there is a "root=/dev/<old
root disk>" parameter update it so that it is "root=/dev/<new root disk>". If there is
no root= parameter in there add it. Each parameter is space separated so make sure there is a
space separating it from the other parameters. Also, if the swap partition was changed to the
new disk you'll need to reflect that with the resume= parameter.
Since the sd[a-z] device names are not persistent it's recommended to find the equivalent /dev/disk/by-id/ or /dev/disk/by-path/ disk name and to use that instead. Also, the device name might be different in chroot than it was before chroot. Run this command to verify the disk name in chroot before comparing with by-id or by-path:
# mount
It might look something like this afterward:
GRUB_CMDLINE_LINUX_DEFAULT="root=/dev/disk/by-id/<partition/disk name>
resume=/dev/disk/by-id/<partition/disk name> splash=silent quiet showopts"
After saving changes to that file run this command to save them to the GRUB2 configuration:
# grub2-mkconfig -o /boot/grub2/grub.cfg
(You can ignore any errors about lvmetad during the output of the above command.)
After that run this command on the disk with the root partition. For example, if the root
partition is sda2 run this command on sda:
# grub2-install /dev/<disk of root partition>
5. Correct the fstab file to match new partition name(s)
# vim /etc/fstab
Correct the root (/) partition mount row in the file so that it points to the new
disk/partition name. If any other partitions were changed they will need to be updated as well.
For example, changed from: /dev/<old root disk> / ext3 defaults 1 1 to:
/dev/disk/by-id/<new root disk> / ext3 defaults 1 1
The 3rd through 6th column may vary from the example. The important aspect is to change the
row that is root (/) on the second column and adjust in particular the first column to reflect
the new root disk/partition. Save and exit after making needed changes.
6. Lastly, run the following command to rebuild the ramdisk to match the updated information:
# mkinitrd
7. Exit chroot and reboot the system to test if it will boot using the new disk. Make sure to adjust the BIOS boot order so that the new disk is prioritized first.
Additional Information
The range of environments that can impact the necessary steps to migrate a root filesystem makes it near impossible to cover every case. Some environments could require tweaks in the steps needed to make this migration a success. As always in administration, have backups ready and proceed with caution.
Disclaimer
This Support Knowledgebase provides a valuable tool for SUSE customers and parties
interested in our products and solutions to acquire information, ideas and learn from one
another. Materials are provided for informational, personal or non-commercial use within your
organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.
How to move Linux root partition to another drive quickly
Dominik Gacek
Jun 21, 2019
There's a bunch of information over the internet on how to clone Linux drives or partitions between other drives and partitions using solutions like partclone, clonezilla, partimage, dd or similar, and while most of them work just fine, they're not always the fastest possible way to achieve the result.
Today I want to show you another approach that combines most of them, and I find it the easiest and fastest of all.
Assumptions:
You are using GRUB 2 as a boot loader
You have two disks/partitions where the destination one is at least the same size as or larger than the original one.
Let's dive into action.
Just "dd" it
First thing that we h ave to do, is to create a direct copy of our current root partition
from our source disk into our target one.
Before you start, you have to know the device names of your drives; to check them, type in:
sudo fdisk -l
You should see the list of all the disks and partitions inside your system, along with the corresponding device names, most probably something like /dev/sdx where the x is the device letter; in addition, you'll see all of the partitions for each device suffixed with a partition number, so something like /dev/sdx1.
Based on the partition size, device identifier and the file system, you can tell which partition you'll switch your installation from and which one will be the target.
I am assuming here that you already have the proper destination partition created; if you do
not, you can use a tool like GParted to create it.
Once you have those identifiers, let's use dd to create the clone, with a command similar to
the one below.
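For illustration, the invocation takes roughly this shape (the bs and status options are
optional additions for speed and a progress readout, not necessarily what the original article
used):
sudo dd if=/dev/sdx1 of=/dev/sdy1 bs=4M status=progress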
Where /dev/sdx1 is your source partition, and /dev/sdy1 is your
destination one.
It's really important to provide the proper devices in the if and of arguments, because
otherwise you can overwrite your source disk instead!
The above process will take a while and once it's finished you should already be able to
mount your new partition into the system by using two commands:
sudo mkdir /mnt/new
sudo mount /dev/sdy1 /mnt/new
There's also a chance that your device will be mounted automatically, but that depends on the
Linux distro of choice.
Once you execute it, if everything went smoothly you should be able to run
ls -l /mnt/new
As the outcome you should see all the files from the original partition stored in the new
location.
That finishes the first and most important part of the operation.
Now the tricky part
We do have our new partition moved onto the shiny new drive, but the problem is that, since
they're direct clones, both of the devices will have the same UUID, and if we want to load the
installation from the new device properly, we'll have to adjust that as well.
First, execute the following command to see the current disk UUIDs:
blkid
You'll see all of the partitions with the corresponding UUID.
Now, if we want to change it we have to first generate a new one using:
uuidgen
which will generate a brand new UUID for us; copy the result and execute a command similar to
the one below.
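Given the -U flag mentioned below, on an ext2/3/4 filesystem this is presumably a tune2fs call
along these lines (treat the exact tool as an assumption to verify):
sudo tune2fs -U <uuid-from-uuidgen> /dev/sdy1   # write the freshly generated UUID to the cloned partition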
where in place of /dev/sdy1 you should provide your target partition device identifier, and as
the -U flag value you should paste the value generated by the uuidgen command.
Now the last thing to do is to update the fstab file on the new partition so that it contains
the proper UUID. To do this, let's edit it (note that it is the copy on the new partition,
mounted at /mnt/new, that needs the change):
sudo vim /mnt/new/etc/fstab
# or nano or whatever editor of choice
you'll see something similar to the code below inside:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sdc1 during installation
UUID=cd6ecfb1-05e0-4dd7-89e7-8e78dad1fa0e / ext4 errors=remount-ro 0 1
# /home was on /dev/sdc2 during installation
UUID=667f98f4-9db1-415b-b326-65d16c528e29 /home ext4 defaults 0 2
/swapfile none swap sw 0 0
UUID=7AA7–10F1 /boot/efi vfat defaults 0 1
The UUID in the / row is the part that matters to us: what we want to do is paste our new UUID,
replacing the current one specified for the / path.
And that's almost it
The last thing you have to do is simply update GRUB.
There are a number of options here; the brave ones can edit
/boot/grub/grub.cfg directly.
Another option is to simply reinstall GRUB onto our new drive with a command like:
sudo grub-install /dev/sdy
(note that /dev/sdy here is the new disk; be careful not to point grub-install at the old one)
And if you do not want to bother with editing or reinstalling grub manually, you can simply
use the tool called grub-customizer to have a simple and easy GUI for all of those
operations.
No doubt the old spinning hard drives are the main bottleneck of any Linux PC. Overall system
responsiveness is highly dependent on storage drive performance.
So, here's how you can clone a HDD to an SSD without re-installing the existing Linux distro.
But first, let's be clear about a few things.
As you're planning to move your existing Linux installation to an SSD, there's a good chance
that the SSD has a smaller storage capacity (in GB) than the existing hard drive.
You don't need to worry about that, but you should clear up the existing hard drive as much as
possible, I mean delete the junk.
You should at least know what junk to exclude while copying the files; taking a backup of
important files is always good.
Here we'll assume that it's not a dual boot system, that only Linux is installed on the hard
drive.
Read this tutorial carefully before actually cloning to the SSD; anyway, there's almost no risk
of messing things up.
Of course it's not the only way to clone Linux from a HDD to an SSD, rather it's exactly what I
did after buying an SSD for my laptop.
This tutorial should work on every Linux distro with a little modification, depending on which
distro you're using; I was using Ubuntu.
Hardware setup
As you're going to copy files from the hard drive to the SSD, you need to attach both disks to
your PC/laptop at the same time.
For desktops it's easier, as there are always at least 2 SATA ports on the motherboard. You
just have to connect the SSD to any of the free SATA ports and you're done.
On laptops it's a bit tricky, as there's no free SATA port. If the laptop has a DVD drive, then
you could remove it and use a "2nd hard drive caddy".
It could be either 9.5 mm or 12.7 mm. Open up your laptop's DVD drive and get a rough
measurement.
But if you don't want to play around with your DVD drive, or there's no DVD drive at all, use a
USB to SATA adapter, preferably a USB 3 adapter for better speed. However, the caddy is the
best you can do with your laptop.
You'll need a bootable USB drive for later steps, to boot any live Linux distro of your choice;
I used Ubuntu. You could use any method to create it; the dd approach will be the simplest.
There are also detailed tutorials on creating a bootable USB with MultiBootUSB and a bootable
USB with GRUB.
Create Partitions on the SSD
After successfully attaching the SSD, you need to partition it according to its capacity and
your choice. My SSD, a SAMSUNG 850 EVO, was absolutely blank, and yours might be too, so I had
to create the partition table before creating disk partitions.
Now many questions arise, like: What kind of partition table? How many partitions? Is there any
need for a swap partition?
Well, if your laptop/PC has a UEFI based BIOS and you want to use the UEFI functionality, you
should use the GPT partition table.
For regular desktop use, 2 separate partitions are enough: a root partition and a home
partition.
But if you want to boot through UEFI, then you also need to create a FAT32 partition of 100 MB
or more.
I think a 32 GB root partition is just enough, but you have to decide yours depending on your
future plans. However, you can go with as low as an 8 GB root partition if you know what you're
doing.
Of course you don't need a dedicated swap partition, at least in my opinion. If there's any
need for swap in the future, you can just create a swap file.
So, here's how I partitioned the disk. It's formatted with the MBR partition table, with a 32
GB root partition, and the rest of the 256 GB (232.89 GiB) is home.
These SSD partitions were created with GParted on the existing Linux system on the HDD. The SSD
was connected to the DVD drive slot with a caddy, showing up as /dev/sdb here.
Mount the HDD and SSD partitions
At the beginning of this step, you need to shut down your PC and boot to any live Linux distro
of your choice from a bootable USB drive.
The purpose of booting to a live Linux session is to copy everything from the old root
partition in a cleaner way. I mean, why copy unnecessary files or directories under /dev,
/proc, /sys, /var, /tmp?
And of course you know how to boot from a USB drive, so I'm not going to repeat the same thing.
After booting to the live session, you have to mount both the HDD and the SSD.
As I used an Ubuntu live session, I just opened up the file manager to mount the volumes. At
this point you have to be absolutely sure about which are the old and the new root and home
partitions.
And if you didn't have any separate /home partition on the HDD previously, then you have to be
careful while copying files, as there could be lots of content that won't fit inside the tiny
root volume of the SSD in this case.
Finally, if you don't want to use any graphical tool like a file manager to mount the disk
partitions, that's even better. An example is below, only commands, not much explanation.
sudo -i # after booting to the live session
mkdir -p /mnt/{root1,root2,home1,home2} # Create the directories
mount /dev/sdb1 /mnt/root1/ # mount the root partitions
mount /dev/sdc1 /mnt/root2/
mount /dev/sdb2 /mnt/home1/ # mount the home partitions
mount /dev/sdc2 /mnt/home2/
Copy contents from the HDD to the SSD
In this step, we'll be using the rsync command to clone the HDD to the SSD while preserving
proper file permissions.
And we'll assume that all the partitions are mounted like below:
Old root partition of the hard drive mounted on /media/ubuntu/root/
Old home partition of the hard drive on /media/ubuntu/home/
New root partition of the SSD on /media/ubuntu/root1/
New home partition of the SSD mounted on /media/ubuntu/home1/
Actually, in my case both the root and home partitions were labelled root and home, so udisks2
created the mount directories like above.
Note: Most probably your mount points are different. Don't just copy-paste the commands below;
modify them according to your system and requirements.
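As a rough sketch, under the mount points listed above, the root copy is an rsync invocation
along these lines (the exact flags are an assumption; -a preserves permissions and ownership,
--info=progress2 prints overall progress):
rsync -avh --info=progress2 /media/ubuntu/root/ /media/ubuntu/root1/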
You can also see the transfer progress, which is helpful.
The copying process will take about 10 minutes or so to complete, depending on the size of its
contents.
Note: If there was no separate home partition on your previous installation and there's not
enough space in the SSD's root partition, exclude the /home directory.
Now copy the contents of one home partition to the other; this is a bit tricky if your SSD is
smaller in size than the HDD. You have to use the --exclude flag with rsync to exclude certain
large files or folders.
So here, for example, I wanted to exclude a few excessively large folders.
Excluding files and folders with rsync is a bit tricky: the source folder is the starting point
of any file or directory path. Make sure that the exclude path is specified correctly.
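A sketch of the shape such a command takes, with hypothetical folder names to exclude (the
paths are relative to the source directory):
rsync -avh --info=progress2 --exclude='Downloads/ISO' --exclude='.cache' /media/ubuntu/home/ /media/ubuntu/home1/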
Hope you've got the point: for a proper HDD to SSD cloning in Linux, copy the contents of the
HDD's root partition to the new SSD's root partition, and do the same thing for the home
partition too.
Install the GRUB bootloader on the SSD
The SSD won't boot until there's a properly configured bootloader, and there's a very good
chance that you were using GRUB as the boot loader.
So, to install GRUB, we have to chroot into the root partition of the SSD and install it from
there. Before that, be sure about which device under the /dev directory is your SSD. In my
case, it was /dev/sdb.
Note: You could just copy the first 512 bytes from the HDD and dump them to the SSD, but I'm
not going that way this time.
So, the first step is chrooting; here are all the commands below, running all of them as the
super user.
sudo -i # login as super user
mount -o bind /dev/ /media/ubuntu/root1/dev/
mount -o bind /dev/pts/ /media/ubuntu/root1/dev/pts/
mount -o bind /sys/ /media/ubuntu/root1/sys/
mount -o bind /proc/ /media/ubuntu/root1/proc/
chroot /media/ubuntu/root1/
After successfully chrooting to the SSD's root partition, install GRUB. And there's also a
catch: if you want to use a UEFI compatible GRUB, then it's another long path, but we'll be
installing the legacy BIOS version of GRUB here.
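For the legacy BIOS case described here, that boils down to pointing grub-install at the SSD
from inside the chroot (/dev/sdb is the example device used earlier in this article):
grub-install /dev/sdb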
If GRUB is installed without any problem, then update the configuration file:
update-grub
These two commands are to be run inside the chroot, and don't exit from the chroot yet. There
is also a detailed GRUB rescue tutorial, covering both legacy BIOS and UEFI systems.
Update the fstab entry
You have to update the fstab entries properly so that the filesystems are mounted correctly
while booting. Use the blkid command to find the proper UUIDs of the partitions.
Now open up the /etc/fstab file with your favorite text editor and add the proper root and home
UUIDs at the proper locations.
nano /etc/fstab
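For illustration, the root and home lines take roughly this shape (the UUIDs are placeholders;
use the values blkid reports for the SSD partitions):
UUID=<ssd-root-uuid>   /      ext4   errors=remount-ro   0   1
UUID=<ssd-home-uuid>   /home  ext4   defaults            0   2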
The above is the general pattern; the final fstab entries from my laptop's Ubuntu installation
followed it.
Shutdown and boot from the SSD
If you were using a USB to SATA converter to do all the above steps, then it's time to connect
the SSD to a SATA port.
For desktops it's not a problem, just connect the SSD to any of the available SATA ports. But
many laptops refuse to boot if the DVD drive is replaced with an SSD or HDD. So, in that case,
remove the hard drive and slip the SSD in its place.
After doing all the hardware stuff, it's better to check whether the SSD is recognized by the
BIOS/UEFI at all. Hit the BIOS setup button while powering up, and check all the disks.
If the SSD is detected, then set it as the default boot device. Save all the changes in the
BIOS/UEFI and hit the power button again.
Now it's the moment of truth: if the HDD to SSD cloning was done right, then Linux should boot.
It will boot much faster than before; you can check that with the systemd-analyze command.
Conclusion
As said before, it's neither the only way nor a perfect one, but it was pretty simple for me. I
got the idea from the OpenWrt extroot setup, but previously I used the squashfs tools instead
of rsync.
It took around 20 minutes to clone my HDD to the SSD. But, well, writing this tutorial took
around 15 times longer than that.
Hope I'll be able to add the GRUB installation process for UEFI based systems to this tutorial
soon, stay tuned!
Also please don't forget to share your thoughts and suggestions in the comment section.
Your comments
Sh3l says: December 21, 2020
Hello,
It seems you haven't gotten around writing that UEFI based article yet. But right now I really need the steps necessary
to clone hdd to ssd in UEFI based system. Can you please let me know how to do it?
Reply
Create an extra UEFI partition, along with root and home partitions, FAT32, 100 to 200 MB, install GRUB in UEFI mode,
it should boot.
Commands should be like this -
mount /dev/sda2 /boot/efi
grub-install /dev/sda --target=x86_64-efi
Then edit the grub.cfg file under /boot/grub/ , you're good to go.
If it's not booting try GRUB rescue, boot and install grub from there.
Reply
Pronay Guha says: November 9, 2020
I'm already using Kubuntu 20.04, and now I'm trying to add an SSD to my laptop. It is running windows alongside. I want
the data to be there but instead of using HDD, the Kubuntu OS should use SSD. How to do it?
Reply
none says: May 23, 2020
Can you explain what to do if the original HDD has Swap and you don't want it on the SSD?
Thanks.
Reply
You can ignore the Swap partition, as it's not essential for booting.
Edit the /etc/fstab file, and use a swap file instead.
Reply
none says: May 21, 2020
A couple of problems:
In one section you mount homeS and rootS as root1 root2 home1 home2 but in the next sectionS you call them root root1
home home1
In the blkid image sda is SSD and sdb is HDD but you said in the previous paragraph that sdb is your SSD
Thanks for the guide
Reply
The first portion is just an example, not the actual commands.
There's some confusing paragraphs and formatting error, I agree.
Reply
oybek says: April 21, 2020
Thank you very much for the article
Yesterday moved linux from hdd to ssd without any problem
Brilliant article
Reply
Pronay Guha says: November 9, 2020
hey, I'm trying to move Linux from HDD to SSD with windows as a dual boot option.
What changes should I do?
Reply
Passingby says: March 25, 2020
Thank you for your article. It was very helpful. But i see one disadvantage. When you copy like cp -a
/media/ubuntu/root/ /media/ubuntu/root1/ In root1 will be created root folder, but not all its content separately
without folder. To avoid this you must add (*) after /
It should be looked like cp -a /media/ubuntu/root/* /media/ubuntu/root1/ For my opinion rsync command is much more
better. You see like files copping. And when i used cp, i did not understand the process hanged up or not.
Reply
Thanks for pointing out the typo.
Yeas, rsync is better.
Reply
David Keith says: December 8, 2018
Just a quick note: rsync, scp, cp etc. all seem to have a file size limitation of approximately 100GB. So this tutorial
will work well with the average filesystem, but will bomb repeatedly if the file size is extremely large.
Reply
oldunixguy says: June 23, 2018
Question: If one doesn't need to exclude anything why not use "cp -a" instead of rsync?
Question: You say "use a UEFI compatible GRUB, then it's another long path" but you don't tell us how to do this for
UEFI. How do we do it?
Reply
You're most welcome, truly I don't know how to respond such a praise. Thanks!
Reply
Emmanuel says: February 3, 2018
Far the best tutorial I've found "quickly" searching DuckDuckGo. Planning to migrate my system on early 2018. Thank you!
I now visualize quite clearly the different steps I'll have to adapt and pass through. it also stick to the KISS* thank
you again, the time you invested is very useful, at least for me!
Author: Vivek Gite
Last updated: March 14, 2006
/dev/shm is nothing but an implementation of the traditional shared memory concept. It is an
efficient means of passing data between programs. One program will create a memory portion,
which other processes (if permitted) can access. This results in speeding things up on Linux.
shm / shmfs is also known as tmpfs, which is a common name for a temporary file storage
facility on many Unix-like operating systems. It is intended to appear as a mounted file
system, but one which uses virtual memory instead of a persistent storage device.
If you type the mount command you will see /dev/shm as a tmpfs file system. Therefore, it is a
file system which keeps all files in virtual memory. Everything in tmpfs is temporary in the
sense that no files will be created on your hard drive. If you unmount a tmpfs instance,
everything stored therein is lost. By default almost all Linux distros are configured to use
/dev/shm:
$ df -h
Sample outputs:
You can use /dev/shm to improve the performance of application software such as Oracle, or
overall Linux system performance. On a heavily loaded system, it can make tons of difference.
For example, VMware Workstation/Server can be optimized to improve your Linux host's
performance (i.e. improve the performance of your virtual machines).
In this example, remount /dev/shm with 8G size as follows:
# mount -o remount,size=8G /dev/shm
To be frank, if you have more than 2GB of RAM plus multiple virtual machines, this hack always
improves performance. In this example, you will create a tmpfs instance on /disk2/tmpfs which
can allocate 5GB of RAM/swap in 5k inodes and is only accessible by root:
# mount -t tmpfs -o size=5G,nr_inodes=5k,mode=700 tmpfs /disk2/tmpfs
Where,
-o opt1,opt2 : Pass various options with a -o flag followed by a comma separated string of
options. In these examples, I used the following options:
remount : Attempt to remount an already-mounted filesystem. In this example, remount the
filesystem and increase its size.
size=8G or size=5G : Override the default maximum size of the /dev/shm filesystem. The size is
given in bytes, and rounded up to entire pages. The default is half of the memory. The size
parameter also accepts a suffix % to limit this tmpfs instance to that percentage of your
physical RAM: the default, when neither size nor nr_blocks is specified, is size=50%. In this
example it is set to 8GiB or 5GiB. The tmpfs mount options for sizing (size, nr_blocks, and
nr_inodes) accept a suffix k, m or g for Ki, Mi, Gi (binary kilo, mega and giga) and can be
changed on remount.
nr_inodes=5k : The maximum number of inodes for this instance. The default is half of the
number of your physical RAM pages, or (on a machine with highmem) the number of lowmem RAM
pages, whichever is lower.
mode=700 : Set the initial permissions of the root directory.
tmpfs : Tmpfs is a file system which keeps all files in virtual memory.
How do I restrict or modify size of /dev/shm permanently?
You need to add or modify the entry in the /etc/fstab file so that the system reads it after a
reboot. Edit /etc/fstab as the root user, enter:
# vi /etc/fstab
Append or modify the /dev/shm entry as follows to set the size to 8G:
none /dev/shm tmpfs defaults,size=8G 0 0
Save and close the file. For the changes to take effect immediately remount /dev/shm:
# mount -o remount /dev/shm
Verify the same:
# df -h
The root user's home directory is /root. I would like to relocate this, and any other user's
home directories, to a new location, perhaps on sda9. How do I go about this?
– asked by nicholas.alipaz, Nov 30 '10 at 17:27 (debian, user-management, linux)
Do you need to have /root on a separate partition, or would it be enough to simply copy
the contents somewhere else and set up a symbolic link? (Disclaimer: I've never tried this,
but it should work.) – SmallClanger Nov 30 '10 at 17:31
You should avoid symlinks; they can make nasty bugs appear... one day. And those are very hard
to debug.
Use mount --bind :
# as root
cp -a /root /home/
echo "" >> /etc/fstab
echo "/home/root /root none defaults,bind 0 0" >> /etc/fstab
# do it now
cd / ; mv /root /root.old; mkdir /root; mount -a
it will be made at every reboots which you should do now if you want to catch errors soon
– answered by shellholic, Nov 30 '10 at 17:51
You're welcome. But remember moving /root is a bad practice. Perhaps you
could change a bit and make /home/bigrootfiles and mount/link it to some
directory inside /root . If your "big files" are for some service. The best
practice on Debian is to put them in /var/lib/somename – shellholic Nov 30 '10 at
18:40
I see. Ultimately root login should not be used IMO. I guess I still might forgo moving
/root entirely since it is not really very good to do. I just need to setup some new sudoer
users with directories on the right partition and setup keyed authentication for better
security. That would be the best solution I think. – nicholas.alipaz Nov 30 '10 at
18:42
Perhaps make a new question describing the purpose of your case and you could come with
great answers. – shellholic Nov 30 '10 at 18:45
Never tried it, but you shouldn't have a problem with:
cd /                      # make sure you're not in the directory to be moved
mv /root /home/root
ln -s /home/root /root    # symlink it back to the original location
– answered by James L, Nov 30 '10 at 17:32
Booting from a live CD is unfortunately not an option for a remote server, which is the case
here. – nicholas.alipaz Nov 30 '10 at 17:54
I think that worked in the past - if you do update-grub and grub-install at the end.
However, with debian 10 grub sends me back to have my old partition as the root. –
user855443 Jun 11
'20 at 22:10
The dmesg command is used to print the kernel's message buffer. This is another
important command that you cannot work without. It is much easier to troubleshoot a system when
you can see what is going on, and what happened behind the scenes.
Another example from real life: You are troubleshooting an issue and find out that one file
system is at 100 percent of its capacity.
There may be many subdirectories and files in production, so you may have to come up with
some way to classify the "worst directories" because the problem (or solution) could be in one
or more.
In the next example, I will show a very simple scenario to illustrate the point.
We go to the file system where the disk space is low (I used my home directory as an
example).
Then, we use the command du -sk * to show the sizes of the directories in kilobytes.
That output needs some sorting for us to find the big ones, but plain sort is not enough
because, by default, this command does not treat the numbers as values but just as characters.
We add -n to the sort command, which now shows us the biggest directories.
In case we have to navigate to many other directories, creating an alias
might be useful.
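As a sketch of the pipeline described above (the directory and the alias name are arbitrary
choices):
cd /home/someuser                              # the filesystem that is filling up
du -sk * | sort -n | tail -5                   # directory sizes in KB, numerically sorted, biggest last
alias bigdirs='du -sk * | sort -n | tail -5'   # handy when checking many directories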
Linux users should immediately patch a serious vulnerability in the sudo command that, if
exploited, can allow unprivileged users to gain root privileges on the host machine.
Called Baron Samedit, the flaw has been "hiding in plain sight" for about 10 years, and was
discovered earlier this month by researchers at Qualys and reported to sudo developers, who
came up with patches Jan. 19, according to
a Qualys blog . (The blog includes a video of the flaw being exploited.)
A new version of sudo -- sudo v1.9.5p2 -- has been created to patch the
problem, and notifications have been posted for many Linux distros including Debian, Fedora,
Gentoo, Ubuntu, and SUSE, according to Qualys.
According to the common vulnerabilities and exposures (CVE) description of Baron Samedit (
CVE-2021-3156 ), the flaw can
be exploited "via 'sudoedit -s' and a command-line argument that ends with a single backslash
character."
According to Qualys, the flaw was introduced in July 2011 and affects legacy versions from
1.8.2 to 1.8.31p2 as well as default configurations of versions from 1.9.0 to 1.9.5p1.
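The Qualys advisory also describes a quick way to check whether a particular system is
affected; roughly (verify the exact behaviour against the advisory itself):
sudoedit -s /      # run as a regular, non-root user
An error beginning with "sudoedit:" suggests the system is still vulnerable; an error beginning
with "usage:" suggests it is already patched.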
$ colordiff attendance-2020 attendance-2021
10,12c10
< Monroe Landry
< Jonathan Moody
< Donnell Moore
---
> Sandra Henry-Stocker
If you add a -u option, those lines that are included in both files will appear in your
normal font color.
wdiff
The wdiff command uses a different strategy. It highlights the lines that are only in the
first or second files using special characters. Those surrounded by square brackets are only in
the first file. Those surrounded by braces are only in the second file.
$ wdiff attendance-2020 attendance-2021
Alfreda Branch
Hans Burris
Felix Burt
Ray Campos
Juliet Chan
Denver Cunningham
Tristan Day
Kent Farmer
Terrie Harrington
[-Monroe Landry <== lines in file 1 start
Jonathon Moody
Donnell Moore-] <== lines only in file 1 stop
{+Sandra Henry-Stocker+} <== line only in file 2
Leanne Park
Alfredo Potter
Felipe Rush
vimdiff
The vimdiff command takes an entirely different approach. It uses the vim editor to open the
files in a side-by-side fashion. It then highlights the lines that are different using
background colors and allows you to edit the two files and save each of them separately.
Unlike the commands described above, it runs on the desktop, not in a terminal
window.
On Debian systems, you can install vimdiff with this command:
$ sudo apt install vim
kompare
The kompare command, like vimdiff , runs on your desktop. It displays differences between
files to be viewed and merged and is often used by programmers to see and manage differences in
their code. It can compare files or folders. It's also quite customizable.
The kdiff3 tool allows you to compare up to three files and not only see the differences
highlighted, but merge the files as you see fit. This tool is often used to manage changes and
updates in program code.
Like vimdiff and kompare , kdiff3 runs on the desktop.
You can find more information on kdiff3 at sourceforge .
Tags provide an easy way to associate strings that look like hash tags (e.g., #HOME ) with
commands that you run on the command line. Once a tag is established, you can rerun the
associated command without having to retype it. Instead, you simply type the tag. The idea is
to use tags that are easy to remember for commands that are complex or bothersome to
retype.
Unlike setting up an alias, tags are associated with your command history. For this reason,
they only remain available if you keep using them. Once you stop using a tag, it will slowly
disappear from your command history file. Of course, for most of us, that means we can type 500
or 1,000 commands before this happens. So, tags are a good way to rerun commands that are going
to be useful for some period of time, but not for those that you want to have available
permanently.
To set up a tag, type a command and then add your tag at the end of it. The tag must start
with a # sign and should be followed immediately by a string of letters. This keeps the tag
from being treated as part of the command itself. Instead, it's handled as a comment but is
still included in your command history file. Here's a very simple and not particularly useful
example:
$ history | grep TAG
998 08/11/20 08:28:29 echo "I like tags" #TAG <==
999 08/11/20 08:28:34 history | grep TAG
Afterwards, you can rerun the echo command shown by entering !? followed by the tag.
$ !? #TAG
echo "I like tags" #TAG
"I like tags"
The point is that you will likely only want to do this when the command you want to run
repeatedly is so complex that it's hard to remember or just annoying to type repeatedly. To
list your most recently updated files, for example, you might use a tag #REC (for "recent") and
associate it with the appropriate ls command. The command below lists files in your home
directory regardless of where you are currently positioned in the file system, lists them in
reverse date order, and displays only the five most recently created or changed files.
$ ls -ltr ~ | tail -5 #REC <== Associate the tag with a command
drwxrwxr-x 2 shs shs 4096 Oct 26 06:13 PNGs
-rw-rw-r-- 1 shs shs 21 Oct 27 16:26 answers
-rwx------ 1 shs shs 644 Oct 29 17:29 update_user
-rw-rw-r-- 1 shs shs 242528 Nov 1 15:54 my.log
-rw-rw-r-- 1 shs shs 266296 Nov 5 18:39 political_map.jpg
$ !? #REC <== Run the command that the tag is associated with
ls -ltr ~ | tail -5 #REC
drwxrwxr-x 2 shs shs 4096 Oct 26 06:13 PNGs
-rw-rw-r-- 1 shs shs 21 Oct 27 16:26 answers
-rwx------ 1 shs shs 644 Oct 29 17:29 update_user
-rw-rw-r-- 1 shs shs 242528 Nov 1 15:54 my.log
-rw-rw-r-- 1 shs shs 266296 Nov 5 18:39 political_map.jpg
You can also rerun tagged commands using Ctrl-r (hold Ctrl key and press the "r" key) and
then typing your tag (e.g., #REC). In fact, if you are only using one tag, just typing # after
Ctrl-r should bring it up for you. The Ctrl-r sequence, like !? , searches through your command
history for the string that you enter.
Tagging locations
Some people use tags to remember particular file system locations, making it easier to
return to directories they're working in without having to type complete directory
paths.
$ cd /apps/data/stats/2020/11 #NOV
$ cat stats
$ cd
!? #NOV <== takes you back to /apps/data/stats/2020/11
After using the #NOV tag as shown, whenever you need to move into the directory associated
with #NOV , you have a quick way to do so – and one that doesn't require that you think
too much about where the data files are stored.
NOTE: Tags don't need to be in all uppercase letters, though this makes them easier to
recognize and unlikely to conflict with any commands or file names that are also in your
command history.
Alternatives to tags
While tags can be very useful, there are other ways to do the same things that you can do
with them.
To make commands easily repeatable, assign them to aliases.
$ alias recent="ls -ltr ~ | tail -5"
To make multiple commands easily repeatable, turn them into a script.
To make file system locations easier to navigate to, create symbolic links.
$ ln -s /apps/data/stats/2020/11 NOV
To rerun recently used commands, use the up arrow key to back up through your command
history until you reach the command you want to reuse and then press the enter key.
You can also rerun recent commands by typing something like "history | tail -20" and then
type "!" following by the number to the left of the command you want to rerun (e.g.,
!999).
Wrap-up
Tags are most useful when you need to run complex commands again and again in a limited
timeframe. They're easy to set up and they fade away when you stop using them.
One easy way to reuse a previously entered command (one that's still on your command
history) is to type the beginning of the command. If the bottom of your history buffer looks
like this, you could rerun the ps command that's used to count system processes simply by
typing just !p .
$ history | tail -7
1002 21/02/21 18:24:25 alias
1003 21/02/21 18:25:37 history | more
1004 21/02/21 18:33:45 ps -ef | grep systemd | wc -l
1005 21/02/21 18:33:54 ls
1006 21/02/21 18:34:16 echo "What's next?"
You can also rerun a command by entering a string that was included anywhere within it. For
example, you could rerun the ps command shown in the listing above by typing !?sys? The
question marks act as string delimiters.
$ !?sys?
ps -ef | grep systemd | wc -l
5
You could rerun the command shown in the listing above by typing !1004 but this would be
more trouble if you're not looking at a listing of recent commands.
Run previous commands
with changes
After the ps command shown above, you could count kworker processes instead of systemd
processes by typing ^systemd^kworker^ . This replaces one process name with the other and runs
the altered command. As you can see in the commands below, this string substitution allows you
to reuse commands when they differ only a little.
$ sudo ls -l /var/log/samba/corse
ls: cannot access '/var/log/samba/corse': No such file or directory
$ ^se^es^
sudo ls -l /var/log/samba/cores
total 8
drwx------. 2 root root 4096 Feb 16 10:50 nmbd
drwx------. 2 root root 4096 Feb 16 10:50 smbd
Reach back into history
You can also reuse commands with a character string that asks, for example, to rerun the
command you entered some number of commands earlier. Entering !-11 would rerun the command you
typed 11 commands earlier. In the output below, the !-3 reruns the first of the three earlier
commands displayed.
$ ps -ef | wc -l
132
$ who
shs pts/0 2021-02-21 18:19 (192.168.0.2)
$ date
Sun 21 Feb 2021 06:59:09 PM EST
$ !-3
ps -ef | wc -l
133
Reuse command arguments
Another thing you can do with your command history is reuse arguments that you provided to
various commands. For example, the character sequence !:1 represents the first argument
provided to the most recently run command, !:2 the second, !:3 the third and so on. !:$
represents the final argument. In this example, the arguments are reversed in the second echo
command.
$ echo be the light
be the light
$ echo !:3 !:2 !:1
echo light the be
light the be
$ echo !:3 !:$
echo light light
light light
If you want to run a series of commands using the same argument, you could do something like
this:
$ echo nemo
nemo
$ id !:1
id nemo
uid=1001(nemo) gid=1001(nemo) groups=1001(nemo),16(fish),27(sudo)
$ df -k /home/!:$
df -k /home/nemo
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdb1 446885824 83472864 340642736 20% /home
Of course, if the argument was a long and complicated string, it might actually save you
some time and trouble to use this technique. Please remember this is just an
example!
Wrap-Up
Simple history command tricks can often save you a lot of trouble by allowing you to reuse
rather than retype previously entered commands. Remember, however, that using strings to
identify commands will recall only the most recent use of that string and that you can only
rerun commands in this way if they are being saved in your history buffer.
Shadow IT has been presented as a new threat to IT departments because of the cloud. Not
true -- the cloud has simply made it easier for non-IT personnel to acquire and create their
own solutions without waiting for IT's permission. Moreover, the cloud has made this means of
technical problem-solving more visible, bringing shadow IT into the light. In fact, "shadow IT"
is more of a legacy pejorative for what should better be labeled "DIY IT." After all, shadow IT
has always been about people solving their own problems with technology.
Here we take a look at how your organization can best go about leveraging the upside of DIY
IT.
What sends non-IT problem-solvers into the shadows
The IT department is simply too busy, overworked, understaffed, underutilized, and sometimes
even too disinterested to take on every marketing Web application idea or mobile app initiative
for field work that comes its way. There are too many strategic initiatives, mission-critical
systems, and standards committee meetings, so folks outside IT are often left with little
recourse but to invent their own solutions using whatever technical means and expertise they
have or can find.
How can this be a bad thing? The usual fears are that:
They are sharing critical, private data with the wrong people somehow.
Their data is fundamentally flawed, inaccurate, or out of date.
Their data would be of use to many others, but they don't know it exists.
Their ability to solve their own problems is a threat to IT.
Because shadow IT practitioners are subject matter experts in their domain, the second
drawback is unlikely. The third is an opportunity lost, but that's not scary enough to sweat.
The first and fourth are the most likely to instill fear -- with good reason. If something goes
wrong with a home-grown shadow IT solution, the IT department will likely be made responsible,
even if you didn't know it existed.
The wrong response to these fears is to try to eradicate shadow IT. Because if you really
want to wipe out shadow IT, you would have to have access to all the network logs, corporate
credit card reports, phone bills, ISP bills, and firewall logs, and it would take some effort
to identify and block all unauthorized traffic in and out of the corporate network. You would
have to rig the network to refuse to connect to unsanctioned devices, as well as block access
to websites and cloud services like Gmail, Dropbox, Salesforce, Google apps, Trello, and so on.
Simply knowing all you would have to block access to would be a job in itself.
Worse, if you clamp down on DIY solutions you become an obstacle, and attempts to solve
departmental problems will submerge even further into the shadows -- but it will never go away.
The business needs underlying DIY IT are too important.
The reality is, if you shift your strategy to embrace DIY solutions the right way, people
would be able to safely solve their own problems without too much IT involvement and IT would
be able to accomplish more for the projects where its expertise and oversight is truly
critical.
Embrace DIY IT
Seek out shadow IT projects and help them, but above all respect the fact that this
problem-solving technique exists. The folks who launch a DIY project are not your enemies; they
are your co-workers, trying to solve their own problems, hampered by limited resources and
understanding. The IT department may not have many more resources to spread around, but you
have an abundance of technical know-how. Sharing that does not deplete it.
You can find the trail of shadow IT by looking at network logs, scanning email traffic and
attachments, and so forth. You must be willing to support these activities, even if you do
not like them . Whether or not you like them, they exist, and they likely have good reasons
for existing. It doesn't matter if they were not done with your permission or to your
specifications. Assume that they are necessary and help them do it right.
Take the lead -- and lead
IT departments have the expertise to help others select the right technical solution for
their needs. I'm not talking about RFPs, vendor/product evaluation meetings, software selection
committees -- those are typically time-wasting, ivory-tower circuses that satisfy no one. I'm
talking about helping colleagues figure out what it is they truly want and teaching them how to
evaluate and select a solution that works for them -- and is compliant with a small set of
minimal, relevant standards and policies.
That expertise could be of enormous benefit to the rest of the company, if only it was
shared. An approachable IT department that places a priority on helping people solve their own
problems -- instead of expending enormous effort trying to prevent largely unlikely, possibly
even imaginary problems -- is what you should be striving for.
Think of it as being helpful without being intrusive. Sharing your expertise and taking the
lead in helping non-IT departments help themselves not only shows consideration for your
colleagues' needs, but it also helps solve real problems for real people -- while keeping the
IT department informed about the technology choices made throughout the organization. Moreover,
it sets up the IT department for success instead of surprises when the inevitable integration
and data migration requests appear.
Plus, it's a heck of a lot cheaper than reinventing the wheel unnecessarily.
Create policies everyone can live with
IT is responsible for critical policies concerning the use of devices, networks, access to
information, and so on. It is imperative that IT have in place a sane set of policies to
safeguard the company from loss, liability, leakage, incomplete/inaccurate data, and security
threats both internal and external. But everyone else has to live with these policies, too. If
they are too onerous or convoluted or byzantine, they will be ignored.
Therefore, create policies that respect everyone's concerns and needs, not IT's alone.
Here's the central question to ask yourself: Are you protecting the company or merely the
status quo?
Security is a legitimate concern, of course, but most SaaS vendors understand security at
least as well as you do, if not better. Being involved in the DIY procurement process (without
being a bottleneck or a dictator) lets you ensure that minimal security criteria are met.
Data integrity is likewise a legitimate concern, but control of company data is largely an
illusion. You can make data available or not, but you cannot control how it is used once
accessed. Train and trust your people, and verify their activities. You should not and cannot
make all decisions for them in advance.
Regulatory compliance, auditing, traceability, and so on are legitimate concerns, but
they do not trump the rights of workers to solve their own problems. All major companies
in similar fields are subject to the same regulations as your company. How you choose to comply
with those regulations is up to you. The way you've always done it is not the only way, and
it's probably not even the best way. Here, investigating what the major players in your field
do, especially if they are more modern, efficient, and "cloudy" than you, is a great start.
The simplest way to manage compliance is to isolate the affected software from the rest of
the system, since compliance is more about auditing and accountability than proscriptive
processes. The major movers and shakers in the Internet space are all over new technologies,
techniques, employee empowerment, and streamlining initiatives. Join them, or eat their
dust.
Champion DIY IT
Once you have a sensible set of policies in place, it's high time to shine a light on shadow
IT -- a celebratory spotlight, that is.
By championing DIY IT projects, you send a clear message that co-workers have no need to
hide how they go about solving their problems. Make your intentions friendly and clear up
front: that you are intent on improving business operations, recognizing and rewarding
innovators and risk-takers, finding and helping those who need assistance, and promoting good
practices for DIY IT. A short memo/email announcing this from a trusted, well-regarded
executive is highly recommended.
Here are a few other ideas to help you embrace DIY IT:
Establish and publicize "office hours" for free consultations to help guide people toward
better, technically informed choices. Offer advice, publish research, make recommendations,
and help any way you can.
Offer platform services to make it easier for co-workers to get the cloud resources they
need while providing known, safe environments for them to use.
Ask people what software or systems they're using -- a simple survey or email can reveal
a lot. Offer a checklist of software you think people might or should be using with a few
blanks for services not listed to get the conversations started. Encourage people to track
what they really use for a day or a week. Let them know you are looking for existing
solutions to enhance and support, not searching for "contraband."
Examine your internal server/email traffic. Are there patterns or spikes of large
documents or long-running connections? Investigate the source of these, and help them
optimize. For example, if design engineers routinely email each other gigantic design
documents every Wednesday, provide them with a secure shared drive to use instead. Follow
the bandwidth to get to the source -- and help them work better.
Examine your support load for patterns, such as an uptick in calls for new software or
unsupported/unrecognized software, or a severe downturn in calls for old software. This may
indicate that an older, problematic system has been surreptitiously replaced.
Publicize and praise prior DIY IT projects, recognize and reward their creators, and
share their results and techniques with other departments. To spread successful practices,
provide proof of your good intentions and publicize the benefits of reasonable IT efforts
outside the IT department. Make it known, with strong executive support, that you want these
projects to succeed and you want the fruits of these labors to be recognized, applauded, and
shared for the betterment of the company.
Ask everyone, systematically, what they've done and/or need help with. Don't ask, "Are
you doing shady shadow IT things?" The answer will be no, of course not. Instead ask, "How
can we help you eliminate or simplify repetitive, mind-numbing activities?"
If possible, provide a roving high-caliber but small team of IT and devops specialists to
make "house calls" to help people get set up, fix problems, and improve DIY IT projects. This
will help the projects succeed while remaining compliant with balanced IT policies. Plus,
being visibly proactive and helpful is good for public relations.
When warranted, embed a small IT team within a business unit to help them solve their
larger problems, and share that learning with the rest of the company.
DIY IT can be a great benefit to your organization by relieving the load on the IT
department and enabling more people to tap technical tools to be more productive in their work
-- a win for everyone. But it can't happen without sane and balanced policies, active support
from IT, and a companywide awareness that this sort of innovation and initiative is valued.
The most solid tip is not to take this self-review seriously ;-)
And contrary to Anthony Critelli's opinion, this is not about "selling yourself". This is about
management control of the workforce. In other words, annual performance reviews are a
mechanism of repression.
Use of corporate bullsh*t is probably the simplest and the most advisable strategy during
those exercises. I like the recommendation "Tie your accomplishments to business goals and
values" below. Never be frank in such situations.
... you sell yourself by reminding your management team that you provide a great deal of
objective value to the organization and that you deserve to be compensated accordingly. When
I say compensation , I don't just mean salary. Compensation means different things to
different people: Maybe you really want more pay, extra vacation time, a promotion, or even a
lateral move. A well-written self-review can help you achieve these goals, assuming they are
available at your current employer.
... ... ...
Tie your accomplishments to business goals and values
...It's hard to argue that decreasing user downtime from days to hours isn't a valuable
contribution.
... ... ...
... I select a skill, technology, or area of an environment that I am weak in, and I
discuss how I would like to build my knowledge. I might discuss how I want to improve my
understanding of Kubernetes as we begin to adopt a containerization strategy, or I might
describe how my on-call effectiveness could be improved by deepening my knowledge of a
particular legacy environment.
... ... ...
Many of my friends and colleagues don't look forward to review season. They find it
distracting and difficult to write a self-review. Often, they don't even know where to begin
writing about their work from the previous year.
AutoKey is an open source
Linux desktop automation tool that, once it's part of your workflow, you'll wonder how you ever
managed without. It can be a transformative tool to improve your productivity or simply a way
to reduce the physical stress associated with typing.
This article will look at how to install and start using AutoKey, cover some simple recipes
you can immediately use in your workflow, and explore some of the advanced features that
AutoKey power users may find attractive.
Install and set up AutoKey
AutoKey is available as a software package on many Linux distributions. The project's
installation
guide contains directions for many platforms, including building from source. This article
uses Fedora as the operating platform.
AutoKey comes in two variants: autokey-gtk, designed for GTK -based environments such as GNOME, and autokey-qt, which is
QT -based.
You can install either variant from the command line:
sudo dnf install autokey-gtk
Once it's installed, run it by using autokey-gtk (or autokey-qt
).
Explore the interface
Before you set AutoKey to run in the background and automatically perform actions, you will
first want to configure it. Bring up the configuration user interface (UI):
autokey-gtk -c
AutoKey comes preconfigured with some examples. You may wish to leave them while you're
getting familiar with the UI, but you can delete them if you wish.
The left pane contains a folder-based hierarchy of phrases and scripts. Phrases are
text that you want AutoKey to enter on your behalf. Scripts are dynamic, programmatic
equivalents that can be written using Python and achieve basically the same result of making
the keyboard send keystrokes to an active window.
The right pane is where the phrases and scripts are built and configured.
Once you're happy with your configuration, you'll probably want to run AutoKey automatically
when you log in so that you don't have to start it up every time. You can configure this in the
Preferences menu ( Edit -> Preferences ) by selecting Automatically start AutoKey at login
.
Correcting common typos is an easy problem for AutoKey to fix. For example, I consistently
type "gerp" instead of "grep." Here's how to configure AutoKey to fix these kinds of problems
for you.
Create a new subfolder where you can group all your "typo correction" configurations. Select
My Phrases in the left pane, then File -> New -> Subfolder . Name the subfolder Typos
.
Create a new phrase in File -> New -> Phrase , and call it "grep."
Configure AutoKey to insert the correct word by highlighting the phrase "grep" then entering
"grep" in the Enter phrase contents section (replacing the default "Enter phrase contents"
text).
Next, set up how AutoKey triggers this phrase by defining an Abbreviation. Click the Set
button next to Abbreviations at the bottom of the UI.
In the dialog box that pops up, click the Add button and add "gerp" as a new abbreviation.
Leave Remove typed abbreviation checked; this is what instructs AutoKey to replace any typed
occurrence of the word "gerp" with "grep." Leave Trigger when typed as part of a word unchecked
so that if you type a word containing "gerp" (such as "fingerprint"), it won't attempt
to turn that into "fingreprint." It will work only when "gerp" is typed as an isolated
word.
It is now possible to limit yum to install only security updates (as opposed to
bug fixes or enhancements) using Red Hat Enterprise Linux 5,6, and 7. To do so, simply install
the yum-security plugin:
For Red Hat Enterprise Linux 7 and 8
The plugin is already a part of yum itself, no need to install anything.
For Red Hat Enterprise Linux 5 and 6
# yum install yum-security
To list all available erratas without installing them, run the command shown below.
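On RHEL 7/8 this is the updateinfo subcommand; a sketch that lists all advisories and then only
the security ones (verify the exact filters against your yum version):
# yum updateinfo list available
# yum updateinfo list security all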
NOTE: yum update --security will install the latest available version of any package with at
least one security errata, and thus can install non-security erratas if they provide a more
updated version of the package.
To install only the packages that have a security errata, use the command shown below.
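This is the same --security flag that the Satellite discussion further down refers to:
# yum update --security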
If you are interested in seeing whether a given CVE, or a list of CVEs, is applicable, you can
use this method:
1) Get the list of CVEs you are interested in from Red Hat.
- If you wanted to limit the search to a specific rpm such as "openssl", then at that above
Red Hat link, you can enter "openssl" and filter out only openssl items, or filter against
any other search term
- Place these into a file, one line after another, such as this limited example:
NOTE: The CVEs below come from limiting the search to "openssl" in the manner I described
above, and the list is not complete; there are plenty more for your date range.
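2) A sketch of checking those CVEs against the system (the cve-list.txt file name is arbitrary,
and the updateinfo syntax should be verified against your yum version):
# yum updateinfo list cves | grep -f cve-list.txt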
What's annoying is that "yum update --security" shows 20 packages to update for security
but when listing the installable errata in Satellite it shows 102 errata available and yet
all those errata don't contain the errata.
You might be hitting https://bugzilla.redhat.com/show_bug.cgi?id=1408508 , where the metadata
generated has an empty package list for some errata in some circumstances, causing yum to think
such an errata is not applicable (as no package would be updated by applying that errata).
I recommend finding one of the errata that the Satellite WebUI offers but yum isn't aware of,
and (z)grep-ing that errata id within the yum cache, to see if there is something like:
I've got an interesting requirement in that a customer wants to only allow updates of packages
with attached security errata (to limit unnecessary drift/updates of the OS platform), i.e.
restrict, warn or block the use of a generic 'yum update' by an admin, as it will update all
packages.
There are other approaches which I have currently implemented, including limiting what is made
available to the servers through Satellite so yum update doesn't 'see' non-security errata...
but I guess what I'm really interested in is limiting (through client config) the inadvertent
use of "yum update" by an administrator, or redirecting/mapping 'yum update' to 'yum update
--security'. I appreciate an admin can work around any restriction, but it's really to limit
accidental use of a full 'yum update' by well-intentioned admins.
Current approaches are to alias yum, move yum and write a shim in its place (to
warn/redirect if yum update is called), or patch the yum package itself (which i'd like to
avoid). Any other suggestions appreciated.
Why not create a specific content view for security patching purposes?
In that content view, you create a filter that keeps only security updates.
In your patch management process, you can create a script that changes the content view of a
host (or host group) on the fly, then applies security patches, and finally switches back to
the original content view (if you leave the admin the possibility to install additional
programs when necessary).
If it's a kernel update, you will have to. For other packages, it's recommended, to ensure that
you are not still running the old libraries in memory. If you are just patching one particular
independent service (e.g., http), you can probably get away without a full system reboot.
The primary difference between converged infrastructure (CI) and
hyper-converged infrastructure is that in HCI, both the storage area network and the underlying
storage abstractions are implemented virtually in software (at or via the hypervisor) rather
than physically, in hardware.
Because all of the software-defined elements are implemented within the context of the
hypervisor, management of all resources can be federated (shared)
across all instances of a hyper-converged infrastructure. Expected benefits [
edit ]
Hyperconvergence evolves away from discrete, hardware-defined systems that are connected and
packaged together toward a purely software-defined environment where all functional elements
run on commercial, off-the-shelf (COTS) servers, with the convergence of elements enabled by a
hypervisor. [1][2] HCI
infrastructures are usually made up of server systems equipped with Direct-Attached Storage (DAS) .
[3] HCI
includes the ability to plug and play into a data-center pool of like systems. [4][5] All
physical data-center resources reside on a single administrative platform for both hardware and
software layers. [6]
Consolidation of all functional elements at the hypervisor level, together with federated
management, eliminates traditional data-center inefficiencies and reduces the total cost of
ownership (TCO) for data centers. [7][8][9]
The potential impact of the hyper-converged infrastructure is that companies will no longer
need to rely on different compute and storage systems, though it is still too early to prove
that it can replace storage arrays in all market segments. [10] It
is likely to further simplify management and increase resource-utilization rates where it does
apply. [11][12][13]
Using the journalctl utility of systemd, you can query these logs and perform various operations on them:
for example, viewing the log files from different boots, or checking the last warnings and errors from a specific process or application.
If you are unaware of these, I would suggest you quickly go through the tutorial
"Use journalctl to View and Analyze
Systemd Logs [With Examples]" before you follow this guide.
Where are the physical journal log files?
systemd's journald daemon collects logs from every boot. That means it classifies the log files per boot.
The logs are stored in binary form in /var/log/journal, in a folder named after the machine ID.
For example:
(Screenshots: physical journal files under /var/log/journal)
Also, remember that, based on system configuration, runtime journal files are stored at /run/log/journal/, and these
are removed on each boot.
Can I manually delete the log files?
You can, but don't do it. Instead, follow the below instructions to clear the log files to free up disk space using journalctl
utilities.
How much disk space is used by systemd log files?
Open up a terminal and run the below command.
journalctl --disk-usage
This shows how much disk space is actually used by the log files on your system.
If you have a graphical desktop environment, you can open the file manager and browse to the path /var/log/journal
and check the properties.
systemd journal clean-up process
The proper way to limit the log files is through the journald.conf configuration file. Ideally, you should
not manually delete the log files, even though journalctl provides utilities to do that.
Let's take a look at how you can delete them manually; then I will explain
the configuration changes in journald.conf so that you do not need to manually delete the files from time to time. Instead,
systemd takes care of it automatically based on your configuration.
Manual delete
First, you have to flush and rotate the log files. Rotating is a way of marking the currently active log
files as archived and creating fresh log files from this moment on. The flush switch asks the journal daemon to flush any log data stored
in /run/log/journal/ into /var/log/journal/, if persistent storage is enabled.
Then, after the flush and rotate, you run journalctl with the vacuum-size, vacuum-time, or
vacuum-files switches to force systemd to clear the logs.
Example 1:
sudo journalctl --flush --rotate
sudo journalctl --vacuum-time=1s
The above set of commands removes all archived journal log files until the last second. This effectively clears everything, so be
careful while running these commands.
journal clean up example
After clean up:
After clean up journal space usage
You can also provide one of the following suffixes after the number, as per your need:
s: seconds
m: minutes
h: hours
d: days
w: weeks
M: months
y: years
Example 2:
sudo journalctl --flush --rotate
sudo journalctl --vacuum-size=400M
This clears all archived journal log files and retains the last 400MB of files. Remember, this switch applies only to archived log
files, not to active journal files. You can also use the suffixes below.
K: KB
M: MB
G: GB
Example 3:
sudo journalctl --flush --rotate
sudo journalctl --vacuum-files=2
The vacuum-files switch removes archived journal files so that no more than the specified number remain. So, in the above example, only the last 2 journal
files are kept and everything else is removed. Again, this only works on the archived files.
You can combine the switches if you want, but I would recommend not to. However, make sure to run with the --rotate switch
first.
Automatic delete using config files
While the above methods are good and easy to use, it is recommended that you control the journal log file clean-up process
using the journald configuration file, which is present at /etc/systemd/journald.conf.
systemd provides many parameters for you to effectively manage the log files. By combining these parameters you can effectively
limit the disk space used by the journal files. Let's take a look.
journald.conf parameter -- Description -- Example
SystemMaxUse -- Specifies the maximum disk space that can be used by the journal in persistent storage. -- SystemMaxUse=500M
SystemKeepFree -- Specifies the amount of space that the journal should leave free when adding journal entries to persistent storage. -- SystemKeepFree=100M
SystemMaxFileSize -- Controls how large individual journal files can grow in persistent storage before being rotated. -- SystemMaxFileSize=100M
RuntimeMaxUse -- Specifies the maximum disk space that can be used in volatile storage (within the /run filesystem). -- RuntimeMaxUse=100M
RuntimeKeepFree -- Specifies the amount of space to be set aside for other uses when writing data to volatile storage (within the /run filesystem). -- RuntimeKeepFree=100M
RuntimeMaxFileSize -- Specifies the amount of space that an individual journal file can take up in volatile storage (within the /run filesystem) before being rotated. -- RuntimeMaxFileSize=200M
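As a minimal sketch (the values are arbitrary examples, not recommendations), a [Journal] section combining these parameters might look like this:
[Journal]
SystemMaxUse=500M
SystemKeepFree=100M
SystemMaxFileSize=100M
RuntimeMaxUse=100M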
If you add these values to the /etc/systemd/journald.conf file on a running system, then you have to restart journald
after updating the file. To restart it, use the following command.
sudo systemctl restart systemd-journald
Verification of log files
It is wise to check the integrity of the log files after you clean them up. To do that, run the command below. The command
prints PASS or FAIL for each journal file.
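Presumably the command in question is journalctl's built-in verification switch:
sudo journalctl --verify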
Elastic Stack , commonly abbreviated as ELK , is a popular three-in-one log centralization, parsing, and visualization tool that
centralizes large sets of data and logs from multiple servers into one server.
ELK stack comprises 3 different products:
Logstash
Logstash is a free and open-source
data pipeline that collects logs and events data and even processes and transforms the data to the desired output. Data is sent to
logstash from remote servers using agents called ' beats '. The ' beats ' ship a huge volume of system metrics and logs to Logstash
whereupon they are processed. It then feeds the data to Elasticsearch .
Elasticsearch
Built on Apache Lucene , Elasticsearch is an open-source
and distributed search and analytics engine for nearly all types of data both structured and unstructured. This includes textual,
numerical, and geospatial data.
It was first released in 2010. Elasticsearch is the central component of the ELK stack and is renowned for its speed, scalability,
and REST APIs. It stores, indexes, and analyzes huge volumes of data passed on from Logstash .
Kibana
Data is finally passed on to Kibana , which is a WebUI visualization
platform that runs alongside Elasticsearch . Kibana allows you to explore and visualize time-series data and logs from elasticsearch.
It visualizes data and logs on intuitive dashboards which take various forms such as bar graphs, pie charts, histograms, etc.
Graylog is yet another popular and powerful centralized log management
tool that comes with both open-source and enterprise plans. It accepts data from clients installed on multiple nodes and, just like
Kibana , visualizes the data on dashboards on a web interface.
Graylog plays a monumental role in making business decisions touching on user interaction with a web application. It collects vital
analytics on the app's behavior and visualizes the data in various graphs such as bar graphs, pie charts, and histograms, to mention
a few. The data collected informs key business decisions.
For example, you can determine peak hours when customers place orders using your web application. With such insights in hand,
the management can make informed business decisions to scale up revenue.
Unlike the ELK stack, Graylog offers a single-application solution for data collection, parsing, and visualization; it removes the
need to install multiple individual components separately. Graylog
collects and stores data in MongoDB, which is then visualized on user-friendly and intuitive dashboards.
Graylog is widely used by developers in different phases of app deployment in tracking the state of web applications and obtaining
information such as request times, errors, etc. This helps them to modify the code and boost performance.
3. Fluentd
Written in a combination of C and Ruby, Fluentd is a cross-platform and open-source log monitoring
tool that unifies log and data collection from multiple data sources. It's completely open source and licensed under the Apache 2.0
license. In addition, there's a subscription model for enterprise use.
Fluentd processes both structured and semi-structured sets of data. It analyzes application logs, events logs, clickstreams and
aims to be a unifying layer between log inputs and outputs of varying types.
It structures data in a JSON format allowing it to seamlessly unify all facets of data logging including the collection, filtering,
parsing, and outputting logs across multiple nodes.
Fluentd has a small footprint and is resource-friendly, so you won't have to worry about running out of memory or your
CPU being over-utilized. Additionally, it boasts a flexible plugin architecture: users can take advantage of over 500 community-developed
plugins to extend its functionality.
4. LOGalyze
LOGalyze is a powerful
network monitoring and log management
tool that collects and parses logs from network devices, Linux, and Windows hosts. It was initially commercial but is now completely
free to download and install without any limitations.
LOGalyze is ideal for analyzing server and application logs and presents them in various report formats such as PDF, CSV, and
HTML. It also provides extensive search capabilities and real-time event detection of services across multiple nodes.
Like the aforementioned log monitoring tools, LOGalyze also provides a neat and simple web interface that allows users to log
in and monitor various data sources and
analyze log files .
5. NXlog
NXlog is yet another powerful and versatile tool for log collection and centralization.
It's a multi-platform log management utility that is tailored to pick up policy breaches, identify security risks and analyze issues
in system, application, and server logs.
NXlog is capable of collating event logs from numerous endpoints in varying formats, including Syslog and Windows event
logs. It can perform a range of log-related tasks such as log rotation, log rewrites, and log compression, and can also be configured
to send alerts.
You can download NXlog in two editions: the community edition, which is free to download and use, and the enterprise edition,
which is subscription-based.
"... Manipulating history is usually less dangerous than it sounds, especially when you're curating it with a purpose in mind. For instance, if you're documenting a complex problem, it's often best to use your session history to record your commands because, by slotting them into your history, you're running them and thereby testing the process. Very often, documenting without doing leads to overlooking small steps or writing minor details wrong. ..."
To block adding a command to the history, you can place a
space before the command, as long as you have ignorespace in your
HISTCONTROL environment variable. To delete an entry that is already in your session history, use the -d option with the entry number:
$ history | tail
535 echo "foo"
536 echo "bar"
$ history -d 536
$ history | tail
535 echo "foo"
You can clear your entire session history with the -c option:
$ history -c
$ history
$
History lessons
Manipulating history is usually less dangerous than it sounds, especially when you're
curating it with a purpose in mind. For instance, if you're documenting a complex problem, it's
often best to use your session history to record your commands because, by slotting them into
your history, you're running them and thereby testing the process. Very often, documenting
without doing leads to overlooking small steps or writing minor details wrong.
Use your history sessions as needed, and exercise your power over history wisely. Happy
history hacking!
As soon as I log into a server, the first thing I do is check whether it has the operating
system, kernel, and hardware architecture needed for the tests I will be running. I often check
how long a server has been up and running. While this does not matter very much for a test
system because it will be rebooted multiple times, I still find this information helpful.
Use the following commands to get this information. I mostly use Red Hat Linux for testing,
so if you are using another Linux distro, use *-release in the filename instead of
redhat-release :
cat /etc/redhat-release
uname -a
hostnamectl
uptime
2. Is anyone else on board?
Once I know that the machine meets my test needs, I need to ensure no one else is logged
into the system at the same time running their own tests. Although it is highly unlikely, given
that the provisioning system takes care of this for me, it's still good to check once in a
while -- especially if it's my first time logging into a server. I also check whether there are
other users (other than root) who can access the system.
Use the following commands to find this information. The last command looks for users in the
/etc/passwd file who have shell access; it skips other services in the file that
do not have shell access or have a shell set to nologin :
who
who -Hu
grep 'sh$' /etc/passwd
3. Physical or virtual machine
Now that I know I have the machine to myself, I need to identify whether it's a physical
machine or a virtual machine (VM). If I provisioned the machine myself, I could be sure that I
have what I asked for. However, if you are using a machine that you did not provision, you
should check whether the machine is physical or virtual.
Use the following commands to identify this information. If it's a physical system, you will
see the vendor's name (e.g., HP, IBM, etc.) and the make and model of the server; whereas, in a
virtual machine, you should see KVM, VirtualBox, etc., depending on what virtualization
software was used to create the VM:
dmidecode -s system-manufacturer
dmidecode -s system-product-name
lshw -c system | grep product | head -1
cat /sys/class/dmi/id/product_name
cat /sys/class/dmi/id/sys_vendor
4. Hardware
Because I often test hardware connected to the Linux machine, I usually work with physical
servers, not VMs. On a physical machine, my next step is to identify the server's hardware
capabilities -- for example, what kind of CPU is running, how many cores does it have, which
flags are enabled, and how much memory is available for running tests. If I am running network
tests, I check the type and capacity of the Ethernet or other network devices connected to the
server.
Use the following commands to display the hardware connected to a Linux server. Some of the
commands might be deprecated in newer operating system versions, but you can still install them
from yum repos or switch to their equivalent new commands:
lscpu or cat /proc/cpuinfo
lsmem or cat /proc/meminfo
ifconfig -a
ethtool <devname>
lshw
lspci
dmidecode
5. Installed software
Testing software always requires installing additional dependent packages, libraries, etc.
However, before I install anything, I check what is already installed (including what version
it is), as well as which repos are configured, so I know where the software comes from, and I
can debug any package installation issues.
Use the following commands to identify what software is installed:
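The original command list did not survive here; on a Red Hat-style system, the usual candidates would be:
rpm -qa
rpm -qa | grep <package>
yum list installed
yum repolist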
Once I check the installed software, it's natural to check what processes are running on the
system. This is crucial when running a performance test on a system -- if a running process,
daemon, test software, etc. is eating up most of the CPU/RAM, it makes sense to stop that
process before running the tests. This is also when I check that the processes or daemons the tests
require are up and running. For example, if the tests require httpd to be running, the service
to start the daemon might not have run even if the package is installed.
Use the following commands to identify running processes and enabled services on your
system:
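Again, the specific commands are missing here; standard tools that cover this step are:
ps -ef
top
systemctl list-units --type=service --state=running
systemctl list-unit-files --state=enabled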
Today's machines are heavily networked, and they need to communicate with other machines or
services on the network. I identify which ports are open on the server, whether there are any
connections from the network to the test machine, whether a firewall is enabled and, if so, whether it is
blocking any ports, and which DNS servers the machine talks to.
Use the following commands to identify network services-related information. If a deprecated
command is not available, install it from a yum repo or use the equivalent newer
command:
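The command list is again missing; typical choices (ss being the newer replacement for netstat) would be:
ss -tulnp
netstat -tulnp
firewall-cmd --list-all
cat /etc/resolv.conf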
When doing systems testing, I find it helpful to know kernel-related information, such as
the kernel version and which kernel modules are loaded. I also list any tunable
kernel parameters and what they are set to and check the options used when booting the
running kernel.
Use the following commands to identify this information:
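A plausible set of commands for this step (kernel version, loaded modules, tunables, and boot options) is:
uname -r
lsmod
sysctl -a
cat /proc/cmdline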
If you've ever typed a command at the Linux shell prompt, you've probably already used bash -- after all, it's the default command
shell on most modern GNU/Linux distributions.
The bash shell is the primary interface to the Linux operating system -- it accepts, interprets and executes your commands, and
provides you with the building blocks for shell scripting and automated task execution.
Bash's unassuming exterior hides some very powerful tools and shortcuts. If you're a heavy user of the command line, these can
save you a fair bit of typing. This document outlines 10 of the most useful tools:
Easily recall previous commands
Bash keeps track of the commands you execute in a history buffer, and allows you
to recall previous commands by cycling through them with the Up and Down cursor keys. For even faster recall, "speed search" previously-executed
commands by typing the first few letters of the command followed by the key combination Ctrl-R; bash will then scan the command
history for matching commands and display them on the console. Type Ctrl-R repeatedly to cycle through the entire list of matching
commands.
Use command aliases
If you always run a command with the same set of options, you can have bash create an alias for it. This alias will incorporate
the required options, so that you don't need to remember them or manually type them every time. For example, if you always run
ls with the -l option to obtain a detailed directory listing, you can use this command:
bash> alias ls='ls -l'
to create an alias that automatically includes the -l option. Once this alias has been created, typing ls at the bash prompt
will invoke the alias and produce the ls -l output.
You can obtain a list of available aliases by invoking alias without any arguments, and you can delete an alias with unalias.
Use filename auto-completion
Bash supports filename auto-completion at the command prompt. To use this feature, type
the first few letters of the file name, followed by Tab. bash will scan the current directory, as well as all other directories
in the search path, for matches to that name. If a single match is found, bash will automatically complete the filename for you.
If multiple matches are found, you will be prompted to choose one.
Use key shortcuts to efficiently edit the command line
Bash supports a number of keyboard shortcuts for command-line
navigation and editing. The Ctrl-A key shortcut moves the cursor to the beginning of the command line, while the Ctrl-E shortcut
moves the cursor to the end of the command line. The Ctrl-W shortcut deletes the word immediately before the cursor, while the
Ctrl-K shortcut deletes everything immediately after the cursor. You can undo a deletion with Ctrl-Y.
Get automatic notification of new mail
You can configure bash to automatically notify you of new mail by setting
the $MAILPATH variable to point to your local mail spool. For example, a command such as the one below
causes bash to print a notification on john's console every time a new message is appended to john's mail spool.
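The original example command was not preserved; assuming john's local spool is /var/spool/mail/john, it would be something like:
bash> export MAILPATH='/var/spool/mail/john'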
Run tasks in the background
Bash lets you run one or more tasks in the background, and selectively suspend or resume
any of the current tasks (or "jobs"). To run a task in the background, add an ampersand (&) to the end of its command line. Here's
an example:
bash> tail -f /var/log/messages &
[1] 614
Each task backgrounded in this manner is assigned a job ID, which is printed to the console. A task can be brought back to
the foreground with the command fg jobnumber, where jobnumber is the job ID of the task you wish to bring to the
foreground. Here's an example:
bash> fg 1
A list of active jobs can be obtained at any time by typing jobs at the bash prompt.
Quickly jump to frequently-used directories
You probably already know that the $PATH variable lists bash's "search
path" -- the directories it will search when it can't find the requested file in the current directory. However, bash also supports
the $CDPATH variable, which lists the directories the cd command will look in when attempting to change directories. To use this
feature, assign a directory list to the $CDPATH variable, as shown in the example below:
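For instance (the directory list is purely illustrative):
bash> export CDPATH='.:~:/var/log:/etc'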
Now, whenever you use the cd command, bash will check all the directories in the $CDPATH list for matches to the directory
name.
Perform calculations
Bash can perform simple arithmetic operations at the command prompt. To use this feature, simply
type in the arithmetic expression you wish to evaluate at the prompt within double parentheses, as illustrated below. Bash will
attempt to perform the calculation and return the answer.
bash> echo $((16/2))
8
Customise the shell prompt
You can customise the bash shell prompt to display -- among other things -- the current
username and host name, the current time, the load average and/or the current working directory. To do this, alter the $PS1 variable,
as below:
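The original example was not preserved; using bash's standard escape sequences, a prompt matching that description would be roughly:
bash> export PS1='\u@\h:\w [\t]\$ '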
This will display the name of the currently logged-in user, the host name, the current working directory and the current time
at the shell prompt. You can obtain a list of symbols understood by bash from its manual page.
Get context-specific help
Bash comes with help for all built-in commands. To see a list of all built-in commands,
type help. To obtain help on a specific command, type help command, where command is the command you need help on.
Here's an example:
bash> help alias
...some help text...
Obviously, you can obtain detailed help on the bash shell by typing man bash at your command prompt at any time.
convert2rhel is an RPM package which contains a Python 2.x script written in a completely
incomprehensible, over-modularized manner. Python obscurantism in action ;-)
It looks like a "blackbox" tool unless you know Python well. As such, it is dangerous to rely upon.
Ensure that you have access to RHEL packages through custom repositories configured
in the /etc/yum.repos.d/ directory and pointing, for example, to a RHEL ISO, FTP, or
HTTP. Note that the OS will be converted to the version of RHEL provided by these
repositories. Make sure that the RHEL minor version is the same or later than the original
OS minor version to prevent downgrading and potential conversion failures. See
instructions on how to configure a repository.
Recommended: Update packages from the original OS to the latest version that is
available in the repositories accessible from the system, and restart the
system:
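A minimal way to do that (plain yum, shown here only as an illustration):
# yum update -y
# reboot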
Without performing this step, the rollback feature will not work
correctly, and exiting the conversion in any phase may result in a dysfunctional
system.
IMPORTANT:
Before starting the conversion process, back up your system.
NOTE: Packages that are available only in the original distribution and do not have
corresponding counterparts in RHEL repositories, or third-party packages, which
originate neither from the original Linux distribution nor from RHEL, are left
unchanged.
Before Convert2RHEL starts replacing packages from the original
distribution with RHEL packages, the following warning message is
displayed:
The tool allows rollback of any action until this point.
By continuing all further changes on the system will need to be reverted manually by the user, if necessary.
Changes made by Convert2RHEL up to this point can be automatically
reverted. Confirm that you wish to proceed with the conversion process.
Wait until Convert2RHEL installs the RHEL packages.
NOTE: After a successful conversion, the utility prints out the
convert2rhel command with all arguments necessary for running
non-interactively. You can copy the command and use it on systems with a similar
setup.
At this point, the system still runs with the original distribution kernel loaded in
RAM. Reboot the system to boot into the newly installed RHEL kernel.
Remove third-party packages from the original OS that remained unchanged (typically
packages that do not have a RHEL counterpart). To get a list of such packages,
use:
# yum list extras --disablerepo="*" --enablerepo=<RHEL_RepoID>
If necessary, reconfigure system services after the conversion.
Troubleshooting
Logs
The Convert2RHEL utility stores the convert2rhel.log file in
the /var/log/convert2rhel/ directory. Its content is identical to what is
printed to the standard output.
The output of the rpm -Va command, which is run automatically unless the
--no-rpm-va option is used, is stored in the
/var/log/convert2rhel/rpm_va.log file for debugging purposes.
The link to "instructions on how to configure a repository" is not working (404).
Also it would be great if the tool installs the repos that are needed for the conversion
itself.
Thanks, Stefan, for pointing that out. Before we fix that, you can use this link:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/system_administrators_guide/ch-yum#sec-Setting_repository_Options
Regarding the second point of yours - this article explains how to use convert2rhel
with custom repositories. Since Red Hat does not have the RHEL repositories public, we
leave it up to the user where they obtain the RHEL repositories. For example, when they
have a subscribed RHEL system in their company, they can create a mirror of the RHEL
repositories available on that system by following this guide:
https://access.redhat.com/solutions/23016.
However, convert2rhel is also able to connect to Red Hat Subscription Management
(RHSM); for that, you need to provide the subscription-manager package and pass the
subscription credentials to convert2rhel. Then convert2rhel chooses the right
repository to use for the conversion. You can find a step-by-step guide for that in
https://www.redhat.com/en/blog/converting-centos-rhel-convert2rhel-and-satellite.
We are working on improving the user experience related to the use of RHSM.
It might surprise you to know that if you
forget to flip the network interface card (NIC) switch to the ON position (shown in the image below) during
installation, your Red Hat-based system will boot with the NIC disconnected:
(Image: Setting the NIC to the ON position during installation.)
But, don't worry, in this article I'll
show you how to set the NIC to connect on every boot and I'll show you how to disable/enable your NIC on demand.
If your NIC isn't enabled at startup, you have to edit the /etc/sysconfig/network-scripts/ifcfg-NIC_name
file, where NIC_name is your system's NIC device name. In my case, it's enp0s3. Yours might be eth0, eth1, em1, etc.
List your network devices and their IP addresses with the ip addr command:
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:81:d0:2d brd ff:ff:ff:ff:ff:ff
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff
Note that my primary NIC (enp0s3) has no
assigned IP address. I have virtual NICs because my Red Hat Enterprise Linux 8 system is a VirtualBox virtual
machine. After you've figured out what your physical NIC's name is, you can now edit its interface configuration
file:
$ sudo vi /etc/sysconfig/network-scripts/ifcfg-enp0s3
and change the ONBOOT="no" entry to ONBOOT="yes" as shown below:
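For reference, a trimmed-down ifcfg-enp0s3 with the relevant change applied might read (other lines omitted):
TYPE="Ethernet"
BOOTPROTO="dhcp"
NAME="enp0s3"
DEVICE="enp0s3"
ONBOOT="yes"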
You don't need to reboot to start the NIC,
but after you make this change, the primary NIC will be on and connected upon all subsequent boots.
To enable the NIC, use the ifup command:
ifup enp0s3
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)
Now the ip addr command displays the enp0s3 device with an IP address:
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 08:00:27:81:d0:2d brd ff:ff:ff:ff:ff:ff
inet 192.168.1.64/24 brd 192.168.1.255 scope global dynamic noprefixroute enp0s3
valid_lft 86266sec preferred_lft 86266sec
inet6 2600:1702:a40:88b0:c30:ce7e:9319:9fe0/64 scope global dynamic noprefixroute
valid_lft 3467sec preferred_lft 3467sec
inet6 fe80::9b21:3498:b83c:f3d4/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff
To disable a NIC, use the ifdown command.
Please note that issuing this command from a remote system will terminate your session:
ifdown enp0s3
Connection 'enp0s3' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)
That's a wrap
It's frustrating to encounter a Linux system that has no network connection. It's even more
frustrating to have to connect to a virtual KVM or to walk up to the console to fix it. It's easy
to miss the switch during installation; I've missed it myself. Now you know how to fix the problem
and have your system network-connected on every boot, so before you drive yourself crazy with
troubleshooting steps, try the ifup command to see if that's your easy fix.
When you press a machine's power button, the boot process starts with a hardware-dependent
mechanism that loads a bootloader . The bootloader software finds the kernel on the disk
and boots it. Next, the kernel mounts the root filesystem and executes an init
process.
This process sounds simple, and it might be what actually happens on some Linux systems.
However, modern Linux distributions have to support a vast set of use cases for which this
procedure is not adequate.
First, the root filesystem could be on a device that requires a specific driver. Before
trying to mount the filesystem, the right kernel module must be inserted into the running
kernel. In some cases, the root filesystem is on an encrypted partition and therefore needs a
userspace helper that asks the user for the passphrase and feeds it to the kernel. Or, the root
filesystem could be shared over the network via NFS or iSCSI, and mounting it may first require
configured IP addresses and routes on a network interface.
To overcome these issues, the bootloader can pass to the kernel a small filesystem image
(the initrd) that contains scripts and tools to find and mount the real root filesystem. Once
this is done, the initrd switches to the real root, and the boot continues as usual.
The dracut infrastructure
On Fedora and RHEL, the initrd is built through dracut . From its home page , dracut is "an event-driven initramfs
infrastructure. dracut (the tool) is used to create an initramfs image by copying tools and
files from an installed system and combining it with the dracut framework, usually found in
/usr/lib/dracut/modules.d ."
A note on terminology: Sometimes, the names initrd and initramfs are used
interchangeably. They actually refer to different ways of building the image. An initrd is an
image containing a real filesystem (for example, ext2) that gets mounted by the kernel. An
initramfs is a cpio archive containing a directory tree that gets unpacked as a tmpfs.
Nowadays, the initrd images are deprecated in favor of the initramfs scheme. However, the
initrd name is still used to indicate the boot process involving a temporary
filesystem.
Kernel command-line
Let's revisit the NFS-root scenario that was mentioned before. One possible way to boot via
NFS is to use a kernel command-line containing the root=dhcp argument.
The kernel command-line is a list of options passed to the kernel from the bootloader,
accessible to the kernel and applications. If you use GRUB, it can be changed by pressing the e
key on a boot entry and editing the line starting with linux .
The dracut code inside the initramfs parses the kernel command-line and starts DHCP on all
interfaces if the command-line contains root=dhcp . After obtaining a DHCP lease,
dracut configures the interface with the parameters received (IP address and routes); it also
extracts the value of the root-path DHCP option from the lease. The option carries an NFS
server's address and path (which could be, for example, 192.168.50.1:/nfs/client
). Dracut then mounts the NFS share at this location and proceeds with the boot.
If there is no DHCP server providing the address and the NFS root path, the values can be
configured explicitly in the command line:
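The two ip= forms did not survive the copy; reconstructed roughly from the dracut.cmdline man page, they are:
ip=<interface>:{dhcp|on|any|dhcp6|auto6|ibft}
ip=<client-IP>:[<peer>]:<gateway-IP>:<netmask>:<hostname>:<interface>:{none|off|dhcp|on|any|dhcp6|auto6|ibft}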
The first form can be used for automatic configuration (DHCP or IPv6 SLAAC), and the second for
static configuration or a combination of automatic and static. Here are some examples:
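For instance (the addresses reuse the NFS server from the earlier example and are otherwise made up):
ip=enp1s0:dhcp
ip=192.168.50.101::192.168.50.1:255.255.255.0:myclient:enp1s0:none root=nfs:192.168.50.1:/nfs/client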
Note that if you pass an ip= option, but dracut doesn't need networking to
mount the root filesystem, the option is ignored. To force network configuration without a
network root, add rd.neednet=1 to the command line.
You probably noticed that among automatic configuration methods, there is also ibft .
iBFT stands for iSCSI Boot Firmware Table and is a mechanism to pass parameters about iSCSI
devices from the firmware to the operating system. iSCSI (Internet Small Computer Systems
Interface) is a protocol to access network storage devices. Describing iBFT and iSCSI is
outside the scope of this article. What is important is that by passing ip=ibft to
the kernel, the network configuration is retrieved from the firmware.
Dracut also supports adding custom routes, specifying the machine name and DNS servers,
creating bonds, bridges, VLANs, and much more. See the dracut.cmdline man page for more
details.
Network modules
The dracut framework included in the initramfs has a modular architecture. It comprises a
series of modules, each containing scripts and binaries to provide specific functionality. You
can see which modules are available to be included in the initramfs with the command
dracut --list-modules .
At the moment, there are two modules to configure the network: network-legacy
and network-manager . You might wonder why different modules provide the same
functionality.
network-legacy is older and uses shell scripts calling utilities like
iproute2 , dhclient , and arping to configure
interfaces. After the switch to the real root, a different network configuration service runs.
This service is not aware of what the network-legacy module intended to do and the
current state of each interface. This can lead to problems maintaining the state across the
root switch boundary.
A prominent example of a state to be kept is the DHCP lease. If an interface's address
changed during the boot, the connection to an NFS share would break, causing a boot
failure.
To ensure a seamless transition, there is a need for a mechanism to pass the state between
the two environments. However, passing the state between services having different
configuration models can be a problem.
The network-manager dracut module was created to improve this situation. The
module runs NetworkManager in the initrd to configure connection profiles generated from the
kernel command-line. Once done, NetworkManager serializes its state, which is later read by the
NetworkManager instance in the real root.
Fedora 31 was the first distribution to switch to network-manager in initrd by
default. On RHEL 8.2, network-legacy is still the default, but
network-manager is available. On RHEL 8.3, dracut will use
network-manager by default.
Enabling a different network module
While the two modules should be largely compatible, there are some differences in behavior.
Some of those are documented in the nm-initrd-generator man page. In general, it
is suggested to use the network-manager module when NetworkManager is enabled.
To rebuild the initrd using a specific network module, use one of the following
commands:
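The exact commands are not preserved here; using standard dracut options, switching to the network-manager module looks like:
# dracut --add network-manager --omit network-legacy --force
# dracut --add network-manager --omit network-legacy --force --regenerate-all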
The --regenerate-all option also rebuilds all the initramfs images for the
kernel versions found on the system.
The network-manager dracut module
As with all dracut modules, the network-manager module is split into stages
that are called at different times during the boot (see the dracut.modules man page for more
details).
The first stage parses the kernel command-line by calling
/usr/libexec/nm-initrd-generator to produce a list of connection profiles in
/run/NetworkManager/system-connections . The second part of the module runs after
udev has settled, i.e., after userspace has finished handling the kernel events for devices
(including network interfaces) found in the system.
When NM is started in the real root environment, it registers on D-Bus, configures the
network, and remains active to react to events or D-Bus requests. In the initrd, NetworkManager
is run in the configure-and-quit=initrd mode, which doesn't register on D-Bus
(since it's not available in the initrd, at least for now) and exits after reaching the
startup-complete event.
The startup-complete event is triggered after all devices with a matching connection profile
have tried to activate, successfully or not. Once all interfaces are configured, NM exits and
calls dracut hooks to notify other modules that the network is available.
Note that the /run/NetworkManager directory containing generated connection
profiles and other runtime state is copied over to the real root so that the new NetworkManager
process running there knows exactly what to do.
Troubleshooting
If you have network issues in dracut, this section contains some suggestions for
investigating the problem.
The first thing to do is add rd.debug to the kernel command-line, enabling debug logging in
dracut. Logs are saved to /run/initramfs/rdsosreport.txt and are also available in
the journal.
If the system doesn't boot, it is useful to get a shell inside the initrd environment to
manually check why things aren't working. For this, there is an rd.break command-line argument.
Note that the argument spawns a shell when the initrd has finished its job and is about to give
control to the init process in the real root filesystem. To stop at a different stage of dracut
(for example, after command-line parsing), use the following argument:
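For example, to break right after command-line parsing:
rd.break=cmdline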
The initrd image contains a minimal set of binaries; if you need a specific tool at the
dracut shell, you can rebuild the image, adding what is missing. For example, to add the ping
and tcpdump binaries (including all their dependent libraries), run:
# dracut -f --install "ping tcpdump"
and then optionally verify that they were included successfully:
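One way to check, using the standard lsinitrd tool:
# lsinitrd | grep -E 'ping|tcpdump'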
If you are familiar with NetworkManager configuration, you might want to know how a given
kernel command-line is translated into NetworkManager connection profiles. This can be useful
to better understand the configuration mechanism and find syntax errors in the command-line
without having to boot the machine.
The generator is installed in /usr/libexec/nm-initrd-generator and must be
called with the list of kernel arguments after a double dash. The --stdout option
prints the generated connections on standard output. Let's try to call the generator with a
sample command line:
$ /usr/libexec/nm-initrd-generator --stdout -- \
ip=enp1s0:dhcp:00:99:88:77:66:55 rd.peerdns=0
802-3-ethernet.cloned-mac-address: '99:88:77:66:55' is not a valid MAC
address
In this example, the generator reports an error because there is a missing field for the MTU
after enp1s0 . Once the error is corrected, the parsing succeeds and the tool prints out the
connection profile generated:
Note how the rd.peerdns=0 argument translates into the ignore-auto-dns=true property, which
makes NetworkManager ignore DNS servers received via DHCP. An explanation of NetworkManager
properties can be found on the nm-settings man page.
The NetworkManager dracut module is enabled by default in Fedora and will also soon be
enabled on RHEL. It brings better integration between networking in the initrd and
NetworkManager running in the real root filesystem.
While the current implementation is working well, there are some ideas for possible
improvements. One is to abandon the configure-and-quit=initrd mode and run
NetworkManager as a daemon started by a systemd service. In this way, NetworkManager will be
run in the same way as when it's run in the real root, reducing the code to be maintained and
tested.
To completely drop the configure-and-quit=initrd mode, NetworkManager should
also be able to register on D-Bus in the initrd. Currently, dracut doesn't have any module
providing a D-Bus daemon because the image should be minimal. However, there are already
proposals to include it as it is needed to implement some new features.
With D-Bus running in the initrd, NetworkManager's powerful API will be available to other
tools to query and change the network state, unlocking a wide range of applications. One of
those is to run nm-cloud-setup in the initrd. The service, shipped in the
NetworkManager-cloud-setup Fedora package, fetches metadata from cloud providers'
infrastructure (EC2, Azure, GCP) to automatically configure the network.
How to use the Linux mtr command - The mtr (My Traceroute) command is a major
improvement over the old traceroute and is one of my first go-to tools when
troubleshooting network problems.
Linux for beginners: 10 commands to get you started at the terminal - Everyone who works on the
Linux CLI needs to know some basic commands for moving around the directory structure and
exploring files and directories. This article covers those commands in a simple way that
places them into a usable context for those of us new to the command line.
Getting started with systemctl - Do you need to enable, disable, start, and stop systemd services? Learn the
basics of systemctl, a powerful tool for managing systemd services and more.
A beginner's guide to gawk - gawk is a command line tool that can be used for
simple text processing in Bash and other scripts. It is also a powerful language in its own
right.
In the Bash shell, file descriptors (FDs) are important in managing the input and output of
commands. Many people have issues understanding file descriptors correctly. Each process has
three default file descriptors, namely:
Code -- Meaning -- Location -- Description
0 -- Standard input -- /dev/stdin -- Keyboard, file, or some stream
1 -- Standard output -- /dev/stdout -- Monitor, terminal, display
2 -- Standard error -- /dev/stderr -- Error output; non-zero exit statuses usually come with output on FD 2, displayed on the terminal
Now that you know what the default FDs do, let's see them in action. I start by creating a
directory named foo , which contains file1 .
$> ls foo/ bar/
ls: cannot access 'bar/': No such file or directory
foo/:
file1
The output No such file or directory goes to Standard Error (stderr) and is also
displayed on the screen. I will run the same command, but this time use 2> to
discard stderr:
$> ls foo/ bar/ 2>/dev/null
foo/:
file1
It is possible to send the output of the ls command to Standard Output (stdout) and to a
file simultaneously, and ignore stderr. For example:
$> { ls foo bar | tee -a ls_out_file ;} 2>/dev/null
foo:
file1
Then:
$> cat ls_out_file
foo:
file1
The following command sends stdout to a file and stderr to /dev/null so that
the error won't display on the screen:
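A command fitting that description (reusing the ls_out_file name from above) would be:
$> ls foo/ bar/ > ls_out_file 2>/dev/null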
... DTrace gives the operational insights that have long been missing in the data center,
such as memory consumption, CPU time or what specific function calls are being made.
Designed for use on production systems to troubleshoot performance bottlenecks
Provides a single view of the software stack - from kernel to application - leading to
rapid identification of performance bottlenecks
Dynamically instruments kernel and applications with any number of probe points,
improving the ability to service software
Enables maximum resource utilization and application performance, as well as precise
quantification of resource requirements
Fast and easy to use, even on complex systems with multiple layers of software
Developers can learn about and experiment with DTrace on Oracle Linux by installing the
appropriate RPMs:
For Unbreakable Enterprise Kernel Release 5 (UEK5) on Oracle Linux 7: dtrace-utils and dtrace-utils-devel.
For Unbreakable Enterprise Kernel Release 6 (UEK6) on Oracle Linux 7 and Oracle Linux 8: dtrace and dtrace-devel.
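For example, on Oracle Linux 7 with UEK5 that would presumably be:
# yum install dtrace-utils dtrace-utils-devel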
Stability
It's well known that Red Hat Enterprise Linux is created from the most stable and tested
Fedora innovations, but since Oracle Linux was grown from the RHEL framework yet includes
additional, built-in integrations and optimizations specifically tailored for Oracle
products, our comparison showed that Oracle Linux is actually more stable for enterprises
running Oracle systems , including Oracle databases.
Flexibility
As an industry leader, RHEL provides a wide range of integrated applications and tools that
help tailor the Red Hat Enterprise Linux system to highly specific business needs.
However, once again Oracle Linux was found to excel over RHEL because OL offers the Red Hat
Compatible Kernel (RHCK) option, which enables any RHEL-certified app to run on Oracle Linux.
In addition, OL offers its own network of ISVs / third-party solutions, which can help
personalize your Linux setup even more while integrating seamlessly with your on-premises or
cloud-based Oracle systems.
If you are on CentOS-7 then you will probably be okay until RedHat pulls the plug on
2024-06-30, so don't do anything rash. If you are on CentOS-8 then your days are numbered (to
~ 365) because this OS will shift from major-minor point updates to a streaming model at the
end of 2021. Let's look at two early founders: SUSE started in Germany in 1991 whilst RedHat
started in America a year later. SUSE sells support for SLE (Suse Linux Enterprise) which means
you need a license to install-run-update-upgrade it. Likewise RedHat sells support for RHEL
(Red Hat Enterprise Linux). SUSE also offers "openSUSE Leap" (released once a year as a
major-minor point release of SLE) and "openSUSE Tumbleweed" (which is a streaming thingy). A
couple of days ago I installed "OpenSUSE Leap" onto an old HP-Compaq 6000 desktop just to try
it out (the installer actually had a few features I liked better than the CentOS-7 installer).
When I get back to the office in two weeks, I'm going to try installing "OpenSUSE Leap" onto an
HP-DL385p_gen8. I'll work with this for a few months and, if I am comfortable, I will migrate my
employer's solution over to "OpenSUSE Leap".
Parting thoughts:
openSUSE is run out of Germany. IMHO switching over to a European distro is similar to
those database people who preferred MariaDB to MySQL when Oracle was still hoping that
MySQL would die from neglect.
Someone cracked off to me the other day that now that IBM is pulling strings at "Red
Hat", that the company should be renamed "Blue Hat"
I downloaded and tried it last week and was actually pretty impressed. I have only ever
tested SUSE in the past. Honestly, I'll stick with Red Hat/CentOS whatever, but I was still
impressed. I'd recommend people take a look.
I have been playing with OpenSUSE a bit, too. Very solid this time around. In the past I
never had any luck with it. But Leap 15.2 is doing fine for me. Just testing it virtually. TW
also is pretty sweet and if I were to use a rolling release, it would be among the top
contenders.
One thing I don't like with OpenSUSE is that you can't really, or are not supposed to I
guess, disable the root account. You can't do it at install; if you leave the root account
blank, SUSE will just assign the password of the user you created to it.
Of course afterwards you can disable it with the proper commands but it becomes a pain with
YAST, as it seems YAST insists on being opened by root.
One thing I don't like with OpenSUSE is that you can't really, or are not supposed to I
guess, disable the root account. You can't do it at install; if you leave the root account
blank, SUSE will just assign the password of the user you created to it.
I'm running Leap 15.2 on the laptops my kids run for school. During installation, I simply
deselected the option for the account used to be an administrator; this required me to set a
different password for administrative purposes.
I think you might.
My point is/was that if I select to choose my regular user to be admin, I don't expect for
the system to create and activate a root account anyways and then just assign it my
password.
I expect the root account to be disabled.
I was surprised, too. I was a bit "shocked" when I realized, after the install, that I could
log in as root with my user password.
At the very least, IMHO, it should then still have you set the root password, even if you
choose to make your user admin.
For one, it lets you know that OpenSUSE is not disabling root, and two, it gives you a chance to
give it a different password.
But other than that subjective issue I found OpenSUSE Leap a very solid distro.
The big academic labs (Fermilab, CERN and DESY, to name only three of many) used to run
something called Scientific Linux, which was also maintained by Red Hat. See: https://scientificlinux.org/ and https://en.wikipedia.org/wiki/Scientific_Linux
Shortly after Red Hat acquired CentOS in 2014, Red Hat convinced the big academic labs to begin
migrating over to CentOS (no one at that time thought that Red Hat would become Blue Hat).
Scientific Linux is not and was not maintained by Red Hat. Like CentOS, when it was truly
a community distribution, Scientific Linux was an independent rebuild of the RHEL source code
published by Red Hat. It is maintained primarily by people at Fermilab. (It's slightly
different from CentOS in that CentOS aimed for binary compatibility with RHEL, while that is
not a goal of Scientific Linux. In practice, SL often achieves binary compatibility, but if
you have issues with that, it's more up to you to fix them than the SL maintainers.)
I fear you are correct. I just stumbled onto this article: https://www.linux.com/training-tutorials/scientific-linux-great-distro-wrong-name/
Even the wikipedia article states "This product is derived from the free and open-source
software made available by Red Hat, but is not produced, maintained or supported by them."
But it does seem that Scientific Linux was created as a replacement for Fermilab Linux.
I've also seen references to CC7 to mean "Cern Centos 7". CERN is keeping their Linux page
up to date because what I am seeing here ( https://linux.web.cern.ch/ ) today is not what I saw
2-weeks ago.
RedHat didn't convince them to stop using Scientific Linux; Fermilab no longer needed to
have their own rebuild of RHEL sources. They switched to CentOS and modified CentOS if they
needed to (though I don't really think they needed to).
SL has always been an independent rebuild. It has never been maintained, sponsored, or
owned by Red Hat. They decided on their own to not build 8 and instead collaborate on
CentOS. They even gained representation on the CentOS board (one from Fermi, one from
CERN).
I'm not affiliated with any of those organizations, but my guess is they will switch to
some combination of CentOS Stream and RHEL (under the upcoming no/low cost program).
Is anybody considering switching to RHEL's free non-production developer
subscription? As I understand it, it is free and receives updates.
The only downside as I understand it is that you have to renew your license every year (and
that you can't use it in commercial production).
In Red Hat-based distros, the yum tool has a distro-sync command which will synchronize
packages to the current repositories. This command is useful for returning to a base state if
base packages have been modified from an outside source. The docs for the command say:
distribution-synchronization or distro-sync Synchronizes the installed package set with
the latest packages available, this is done by either obsoleting, upgrading or downgrading as
appropriate. This will "normally" do the same thing as the upgrade command however if you
have the package FOO installed at version 4, and the latest available is only version 3, then
this command will downgrade FOO to version 3.
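Typical usage (optionally limited to specific packages) looks like:
# yum distro-sync
# yum distro-sync httpd*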
In view of such effective and free promotion of Oracle Linux by IBM/Red Hat brass as the top replacement for
CentOS, the script can probably be slightly enhanced.
The script works well for simple systems, but still has some sharp edges. Checks for common bottlenecks should be added. For
example, the amount of free space in /boot should be checked if /boot is a separate filesystem; that was not done. Also, in case
the script is invoked a second time after a failure of the step "Installing base packages for Oracle
Linux..." it can remove hundreds of system RPMs (including sshd, cron, and several other vital
packages ;-).
And failures on this step are probably the most common type of failure in a conversion.
Inexperienced sysadmins, or even experienced sysadmins in a hurry, often make the blunder of running
the script a second time.
It probably happens due to the presence of the line 'yum remove -y "${new_releases[@]}"' in the
function remove_repos: in their excessive zeal to restore the system after an error, the
programmers did not understand that in certain situations the packages they want to delete via YUM have dependencies, and a lot
of them (line 65 in the current version of the script). Yum blindly deletes over 300 packages, including such vital ones as sshd, cron, etc. Because of this, execution of the script probably
should be blocked if Oracle repositories are already present. This check is absent.
After this "mass extinction of RPM packages" event, you need to be pretty well versed in yum to recover. The names of
the deleted packages are in the yum log, so you can reinstall them, and sometimes that helps. In other
cases the system remains unbootable, and restoring from a backup is the only option.
Due to the sudden surge in popularity of Oracle Linux caused by the Red Hat CentOS 8 fiasco, the
script definitely can benefit from better diagnostics. The current diagnostics are very
rudimentary. It might also make sense to make the steps modular in the classic /etc/init.d fashion
and to make the initial steps skippable so that the script can be resumed after an error. Most of
the steps have few dependencies, which can be satisfied by saving variables during the first run
and sourcing them if the run does not start from step 1.
Also, it makes sense to check the amount of free space in the /boot filesystem if /boot is a
separate filesystem. The script requires approximately 100MB of free space there. Failure to write
a new kernel due to lack of free space leads to a "half-baked" installation, which is difficult to
recover from without senior sysadmin skills.
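A pre-flight check along these lines would cover it (a sketch only; the 100MB threshold comes from
the estimate above):
# abort early if /boot is a separate filesystem with less than ~100MB free
if mountpoint -q /boot; then
    avail_kb=$(df -Pk /boot | awk 'NR==2 {print $4}')
    if [ "$avail_kb" -lt 102400 ]; then
        echo "Less than 100MB free in /boot; free some space before converting." >&2
        exit 1
    fi
fi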
Oracle Linux is free to download, distribute and use (even in production) and has been
since its release over 14 years ago
Installation media, updates and source code are all publicly available on the Oracle
Linux yum server with no login or authentication requirements
Since its first release in 2006, Oracle Linux has been 100% application binary
compatible with the equivalent RHEL version. In that time, we have never had a
compatibility bug logged.
The script can switch CentOS Linux 6, 7 or 8 to the equivalent version of Oracle Linux.
Let's take a look at just how simple the process is.
Download the centos2ol.sh
script from GitHub
The simplest way to get the script is to use curl :
$ curl -O https://raw.githubusercontent.com/oracle/centos2ol/main/centos2ol.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 10747  100 10747    0     0  31241      0 --:--:-- --:--:-- --:--:-- 31241
If you have git installed, you could clone the git repository from GitHub
instead.
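The repository path can be inferred from the raw URL above, so presumably:
$ git clone https://github.com/oracle/centos2ol.git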
Run the centos2ol.sh script to switch to Oracle Linux
To switch to Oracle Linux, just run the script as root using sudo:
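The command itself is not reproduced above; it is presumably just:
$ sudo bash centos2ol.sh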
As part of the process, the default kernel is switched to the latest release of Oracle's
Unbreakable Enterprise Kernel (UEK) to enable extensive performance and scalability
improvements to the process scheduler, memory management, file systems, and the networking
stack. We also replace the existing CentOS kernel with the equivalent Red Hat Compatible
Kernel (RHCK) which may be required by any specific hardware or application that has
imposed strict kernel version restrictions.
Switching the default kernel (optional)
Once the switch is complete, but before rebooting, the default kernel can be changed
back to the RHCK. First, use grubby to list all installed kernels:
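The exact invocation from the original post is not reproduced here, but something like this lists
the index and kernel path of every boot entry:
[demo@c8switch ~]$ sudo grubby --info=ALL | grep -E '^(index|kernel)'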
In the output above, the first entry (index 0) is UEK R6, based on the mainline kernel
version 5.4. The second kernel is the updated RHCK (Red Hat Compatible Kernel) installed by
the switch process, the third one is the kernel that was installed by CentOS, and the final
entry is the rescue kernel.
Next, use grubby to verify that UEK is currently the default boot option:
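The check is a one-liner; presumably one of these:
[demo@c8switch ~]$ sudo grubby --default-kernel
[demo@c8switch ~]$ sudo grubby --default-index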
To replace the default kernel, you need to specify either the path to its
vmlinuz file or its index. Use grubby to get that information for the
replacement:
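For the RHCK entry shown below, that would be something like:
[demo@c8switch ~]$ sudo grubby --info /boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64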
Finally, use grubby to change the default kernel, either by providing the
vmlinuz path:
[demo@c8switch ~]$ sudo grubby --set-default /boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64
The default is /boot/loader/entries/0dbb9b2f3c2744779c72a28071755366-4.18.0-240.1.1.el8_3.x86_64.conf with index 1 and kernel /boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64
Or its index:
[demo@c8switch ~]$ sudo grubby --set-default-index 1
The default is /boot/loader/entries/0dbb9b2f3c2744779c72a28071755366-4.18.0-240.1.1.el8_3.x86_64.conf with index 1 and kernel /boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64
Changing the default kernel can be done at any time, so we encourage you to take UEK for
a spin before switching back.
The original link to Vallard Benincosa's article, published on 20 Jul 2008 on IBM
DeveloperWorks, disappeared due to yet another reorganization of the IBM website that killed old
content. Money-greedy incompetents is what the current upper IBM managers really are...
How to be a more productive Linux systems administrator
Learn these 10 tricks and you'll be the most powerful Linux® systems administrator in the
universe...well, maybe not the universe, but you will need these tips to play in the big
leagues. Learn about SSH tunnels, VNC, password recovery, console spying, and more. Examples
accompany each trick, so you can duplicate them on your own systems.
The best systems administrators are set apart by their efficiency. And if an efficient
systems administrator can do a task in 10 minutes that would take another mortal two hours to
complete, then the efficient systems administrator should be rewarded (paid more) because the
company is saving time, and time is money, right?
The trick is to prove your efficiency to management. While I won't attempt to cover
that trick in this article, I will give you 10 essential gems from the lazy admin's bag
of tricks. These tips will save you time -- and even if you don't get paid more money to be
more efficient, you'll at least have more time to play Halo.
The newbie states that when he pushes the Eject button on the DVD drive of a server running
a certain Redmond-based operating system, it will eject immediately. He then complains that, in
most enterprise Linux servers, if a process is running in that directory, then the ejection
won't happen. For too long as a Linux administrator, I would reboot the machine and get my disk
on the bounce if I couldn't figure out what was running and why it wouldn't release the DVD
drive. But this is ineffective.
Here's how you find the process that holds your DVD drive and eject it to your heart's
content: First, simulate it. Stick a disk in your DVD drive, open up a terminal, and mount the
DVD drive:
# mount /media/cdrom
# cd /media/cdrom
# while [ 1 ]; do echo "All your drives are belong to us!"; sleep 30; done
Now open up a second terminal and try to eject the DVD drive:
# eject
You'll get a message like:
umount: /media/cdrom: device is busy
Before you free it, let's find out who is using it.
# fuser /media/cdrom
You see that the process was running and, indeed, it is our fault that we cannot eject the disk.
Now, if you are root, you can exercise your godlike powers and kill processes:
# fuser -k /media/cdrom
Boom! Just like that, freedom. Now solemnly unmount the drive:
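The command itself fell out of this copy; it is simply:
# umount /media/cdrom
# eject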
Behold! Your terminal looks like garbage. Everything you type looks like you're looking into
the Matrix. What do you do?
You type reset . But wait you say, typing reset is too close to
typing reboot or shutdown . Your palms start to sweat -- especially
if you are doing this on a production machine.
Rest assured: You can do it with the confidence that no machine will be rebooted. Go ahead,
do it:
# reset
Now your screen is back to normal. This is much better than closing the window and then
logging in again, especially if you just went through five machines to SSH to this
machine.
David, the high-maintenance user from product engineering, calls: "I need you to help me
understand why I can't compile supercode.c on these new machines you deployed."
"Fine," you say. "What machine are you on?"
David responds: " Posh." (Yes, this fictional company has named its five production servers
in honor of the Spice Girls.) OK, you say. You exercise your godlike root powers and on another
machine become David:
# su - david
Then you go over to posh:
# ssh posh
Once you are there, you run:
# screen -S foo
Then you holler at David:
"Hey David, run the following command on your terminal: # screen -x foo ."
This will cause your and David's sessions to be joined together in the holy Linux shell. You
can type or he can type, but you'll both see what the other is doing. This saves you from
walking to the other floor and lets you both have equal control. The benefit is that David can
watch your troubleshooting skills and see exactly how you solve problems.
At last you both see what the problem is: David's compile script hard-coded an old directory
that does not exist on this new server. You mount it, recompile, solve the problem, and David
goes back to work. You then go back to whatever lazy activity you were doing before.
The one caveat to this trick is that you both need to be logged in as the same user. Other
cool things you can do with the screen command include having multiple windows and
split screens. Read the man pages for more on that.
But I'll give you one last tip while you're in your screen session. To detach
from it and leave it open, type: Ctrl-A D . (I mean, hold down the Ctrl key
and strike the A key. Then push the D key.)
You can then reattach by running the screen -x foo command
again.
You forgot your root password. Nice work. Now you'll just have to reinstall the entire
machine. Sadly enough, I've seen more than a few people do this. But it's surprisingly easy to
get on the machine and change the password. This doesn't work in all cases (like if you made a
GRUB password and forgot that too), but here's how you do it in a normal case, using CentOS
Linux as the example.
First reboot the system. When it reboots you'll come to the GRUB screen (Figure 1 in the original
article). Press an arrow key so that you stay on this screen instead of proceeding all the way to
a normal boot.
Use the arrow keys again to highlight the line that begins with kernel , and press E to edit the
kernel parameters. When you get to the edit screen (Figure 3 in the original article), simply
append the number 1 to the kernel arguments:
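The figures are not reproduced here, but the edited line ends up looking roughly like this (the
kernel version and root device are illustrative, not taken from the original):
kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet 1
Press Enter to accept the edit and b to boot. The system then comes up in single-user mode with a
root shell, where passwd lets you set a new root password before rebooting normally.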
Many times I'll be at a site where I need remote support from someone who is blocked on the
outside by a company firewall. Few people realize that if you can get out to the world through
a firewall, then it is relatively easy to open a hole so that the world can come into you.
In its crudest form, this is called "poking a hole in the firewall." I'll call it an SSH
back door . To use it, you'll need a machine on the Internet that you can use as an
intermediary.
In our example, we'll call our machine blackbox.example.com. The machine behind the company
firewall is called ginger. Finally, the machine that technical support is on will be called
tech. Figure 4 explains how this is set up.
Check that what you're doing is allowed, but make sure you ask the right people. Most
people will cringe that you're opening the firewall, but what they don't understand is that
it is completely encrypted. Furthermore, someone would need to hack your outside machine
before getting into your company. Instead, you may belong to the school of
"ask-for-forgiveness-instead-of-permission." Either way, use your judgment and don't blame me
if this doesn't go your way.
SSH from ginger to blackbox.example.com with the -R flag. I'll assume that
you're the root user on ginger and that tech will need the root user ID to help you with the
system. With the -R flag, you'll forward instructions of port 2222 on blackbox
to port 22 on ginger. This is how you set up an SSH tunnel. Note that only SSH traffic can
come into ginger: You're not putting ginger out on the Internet naked.
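The commands themselves are missing from this copy; assuming the thedude account on blackbox that
appears later in the article, the tunnel is set up roughly like this:
root@ginger:~# ssh -R 2222:localhost:22 thedude@blackbox.example.com
Tech then SSHes to blackbox and, from there, reaches ginger through the forwarded port:
thedude@blackbox:~$ ssh -p 2222 root@localhost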
VNC or virtual network computing has been around a long time. I typically find myself
needing to use it when the remote server has some type of graphical program that is only
available on that server.
For example, suppose in Trick 5 , ginger is a storage
server. Many storage devices come with a GUI program to manage the storage controllers. Often
these GUI management tools need a direct connection to the storage through a network that is at
times kept in a private subnet. Therefore, the only way to access this GUI is to do it from
ginger.
You can try SSH'ing to ginger with the -X option and launch it that way, but
many times the bandwidth required is too much and you'll get frustrated waiting. VNC is a much
more network-friendly tool and is readily available for nearly all operating systems.
Let's assume that the setup is the same as in Trick 5, but you want tech to be able to get
VNC access instead of SSH. In this case, you'll do something similar but forward VNC ports
instead. Here's what you do:
Start a VNC server session on ginger. This is done by running something like:
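The command itself is missing from this copy, but it can be reconstructed from the option
description that follows; roughly:
root@ginger:~# vncserver -geometry 1024x768 -depth 24 :99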
The options tell the VNC server to start up with a resolution of 1024x768 and a pixel
depth of 24 bits per pixel. If you are using a really slow connection, a depth of 8 may be a
better option. Using :99 specifies the port the VNC server will be accessible
from. The VNC protocol starts at 5900, so specifying :99 means the server is
accessible on port 5999.
When you start the session, you'll be asked to specify a password. The user ID will be
the same user that you launched the VNC server from. (In our case, this is root.)
SSH from ginger to blackbox.example.com forwarding the port 5999 on blackbox to ginger.
This is done from ginger by running the command:
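Presumably something like:
root@ginger:~# ssh -R 5999:localhost:5999 thedude@blackbox.example.com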
Once you run this command, you'll need to keep this SSH session open in order to keep
the port forwarded to ginger. At this point if you were on blackbox, you could now access
the VNC session on ginger by just running:
thedude@blackbox:~$ vncviewer localhost:99
That would forward the port through SSH to ginger. But we're interested in letting tech
get VNC access to ginger. To accomplish this, you'll need another tunnel.
From tech, you open a tunnel via SSH to forward your port 5999 to port 5999 on blackbox.
This would be done by running:
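Presumably:
root@tech:~# ssh -L 5999:localhost:5999 thedude@blackbox.example.com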
This time the SSH flag we used was -L , which instead of pushing 5999 to
blackbox, pulled from it. Once you are in on blackbox, you'll need to leave this session
open. Now you're ready to VNC from tech!
From tech, VNC to ginger by running the command:
root@tech:~# vncviewer localhost:99 .
Tech will now have a VNC session directly to ginger.
While the effort might seem like a bit much to set up, it beats flying across the country to
fix the storage arrays. Also, if you practice this a few times, it becomes quite easy.
Let me add a trick to this trick: If tech was running the Windows® operating system and
didn't have a command-line SSH client, then tech can run Putty. Putty can be set to forward SSH
ports by looking in the options in the sidebar. If the port were 5902 instead of our example of
5999, then you would enter something like what is shown in Figure 5 of the original article.
Imagine this: Company A has a storage server named ginger and it is being NFS-mounted by a
client node named beckham. Company A has decided they really want to get more bandwidth out of
ginger because they have lots of nodes they want to have NFS mount ginger's shared
filesystem.
The most common and cheapest way to do this is to bond two Gigabit ethernet NICs together.
This is cheapest because usually you have an extra on-board NIC and an extra port on your
switch somewhere.
So they do this. But now the question is: How much bandwidth do they really have?
Gigabit Ethernet has a theoretical limit of 128MBps. Where does that number come from?
Well, 1Gb is 1024Mb, and 1024 megabits divided by 8 bits per byte is 128 megabytes, so the wire
itself tops out at 128MBps. To see how close you actually get, you can measure the link with a
tool like iperf.
You'll need to install it on a shared filesystem that both ginger and beckham can see, or
compile and install it on both nodes. I'll compile it in the home directory of the bob user,
which is visible on both nodes:
tar zxvf iperf*gz
cd iperf-2.0.2
./configure -prefix=/home/bob/perf
make
make install
On ginger, run:
# /home/bob/perf/bin/iperf -s -f M
This machine will act as the server and print out performance speeds in MBps.
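On beckham, the matching client command (reconstructed; -c points iperf at the server) would be
something like:
# /home/bob/perf/bin/iperf -c ginger -f M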
You'll see output in both screens telling you what the speed is. On a normal server with a
Gigabit Ethernet adapter, you will probably see about 112MBps. This is normal as bandwidth is
lost in the TCP stack and physical cables. By connecting two servers back-to-back, each with
two bonded Ethernet cards, I got about 220MBps.
In reality, what you see with NFS on bonded networks is around 150-160MBps. Still, this
gives you a good indication that your bandwidth is going to be about what you'd expect. If you
see something much less, then you should check for a problem.
I recently ran into a case in which the bonding driver was used to bond two NICs that used
different drivers. The performance was extremely poor, leading to about 20MBps in bandwidth,
less than they would have gotten had they not bonded the Ethernet cards
together!
A Linux systems administrator becomes more efficient by using command-line scripting with
authority. This includes crafting loops and knowing how to parse data using utilities like
awk , grep , and sed . There are many cases where doing
so takes fewer keystrokes and lessens the likelihood of user errors.
For example, suppose you need to generate a new /etc/hosts file for a Linux cluster that you
are about to install. The long way would be to add IP addresses in vi or your favorite text
editor. However, it can be done by taking the already existing /etc/hosts file and appending
the following to it by running this on the command line:
# P=1; for i in $(seq -w 200); do echo "192.168.99.$P n$i"; P=$(expr $P + 1);
done >>/etc/hosts
Two hundred host names, n001 through n200, will then be created with IP addresses
192.168.99.1 through 192.168.99.200. Populating a file like this by hand runs the risk of
inadvertently creating duplicate IP addresses or host names, so this is a good example of using
the built-in command line to eliminate user errors. Please note that this is done in the bash
shell, the default in most Linux distributions.
As another example, let's suppose you want to check that the memory size is the same in each
of the compute nodes in the Linux cluster. In most cases of this sort, having a distributed or
parallel shell would be the best practice, but for the sake of illustration, here's a way to do
this using SSH.
Assume the SSH is set up to authenticate without a password. Then run:
# for num in $(seq -w 200); do ssh n$num free -tm | grep Mem | awk '{print $2}';
done | sort | uniq
A command line like this looks pretty terse. (It can be worse if you put regular expressions
in it.) Let's pick it apart and uncover the mystery.
First you're doing a loop through 001-200. This padding with 0s in the front is done with
the -w option to the seq command. Then you substitute the
num variable to create the host you're going to SSH to. Once you have the target
host, give the command to it. In this case, it's:
free -m | grep Mem | awk '{print $2}'
That command says to:
Use the free command to get the memory size in megabytes.
Take the output of that command and use grep to get the line that has the
string Mem in it.
Take that line and use awk to print the second field, which is the total
memory in the node.
This operation is performed on every node.
Once you have performed the command on every node, the entire output of all 200 nodes is
piped (|'d) to the sort command so that all the memory values are sorted.
Finally, you eliminate duplicates with the uniq command. This command will
result in one of the following cases:
If all the nodes, n001-n200, have the same memory size, then only one number will be
displayed. This is the size of memory as seen by each operating system.
If node memory size is different, you will see several memory size values.
Finally, if the SSH failed on a certain node, then you may see some error messages.
This command isn't perfect. If you find that a value of memory is different than what you
expect, you won't know on which node it was or how many nodes there were. Another command may
need to be issued for that.
What this trick does give you, though, is a fast way to check for something and quickly
learn if something is wrong. That is its real value: speed for a quick-and-dirty check.
Some software prints error messages to the console that may not necessarily show up on your
SSH session. Using the vcs devices can let you examine these. From within an SSH session, run
the following command on a remote server: # cat /dev/vcs1 . This will show you
what is on the first console. You can also look at the other virtual terminals using 2, 3, etc.
If a user is typing on the remote system, you'll be able to see what he typed.
In most data farms, using a remote terminal server, KVM, or even Serial Over LAN is the best
way to view this information; it also provides the additional benefit of out-of-band viewing
capabilities. Using the vcs device provides a fast in-band method that may be able to save you
some time from going to the machine room and looking at the console.
In Trick 8 , you saw an example of using the command
line to get information about the total memory in the system. In this trick, I'll offer up a
few other methods to collect important information from the system you may need to verify,
troubleshoot, or give to remote support.
First, let's gather information about the processor. This is easily done as follows:
# cat /proc/cpuinfo .
This command gives you information on the processor speed, quantity, and model. Using
grep in many cases can give you the desired value.
A check that I do quite often is to ascertain the quantity of processors on the system. So,
if I have purchased a dual processor quad-core server, I can run:
# cat /proc/cpuinfo | grep processor | wc -l .
I would then expect to see 8 as the value. If I don't, I call up the vendor and tell them to
send me another processor.
Another piece of information I may require is disk information. This can be gotten with the
df command. I usually add the -h flag so that I can see the output in
gigabytes or megabytes. # df -h also shows how the disk was partitioned.
And to end the list, here's a way to look at the firmware of your system -- a method to get
the BIOS level and the firmware on the NIC.
To check the BIOS version, you can run the dmidecode command. Unfortunately,
you can't easily grep for the information, so piping it through less is the most convenient
way to browse it. On my Lenovo T61 laptop, the output looks like this:
#dmidecode | less
...
BIOS Information
Vendor: LENOVO
Version: 7LET52WW (1.22 )
Release Date: 08/27/2007
...
This is much more efficient than rebooting your machine and looking at the POST output.
To examine the driver and firmware versions of your Ethernet adapter, run
ethtool :
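The example was dropped from this copy; the usual form (the interface name is an assumption) is:
# ethtool -i eth0
The -i flag prints the driver name, driver version, and firmware version of the adapter.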
There are thousands of tricks you can learn from someone who's an expert at the command
line. The best ways to learn are to:
Work with others. Share screen sessions and watch how others work -- you'll see
new approaches to doing things. You may need to swallow your pride and let other people
drive, but often you can learn a lot.
Read the man pages. Seriously; reading man pages, even on commands you know like
the back of your hand, can provide amazing insights. For example, did you know you can do
network programming with awk ?
Solve problems. As the system administrator, you are always solving problems
whether they are created by you or by others. This is called experience, and experience makes
you better and more efficient.
I hope at least one of these tricks helped you learn something you didn't know. Essential
tricks like these make you more efficient and add to your experience, but most importantly,
tricks give you more free time to do more interesting things, like playing video games. And the
best administrators are lazy because they don't like to work. They find the fastest way to do a
task and finish it quickly so they can continue in their lazy pursuits.
Vallard Benincosa is a lazy Linux Certified IT professional working for
the IBM Linux Clusters team. He lives in Portland, OR, with his wife and two kids.
The last of the RHEL downstreams up for discussion today is Hewlett-Packard Enterprise's
in-house distro, ClearOS .
Hewlett-Packard makes ClearOS available as a pre-installed option on its ProLiant server line,
and the company offers a free Community version to all comers.
ClearOS is an open source software platform that leverages the open source model to
deliver a simplified, low cost hybrid IT experience for SMBs. The value of ClearOS is the
integration of free open source technologies making it easier to use. By not charging for
open source, ClearOS focuses on the value SMBs gain from the integration so SMBs only pay for
the products and services they need and value.
ClearOS is mostly notable here for its association with industry giant HPE and its
availability as an OEM distro on ProLiant servers. It seems to be a bit behind the times -- the
most recent version is ClearOS 7.x, which is in turn based on RHEL 7. In addition to being a
bit outdated compared with other options, it also appears to be a rolling release --
more comparable to CentOS Stream than to the CentOS Linux that came before it.
ClearOS is probably most interesting to small business types who might consider buying
ProLiant servers with RHEL-compatible OEM Linux pre-installed later.
I've seen a lot of folks mistakenly recommending the deceased Scientific Linux distro as a
CentOS replacement -- that won't work, because Scientific Linux itself was deprecated in favor
of CentOS. However, Springdale
Linux is very similar -- like Scientific Linux, it's a RHEL rebuild distro made by and for
the academic scientific community. Unlike Scientific Linux, it's still actively maintained!
Springdale Linux is maintained and made available by Princeton and Rutgers universities, who
use it for their HPC projects. It has been around for quite a long time. One Springdale Linux
user from Carnegie Mellon describes their own experience with Springdale (formerly PUIAS --
Princeton University Institute for Advanced Study) as a 10-year ride.
Theresa Arzadon-Labajo, one of Springdale Linux's maintainers, gave a pretty good
seat-of-the-pants overview in a recent mailing list discussion :
The School of Mathematics at the Institute for Advanced Study has been using Springdale
(formerly PUIAS, then PU_IAS) since its inception. All of our *nix servers and workstations
(yes, workstations) are running Springdale. On the server side, everything "just works", as
is expected from a RHEL clone. On the workstation side, most of the issues we run into have
to do with NVIDIA drivers, and glibc compatibility issues (e.g Chrome, Dropbox, Skype, etc),
but most issues have been resolved or have a workaround in place.
... Springdale is a community project, and [it] mostly comes down to the hours (mostly
Josko) that we can volunteer to the project. The way people utilize Springdale varies. Some
are like us and use the whole thing. Others use a different OS and use Springdale just for
its computational repositories.
Springdale Linux should be a natural fit for universities and scientists looking for a
CentOS replacement. It will likely work for most anyone who needs it -- but its
relatively small community and firm roots in academia will probably make it the most
comfortable for those with similar needs and environments.
64 • "best idea" ... (by Otis on 2020-12-25 19:38:01 GMT from United
States) @62
dang it BSD takes care of all that anxiety about systemd and the other bloaty-with-time worries
as far as I can tell. GhostBSD and a few others are spearheading a charge into the face of The
Enemy, making BSD palatable for those of us steeped in Linux as the only alternative to we know
who.
• Centos (by David on 2020-12-22
04:29:46 GMT from United States)
I was using CentOS 8.2 on an older desktop home computer. When CentOS dropped long term
support for version 8, I was a little peeved, but not a whole lot, since it is free anyway. Out
of curiosity I installed Scientific Linux 7.9 on the same computer, and it works better than
CentOS 8. Then I tried installing SL 7.9 on my old laptop -- it even worked on that!
Previously, when I had tried to install CentOS 8 on the laptop, an old Dell Inspiron 1501,
the graphics were garbage -- the screen displayed a kind of color mosaic -- and the
keyboard/everything else was locked up. I also tried CentOS 7.9 on it, and installation from
the minimal DVD produced a bunch of errors and then froze part way through.
I will stick with Scientific Linux 7 for now. In 2024 I will worry about which distro to
migrate to. Note: the Scientific Linux website states that they are going to reconsider (in the
1st quarter of 2021) whether they will produce a clone of RHEL version 8. Previously, they
stated that they would not.
"Personal opinion only. [...] After all the years of using Linux, and experiencing
first-hand the hobby mentality that has taken over [...], I prefer to use a distribution which
has all the earmarks of [...] being developed AND MAINTAINED by a professional
organization."
Yeah, your answer is exactly what I expected it to be.
The thing with Springdale is as follows: it's maintained by the very professional team of
IT specialists at the Institute for Advanced Study (Princeton University) for their own needs.
That's why there's no fancy website, RHEL Wiki, live ISOs and such.
They also maintain several other repositories for add-on packages (computing, unsupported
[with audio/video codecs] ...).
In other words, if you're a professional who needs an RHEL clone, you'll be fine with it;
if you're a hobbyist who needs a how-to for everything and anything, you can still use the
knowledge base of RHEL/CentOS/Oracle ...
If you're a 'small business' that needs professional support, you'd get RHEL -- unlike CentOS,
Springdale is not a commercial distribution selling you support and schooling. Springdale is
made by professionals, for professionals.
In 2010 I had the opportunity to get my hands dirty with Oracle Linux during an installation and
training mission carried out on behalf of ASF (the highways operator of the South of France),
which is now called Vinci Autoroutes. I had just published Linux aux petits oignons at Eyrolles,
and since the CentOS 5.3 distribution on which it was based looked 99% like Oracle Linux 5.3
under the hood, I had been chosen by ASF to train their future Linux administrators.
All these years, I knew that Oracle Linux existed, as did a whole series of other Red Hat clones
like CentOS, Scientific Linux, White Box Enterprise Linux, Princeton University's PUIAS
project, etc. I didn't pay it any more attention, since CentOS perfectly met all my server needs.
Following the disastrous announcement of the CentOS project, I had a discussion with my
compatriot Michael Kofler, a Linux guru who
has published a series of excellent books on our favorite operating system, and who has
migrated from CentOS to Oracle Linux for the Linux administration courses he teaches at the
University of Graz. We were not in our first discussion on this subject, as the CentOS project
was already accumulating a series of rather worrying delays for version 8 updates. In
comparison, Oracle Linux does not suffer from these structural problems, so I kept this option
in a corner of my head.
A problematic reputation
Oracle suffers from a problematic reputation within the free software community, for a
variety of reasons. It is the company that ruined OpenOffice and Java, got its hooks into MySQL,
and let Solaris sink. Oracle CEO Larry Ellison has drawn attention to himself with his unhinged
support for Donald Trump. As for the company's commercial policy, it has been marked by
notorious aggressiveness in the hunt for patents.
On the other hand, we have applications that are both free as in freedom and free of charge, like
VirtualBox, which run perfectly on millions of developer workstations all over the world. And
then there is the very discreet Oracle Linux, which has worked perfectly and without making any
noise since 2006, and which is also a free (libre and gratis) operating system.
Install Oracle Linux
For a first test, I installed Oracle Linux 7.9 and 8.3 in two virtual machines on my
workstation. Since it is a Red Hat Enterprise Linux-compliant clone, the installation procedure
is identical to that of RHEL and CentOS, with a few small details.
Normally, I never pay attention to the banner ads that scroll through graphical
installers. This time, though, the slogan "Free to use, free to download, free to update.
Always." caught my attention.
An indestructible kernel?
Oracle Linux provides its own Linux kernel, newer than the one provided by Red Hat and named the
Unbreakable Enterprise Kernel (UEK). This kernel is installed by default and replaces the
older kernels provided upstream for versions 7 and 8. Here's what it looks like on Oracle Linux
7.9:
$ uname -a
Linux oracle-el7 5.4.17-2036.100.6.1.el7uek.x86_64 #2 SMP Thu Oct 29 17:04:48
PDT 2020 x86_64 x86_64 x86_64 GNU/Linux
Well-organized package repositories
At first glance, the organization of the official and semi-official package repositories seems
much clearer and better organized than under CentOS. For details, I refer you to the respective
explanatory pages for the 7.x and 8.x versions.
Like the organization of the repositories, Oracle Linux's documentation is
worth mentioning here, because it is simply exemplary. The main index refers to the different
versions of Oracle Linux, and from there, you can access a whole series of documents in HTML
and PDF formats that explain in detail the peculiarities of the system and its day-to-day
management. As I go along with this documentation, I discover a multitude of pleasant little
details, such as the fact that Oracle packages display metadata for security updates, which is
not the case for CentOS packages.
Migrating from CentOS to Oracle Linux
The Switch your CentOS systems to Oracle Linux web page identifies a number of reasons why
Oracle Linux is a better choice than CentOS when you want an enterprise-grade, free-as-in-beer
operating system that provides low-risk updates for each version over a decade. This page also
features a script, centos2ol.sh, that transforms an existing CentOS system into Oracle Linux on
the fly with just two commands.
The script grinds away for about twenty minutes, we restart the machine, and we end up with a
clean Oracle Linux system. To do some cleaning, just remove the deactivated repository files
left behind:
# rm -f /etc/yum.repos.d/*.repo.deactivated
Migrating a CentOS 8.x server?
At first, the centos2ol.sh script only handled migration from CentOS 7.9 to Oracle Linux 7.9.
On a whim, I sent an email to the address at the bottom of the page, asking if support for
CentOS 8.x was expected in the near future.
A very nice exchange of emails ensued with a guy from Oracle, who patiently answered all the
questions I asked him. And just twenty-four hours later, he sent me a link to an Oracle Github repository with an
updated version of the script that supports the on-the-fly migration of CentOS 8.x to Oracle
Linux 8.x.
So I tested it on a fresh installation of a CentOS 8 server at Online/Scaleway.
Again, it grinds away for a good twenty minutes, and after the reboot we end up with a
public machine running Oracle Linux 8.
Conclusion
I will probably have a lot more to say about this. For my part, I find this first experience
with Oracle Linux rather convincing, and if I decided to share it here, it is because it will
probably solve a problem common to a lot of admins of production servers who cannot accept
their system becoming a moving target overnight.
Post Scriptum for the risk-averse purists
Finally, for all of you who want to use a clone of Red Hat Enterprise Linux that is both free as
in freedom and free of charge without selling your soul to the devil, know that Springdale Linux
is a solid alternative. It is maintained by Princeton University in the United States according
to the WYGIWYG principle (What You Get Is What You Get): it is provided bare-bones, without any
frills or documentation, but it works just as well.
"... If you want a free-as-in-beer RHEL clone, you have two options: Oracle Linux or Springdale/PUIAS. My company's currently moving its servers to OL, which is "CentOS done right". Here's a blog article about the subject: ..."
"... Each version of OL is supported for a 10-year cycle. Ubuntu has five years of support. And Debian's support cycle (one year after subsequent release) is unusable for production servers. ..."
"... [Red Hat looks like ]... of a cartoon character sawing off the tree branch they are sitting on." ..."
• And what about Oracle Linux? (by Microlinux on 2020-12-21 08:11:33 GMT from France)
If you want a free-as-in-beer RHEL clone, you have two options: Oracle Linux or
Springdale/PUIAS. My company's currently moving its servers to OL, which is "CentOS done
right". Here's a blog article about the subject:
Currently Rocky Linux is not much more than a README file on Github and a handful of Slack
(ew!) discussion channels.
Each version of OL is supported for a 10-year cycle. Ubuntu has five years of support. And
Debian's support cycle (one year after subsequent release) is unusable for production
servers.
9 • @Jesse on CentOS: (by dragonmouth
on 2020-12-21 13:11:04 GMT from United States)
"There is no rush and I recommend waiting a bit for the dust to settle on the situation before
leaping to an alternative. "
For private users there may be plenty of time to find an alternative. However, corporate IT
departments are not like jet skis able to turn on a dime. They are more like supertankers or
aircraft carriers that take miles to make a turn. By the time all the committees meet and come
to some decision, by the time all the upper managers who don't know what the heck they are
talking about expound their opinions and by the time the CentOS replacement is deployed, a year
will be gone. For corporations, maybe it is not a time to PANIC, yet, but it is high time to
start looking for the O/S that will replace CentOS.
Does this mean no more SIGs too? OEL 8 is about to see a giant surge in utilization!
Just a geek Dec 8, 2020 @ 23:45
Time to move to Oracle Linux. One of their partners is always talking about it, and since it is free and tracks RHEL with
100% binary compatibility, it's a good fit for us. Also looked at their support costs, and it's a fraction of RHEL pricing!
Kyle Dec 9, 2020 @ 2:13
It's an IBM money grab. It's a shame; I use CentOS to develop and host web applications on my Linode. Obviously, at a small
scale like that I can't afford Red Hat, but I use it at work. CentOS allowed me to come home, take my skills, dev in my free
time, and apply it to work.
I also use Ubuntu, but it looks like the shift will be greater toward Ubuntu.
Noname Dec 9, 2020 @ 4:20
As others said here, this is a money grab. Methinks IBM was the worst thing that happened to Linux since systemd...
Yui Dec 9, 2020 @ 4:49
Hello CentOS users,
I also work for a non-profit (cancer and other research) and use CentOS for HPC. We chose CentOS over Debian due to the 10-year
support cycle, and CentOS goes well with HPC clusters. We also wanted every single penny to go to research purposes and not waste
our donations and grants on software costs. What are my CentOS alternatives for HPC? Thanks in advance for any help you are able to provide.
Holmes Dec 9, 2020 @ 5:06
Folks who rely on CentOS saw this coming when Red Hat bought them 6 years ago. Last year IBM bought Red Hat. Now, IBM+Red Hat
found a way to kill the stable releases in order to get people signing up for RHEL subscriptions. Doesn't that sound exactly like
the "EEE" (embrace, extend, and exterminate) model?
Petr Dec 9, 2020 @ 5:08
For me it's simple.
I will keep my openSUSE Leap and expand its footprint.
Until another RHEL compatible distro is out. If I need a RHEL compatible distro for testing, until then, I will use Oracle with the
RHEL kernel.
OpenSUSE is the closest to RHEL in terms of stability (if not better) and I am very used to it. Time to get some SLES certifications
as well.
Someone Dec 9, 2020 @ 5:23
While I like Debian, and better still Devuan (no systemd), some RHEL/CentOS features like kickstart and delta RPMs don't seem to
be there (or as good). Debian preseeding is much more convoluted than kickstart, for example.
Vonskippy Dec 10, 2020 @ 1:24
That's ok. For us, we left RHEL (and the CentOS testing cluster) when the satan spawn known as SystemD became the standard. We're
now a happy and successful FreeBSD shop.
" People are complaining because you are suddenly killing CentOS 8 which has been released last year with the promise of binary
compatibility to RHEL 8 and security updates until 2029."
One of the inherent features of the GPL is that it allows clones to exist. Which means that Oracle Linux, or Rocky Linux, or Lenin
Linux will simply take CentOS's place, and Red Hat will be at a disadvantage, now unable to control the clone to the extent they
managed to co-opt and control CentOS. The "embrace and extinguish" charge will now hang on Red Hat and probably will continue to
hang over it for years. That may not be what the Red Hat brass wanted: reputational damage with zero or even negative effect on the
revenue stream. I suppose the majority of the CentOS community will eventually migrate to the emerging RHEL clones. If that was the
Red Hat / IBM goal -- well, they will reach it.
Notable quotes:
"... availability gap ..."
"... Another long-winded post that doesn't address the single, core issue that no one will speak to directly: why can't CentOS Stream and CentOS _both_ exist? Because in absence of any official response from Red Hat, the assumption is obvious: to drive RHEL sales. If that's the reason, then say it. Stop being cowards about it. ..."
"... We might be better off if Red Hat hadn't gotten involved in CentOS in the first place and left it an independent project. THEY choose to pursue this path and THEY chose to renege on assurances made around the non-stream distro. Now they're going to choose to deal with whatever consequences come from the loss of goodwill in the community. ..."
"... If the problem was in money, all RH needed to do was to ask the community. You would have been amazed at the output. ..."
"... You've alienated a few hunderd thousand sysadmins that started upgrading to 8 this year and you've thrown the scientific Linux community under a bus. You do realize Scientific Linux was discontinued because CERN and FermiLab decided to standardize on CentOS 8? This trickled down to a load of labs and research institutions. ..."
"... Nobody forced you to buy out CentOS or offer a gratis distribution. But everybody expected you to stick to the EOL dates you committed to. You boast about being the "Enterprise" Linux distributor. Then, don't act like a freaking start-up that announces stuff today and vanishes a year later. ..."
"... They should have announced this at the START of CentOS 8.0. Instead they started CentOS 8 with the belief it was going to be like CentOS7 have a long supported life cycle. ..."
"... IBM/RH/CentOS keeps replaying the same talking points over and over and ignoring the actual issues people have ..."
"... What a piece of stinking BS. What is this "gap" you're talking about? Nobody in the CentOS community cares about this pre-RHEL gap. You're trying to fix something that isn't broken. And doing that the most horrible and bizzarre way imaginable. ..."
"... As I understand it, Fedora - RHEL - CENTOS just becomes Fedora - Centos Stream - RHEL. Why just call them RH-Alpha, RH-Beta, RH? ..."
Let's go back to 2003 where Red Hat saw the opportunity to make a fundamental change to become an enterprise software company
with an open source development methodology.
To do so Red Hat made a hard decision and in 2003
split Red Hat Linux into Red Hat
Enterprise Linux (RHEL) and Fedora Linux. RHEL was the occasional snapshot of Fedora Linux that was a product -- slowed, stabilized,
and paced for production. Fedora Linux and the Project around it were the open source community for innovating -- speedier, prone
to change, and paced for exploration. This solved the problem of trying to hold to two, incompatible core values (fast/slow) in a
single project. After that, each distribution flourished within its intended audiences.
But that split left two important gaps. On the project/community side, people still wanted an OS that strived to be slower-moving,
stable-enough, and free of cost -- an availability gap . On the product/customer side, there was an openness gap
-- RHEL users (and consequently all rebuild users) couldn't contribute easily to RHEL. The rebuilds arose and addressed the availability
gap, but they were closed to contributions to the core Linux distro itself.
In 2012, Red Hat's move toward offering products beyond the operating system resulted in a need for an easy-to-access platform
for open source development of the upstream projects -- such as Gluster, oVirt, and RDO -- that these products are derived from.
At that time, the pace of innovation in Fedora made it not an easy platform to work with; for example, the pace of kernel updates
in Fedora led to breakage in these layered projects.
We formed a team I led at Red Hat to go about solving this problem, and, after approaching and discussing it with the CentOS Project
core team, Red Hat and the CentOS Project agreed to "
join forces ." We said
joining forces because there was no company to acquire, so we hired members of the core team and began expanding CentOS beyond being
just a rebuild project. That included investing in the infrastructure and protecting the brand. The goal was to evolve into a project
that also enabled things to be built on top of it, and a project that would be exponentially more open to contribution than ever
before -- a partial solution to the openness gap.
Bringing home the CentOS Linux users, folks who were stuck in that availability gap, closer into the Red Hat family was a wonderful
side effect of this plan. My experience going from participant to active open source contributor began in 2003, after the birth of
the Fedora Project. At that time, as a highly empathetic person I found it challenging to handle the ongoing emotional waves from
the Red Hat Linux split. Many of my long time community friends themselves were affected. As a company, we didn't know if RHEL or
Fedora Linux were going to work out. We had made a hard decision and were navigating the waters from the aftershock. Since then we've
all learned a lot, including the more difficult dynamics of an open source development methodology. So to me, bringing the CentOS
and other rebuild communities into an actual relationship with Red Hat again was wonderful to see, experience, and help bring about.
Over the past six years since finally joining forces, we made good progress on those goals. We started
Special Interest Groups (SIGs) to manage the
layered project experience, such as the Storage SIG, Virt Sig, and Cloud SIG. We created a
governance structure where there hadn't been one before. We brought
RHEL source code to be housed at git.centos.org . We designed and built out
a significant public build infrastructure and
CI/CD system in a project that had previously been sealed-boxes all the way
down.
"This brings us to today and the current chapter we are living in right now. The move to shift focus of the project to
CentOS Stream is about filling that openness gap in some key ways. Essentially, Red Hat is filling the development and contribution
gap that exists between Fedora and RHEL by shifting the place of CentOS from just downstream of RHEL to just upstream of RHEL."
Another long-winded post that doesn't address the single, core issue that no one will speak to directly: why can't CentOS
Stream and CentOS _both_ exist? Because in absence of any official response from Red Hat, the assumption is obvious: to drive RHEL sales. If that's the reason, then say it. Stop being cowards about it.
Redhat has no obligation to maintain both CentOS 8 and CentOS stream. Heck, they have no obligation to maintain CentOS
either. Maintaining both will only increase the workload of CentOS maintainers. I don't suppose you are volunteering to help them
do the work? Be thankful for a distribution that you have been using so far, and move on.
We might be better off if Red Hat hadn't gotten involved in CentOS in the first place and left it an independent project.
THEY choose to pursue this path and THEY chose to renege on assurances made around the non-stream distro. Now they're going to
choose to deal with whatever consequences come from the loss of goodwill in the community.
If they were going to pull this stunt they shouldn't have gone ahead with CentOS 8 at all and fulfilled any lifecycle
expectations for CentOS 7.
Sorry, but that's BS. CentOS Stream and CentOS Linux are not mutually replaceable. You cannot sell that BS to people
who actually know the internals of how CentOS Linux was being developed.
If the problem was in money, all RH needed to do was to ask the community. You would have been amazed at the output.
No, it is just a primitive, direct and lame way to force "free users" to either pay or become your free
beta testers (CentOS Stream *is* beta, whatever you say).
I predict you will be somewhat amazed at the actual results.
And that's not even talking about the breach of trust. How much are all your (RH's) further promises and assurances worth now?
you can spin this to the moon and back. The fact remains you just killed CentOS Linux and your users' trust by moving
the EOL of CentOS Linux 8 from 2029 to 2021.
You've alienated a few hundred thousand sysadmins that started upgrading to 8 this year and you've thrown the Scientific
Linux community under a bus. You do realize Scientific Linux was discontinued because CERN and FermiLab decided to standardize
on CentOS 8? This trickled down to a load of labs and research institutions.
Nobody forced you to buy out CentOS or offer a gratis distribution. But everybody expected you to stick to the EOL dates
you committed to. You boast about being the "Enterprise" Linux distributor. Then, don't act like a freaking start-up that announces
stuff today and vanishes a year later.
The correct way to handle this would have been to kill the future CentOS 9, giving everybody the time to cope with the
changes.
I earned my RHCE in 2003 (yes, that's seventeen years ago). Since then, many times, I've recommended RHEL or CentOS
to the clients I do freelance work for. Just a few weeks ago I was asked to give an opinion on six CentOS 7 boxes about to be
deployed into a research system to be upgraded to 8. I gave my go. Well, that didn't last long.
What do you expect me to recommend now? Buying RHEL licenses? That may or may not have a certain cost per year and
may or may not be supported until a given date? Once you grant yourself the freedom to retract whatever published information,
how can I trust you? What added value do I get over any of the community supported distributions (given that I can support myself)?
And no, CentOS Stream cannot "cover 95% (or so) of current user workloads". Stream was introduced as "a rolling preview
of what's next in RHEL".
I'm not interested at all in a "a rolling preview of what's next in RHEL". I'm interested in a stable distribution I
can trust to get updates until the given EOL date.
I guess my biggest issue is they should have announced this at the START of CentOS 8.0. Instead they started CentOS 8
with the belief it was going to be like CentOS 7 and have a long supported life cycle. What they did was basically bait and switch.
Not cool. Especially not cool for those running multiple nodes on high performance computing clusters.
I have over 300,000 CentOS nodes that require long term support, as it's impossible to turn them over rapidly. I also
have 154,000 RHEL nodes. I now have to migrate 454,000 nodes over to Ubuntu because Red Hat just made the dumbest decision I've
seen short of letting IBM acquire them. Whitehurst, how could you let this happen? Nothing like millions in lost revenue from a single customer.
Just migrated to openSUSE. Rather than crying over a dead OS, it's better to act. Red Hat is a sinking ship; it probably
won't last the next decade. A legendary failure like IBM will never have the upper hand in the Linux world. It's too competitive
now. Customers have more options to choose from. I think the person who made this decision is probably ignorant of the current
market, or a top-grade fool.
IBM/RH/CentOS keeps replaying the same talking points over and over and ignoring the actual issues people have. You say
you are reading them, but choose to ignore it and that is even worse!
People still don't understand why CentOS Stream and CentOS can't co-exist. If your goal was not to support CentOS 8, why did you
publish a 2029 EOL date, and why did you even release CentOS 8 in the first place?
Hell, you could have at least had the goodwill with the community to make CentOS 8 last until end of CentOS 7! But no,
you discontinued CentOS 8 giving people only 1 year to respond, and timed it right after EOL of CentOS6.
Why didn't you even bother asking the community first and come to a compromise or something?
Again, not a single person had a problem with CentOS stream, the problem was having the rug pulled under their feet!
So stop pretending and address it properly!
Even worse, you knew this was an issue, it's like literally #1 on your issue list "Shift Board to be more transparent
in support of becoming a contributor-focused open source project"
What a piece of stinking BS. What is this "gap" you're talking about? Nobody in the CentOS community cares about this
pre-RHEL gap. You're trying to fix something that isn't broken. And doing that the most horrible and bizzarre way imaginable.
I can only describe this as a disappointment, if not a betrayal, to the whole CentOS user base.
This decision was clearly made without considering its impact on the majority of CentOS community use cases.
If you need upstream contributions channel for RHEL, create it, do not destroy the stable downstream.
Clear and simple. All other 'explanations' are cover ups for real purpose of this action.
This stinks of politics within IBM/RH meddling with CentOS.
I hope, Rocky will bring the desired stability, that community was relying on with CentOS.
We've just agreed to cancel our RHEL subscriptions and will be moving them, and our CentOS boxes, away as well. It was
a nice run, but while it will be painful, it is a chance to move far, far away from the terrible decisions made here.
The intellectually easy answer to what is happening is that IBM is putting pressure on Red
Hat to hit bigger numbers in the future. Red Hat sees a captive audience in its CentOS userbase
and is looking to convert a percentage of it into paying customers. Everyone else can go to Ubuntu or
elsewhere if they do not want to pay...
It seemed obvious (via Occam's Razor) that CentOS had cannibalized RHEL sales for the last
time and was being put out to die. Statements like:
If you are using CentOS Linux 8 in a production environment, and are
concerned that CentOS Stream will not meet your needs, we encourage you
to contact Red Hat about options.
That line sure seemed like horrific marketing speak for "call our sales people and open your
wallet if you use CentOS in prod." ( cue evil mustache-stroking capitalist villain
).
... CentOS will no longer be downstream of RHEL as it was previously. CentOS will now
be upstream of the next RHEL minor release .
... ... ...
I'm watching Rocky Linux closely myself. While I plan to use CentOS for the vast majority of
my needs, Rocky Linux may have a place in my life as well, for example powering my home
router. Generally speaking, I want my router to be as boring as absolutely possible. That said,
even that may not stay true forever if, for example, CentOS gets good WireGuard support.
Lastly, but certainly not least, Red Hat has talked about upcoming low/no-cost RHEL options.
Keep an eye out for those! I have no idea the details, but if you currently use CentOS for
personal use, I am optimistic that there may be a way to get RHEL for free coming soon. Again,
this is just my speculation (I have zero knowledge of this beyond what has been shared
publicly), but I'm personally excited.
Red Hat has always had an uneasy relationship with CentOS. Red Hat brass always viewed it as something that steals Red Hat licenses. So
this "stop the steal" move might not be IBM-inspired, but it is firmly in the IBM tradition. And like many similar IBM moves it will backfire.
The hiring of CentOS developers in 2014 gave them unprecedented control over the project. Why on Earth do they now want independent
projects like Rocky Linux to re-emerge and fill the vacuum? They can't avoid the side effect of using the GPL -- it allows clones. Why it
is better to have a project that is hostile to Red Hat rather than an "in-house", domesticated project is unclear to me. As many large enterprises
deploy a mix of Red Hat and CentOS, the initial reaction might be in the opposite direction from what Red Hat brass expected: they will get fewer
licenses, not more, by adopting the "One IBM way".
On Hacker News , the leading comment was: "Imagine if you were running
a business, and deployed CentOS 8 based on the 10-year lifespan
promise . You're totally screwed now, and Red Hat knows it. Why on earth didn't they make this switch starting with CentOS 9????
Let's not sugar coat this. They've betrayed us."
A popular tweet from The Best Linux Blog In the Unixverse, nixcraft,
an account with over 200 thousand subscribers, went: "Oracle buys Sun: Solaris Unix, Sun servers/workstations, and MySQL went to
/dev/null. IBM buys Red Hat: CentOS is going to
>/dev/null. Note to self: If a big vendor such as Oracle, IBM, MS, and others buys your fav software, start the migration procedure
ASAP."
Many others joined this chorus of annoyed CentOS users, blaming IBM for taking their favorite Linux away
from them. Still others screamed that Red Hat was betraying open source itself.
... ... ...
There are companies that sell appliances based on CentOS. Websense/Forcepoint is one of
them. The Websense appliance runs the base OS of CentOS, on top of which runs their
Web-filtering application. Same with RSA. Their NetWitness SIEM runs on top of CentOS.
Likewise, there are now countless Internet servers out there that run CentOS. There's now a
huge user base of CentOS out there.
This is why the Debian project is so important. I will be converting everything that is
currently CentOS to Debian. Those who want to use the Ubuntu fork of Debian, that is also
probably a good idea.
A former Red Hat executive confided, "CentOS was gutting sales. The customer perception was
'it's from Red Hat and it's a clone of RHEL, so it's good to go!' It's not. It's a second-rate
copy." From where this person sits, "This is 100% defensive to stave off more losses to
CentOS."
Still another ex-Red Hat official said: If it wasn't for CentOS, Red Hat would have been a
10-billion-dollar company before Red
Hat became a billion-dollar business.
Yet another Red Hat staffer snapped, "Look at the CentOS FAQ . It says right there:
CentOS Linux is NOT Red Hat Linux, it is NOT Fedora Linux. It is NOT Red Hat Enterprise
Linux. It is NOT RHEL. CentOS Linux does NOT contain Red Hat® Linux, Fedora, or Red
Hat® Enterprise Linux.
CentOS Linux is NOT a clone of Red Hat® Enterprise Linux.
CentOS Linux is built from publicly available source code provided by Red Hat, Inc for Red
Hat Enterprise Linux in a completely different (CentOS Project maintained) build system.
patch is a command used to apply patch files to files such as source code or configuration files. A patch file holds the
difference between an original file and a new file. To produce that difference, or patch, we use the
diff
tool.
Software consists of a body of source code, and that source code is changed by developers over time. Shipping a complete
new copy of every file for each change is neither practical nor fast, so distributing only the changes is the better way. The changes
are applied to the old file, and the resulting new, patched file is then compiled for the new version of the software.
In this step we will create a patch file, but first we need some simple source code in two different versions. We will call the source
code file myapp.c.
#include <stdio.h>

int main(void) {
    printf("Hi poftut\n");
    printf("This is new line as a patch\n");
    return 0;
}
Now we will create a patch file named myapp.patch:
$ diff -u myapp_old.c myapp.c > myapp.patch
We can print the myapp.patch file with the following command:
$ cat myapp.patch
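For reference, the unified diff stored in myapp.patch would look roughly like the sketch below. This assumes myapp_old.c is identical to myapp.c except that it lacks the second printf line; the timestamps will of course differ on your system.
--- myapp_old.c	2021-01-01 10:00:00.000000000 +0000
+++ myapp.c	2021-01-01 10:05:00.000000000 +0000
@@ -1,6 +1,7 @@
 #include <stdio.h>
 
 int main(void) {
     printf("Hi poftut\n");
+    printf("This is new line as a patch\n");
     return 0;
 }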
Apply Patch File
Now we have a patch file, and we assume it has been transferred to the system that holds the old version of the source code.
We will simply apply the patch file there. Here is what the patch file contains:
the name of the file to be patched
the content differences
$ patch < myapp.patch
Take Backup Before Applying Patch
One useful feature is taking a backup before applying a patch. We will use the -b option to take a backup.
In this example we will patch our source code file with myapp.patch:
$ patch -b < myapp.patch
The backup name will be the same as the source code file with a .orig extension appended,
so the backup file will be named myapp.c.orig.
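A quick way to confirm the backup is a simple listing (a hypothetical run; your output layout may differ):
$ ls myapp.c*
myapp.c  myapp.c.orig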
Set Backup File Version
When taking a backup, a backup file may already exist, so we need a way to save multiple backups without overwriting.
The -V option sets the versioning mechanism used for the original file. In this example we will use numbered versioning:
$ patch -b -V numbered < myapp.patch
As we can see, the new backup file gets a numbered name such as myapp.c.~1~.
Validate Patch File Without Applying or Dry run
We may want to only validate the patch or preview the result of patching. The --dry-run option
emulates the patching process without actually changing any file:
$ patch --dry-run < myapp.patch
Reverse Patch
Sometimes we may need to apply a patch in reverse, undoing changes that were previously applied. We can use the -R parameter
for this operation. In this example, reversing the patch leaves the source matching myapp_old.c rather than myapp.c.
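The original write-up does not show the reverse command itself; assuming the same myapp.patch as above, it would presumably be:
$ patch -R < myapp.patch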
First, make a copy of the source tree: ## Original source code is in lighttpd-1.4.35/ directory ##
$ cp -R lighttpd-1.4.35/ lighttpd-1.4.35-new/
Cd to lighttpd-1.4.35-new directory and make changes as per your requirements: $ cd lighttpd-1.4.35-new/
$ vi geoip-mod.c
$ vi Makefile
Finally, create a patch with the following command: $ cd ..
$ diff -rupN lighttpd-1.4.35/ lighttpd-1.4.35-new/ > my.patch
You can use the my.patch file to patch the lighttpd-1.4.35 source code on a different computer/server
using the patch command as discussed above, e.g. patch -p1 run from inside the source directory.
See the man pages of diff(1) and patch(1) for more information and usage.
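To make that last step concrete, here is a hedged sketch of applying the patch on the target machine (-p1 strips the leading lighttpd-1.4.35/ path component recorded in the diff; adjust the paths to your own layout):
## on the target machine, with the unmodified source tree and my.patch present
$ cd lighttpd-1.4.35/
$ patch -p1 < ../my.patch
$ make clean && make   # rebuild with the patched sources, if desired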
Happy to report that we've invested exactly one day in CentOS 7 to CentOS 8
migration. Thanks, IBM. Now we can turn our full attention to Debian and never look back.
Here's a hot tip for the IBM geniuses that came up with this. Rebrand CentOS as New Coke, and
you've got yourself a real winner.
"... If you need official support, Oracle support is generally cheaper than RedHat. ..."
"... You can legally run OL free and have access to patches/repositories. ..."
"... Full binary compatibility with RedHat so if anything is certified to run on RedHat, it automatically certified for Oracle Linux as well. ..."
"... Premium OL subscription includes a few nice bonuses like DTrace and Ksplice. ..."
"... Forgot to mention that converting RedHat Linux to Oracle is very straightforward - just matter of updating yum/dnf config to point it to Oracle repositories. Not sure if you can do it with CentOS (maybe possible, just never needed to convert CentOS to Oracle). ..."
My office switched the bulk of our RHEL to OL years ago, and we find it a great product with
great support, and we only need to buy support for the systems we actually want supported.
Oracle provided scripts to convert EL5, EL6, and EL7 systems, and I was able to convert some
EL4 systems I still have running. (It's a matter of going through the list of installed
packages, using 'rpm -e --justdb' to remove each package from the rpm database, and re-installing the
package (without dependencies) from the OL ISO.)
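A rough sketch of that per-package routine, assuming the OL ISO is mounted at /mnt/ol-iso and using "somepkg" as a hypothetical stand-in for each package you walk through (both names are placeholders, not from the original comment):
$ rpm -qa --qf '%{NAME}\n' > pkglist.txt                 # capture the list of installed packages
$ rpm -e --justdb --nodeps somepkg                       # drop the package from the rpm database only
$ rpm -ivh --nodeps /mnt/ol-iso/RPMS/somepkg-*.rpm       # re-install the OL build without dependency checks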
We have been using Oracle Linux exclusively for the last 5-6 years for everything - thousands of
servers, both for internal use and for a hundred or so customers.
Not once have we regretted it, had any issues, or been tempted to move to RedHat, let alone
CentOS.
I found Oracle Linux has several advantages over RedHat/CentOS:
If you need official support, Oracle support is generally cheaper than RedHat.
You can legally run OL free and have access to patches/repositories.
Full binary compatibility with RedHat, so if anything is certified to run on RedHat, it is automatically certified for Oracle Linux as well.
It is very easy to switch between supported and free setups (say, you have a proof-of-concept setup running free OL, but then it is
promoted to production status - it is just a matter of registering the box with Oracle, no need to reinstall/reconfigure anything). You
can easily move a license/support from one box to another, so you always run the same OS and do
not have to decide up front (RedHat for production / CentOS for dev/test).
You have a choice to run the good old RedHat kernel or use the newer Oracle kernel (which is pretty much a vanilla kernel
with minimal modifications - just newer). We generally run Oracle kernels on all boxes unless
we have to support a particularly pedantic customer who insists on using the old RedHat kernel.
Premium OL subscription includes a few nice bonuses like DTrace and Ksplice.
Overall, it is a pleasure to work with and support OL.
Negatives:
I found the RedHat knowledge base / documentation is much better than Oracle's.
Oracle does not offer extensive support for "advanced" products like JBoss, Directory Server,
etc. Obviously Oracle has its own equivalent commercial offerings (WebLogic, etc.) and prefers
customers to use them. Some complain about the quality of Oracle's support. I can't really comment
on that. I haven't had much exposure to RedHat support; maybe I used it a couple of times and it was
good. Oracle support can be slower, but in most cases it is good/sufficient. Actually, over
the last few years support quality for Linux has improved noticeably - I guess Oracle pushes
their cloud very aggressively and as a result invests in Linux support (as Oracle cloud, aka
OCI, runs on Oracle Linux).
Forgot to mention that converting RedHat Linux to Oracle is very straightforward -
it is just a matter of updating the yum/dnf config to point to the Oracle repositories. Not sure if you
can do it with CentOS (maybe possible, I just never needed to convert CentOS to
Oracle).
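As a hedged illustration of what "pointing yum/dnf at the Oracle repositories" can look like, here is a minimal repo stanza modeled on Oracle's public yum server; the repo id, file name, and exact baseurl layout are assumptions that should be checked against yum.oracle.com before use:
# /etc/yum.repos.d/oracle-linux-ol8.repo  (illustrative sketch)
[ol8_baseos_latest]
name=Oracle Linux 8 BaseOS Latest ($basearch)
baseurl=https://yum.oracle.com/repo/OracleLinux/OL8/baseos/latest/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1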
In the end IBM/Red Hat might even lose money, as powerful organizations, such as universities, might abandon Red Hat as the
platform. Or maybe not: Red Hat managed to push systemd down everyone's throat without any major hit to revenue. Why not
repeat the trick with CentOS? In any case, IBM now owns enterprise Linux, and the bitter complaints and threats of retribution in this forum are
just a symptom that development is now completely driven by corporate brass, and all key decisions belong to them.
Community-wise, this is plain bad news for Open Source and all Open Source communities. IBM explained to them very clearly: you
do not matter. And an organized minority always beats a disorganized majority. Actually, most large organizations will
probably stick with a Red Hat-compatible OS, moving to Oracle Linux or Rocky Linux, if it materializes, not to Debian.
What is interesting is that most people here believe that when security patches stop, that is the end of life for the
particular Linux version. It is an interesting superstition, and it shows how conditioned by corporations Linux folk are and
how far from the BSD folk they actually are. Security is an architectural thing first and foremost. Patches are just a band-aid;
they can't change the general security situation in Linux no matter how hard anyone tries. But they now serve as a powerful tool
of corporate mind control over the user population. Fear is a powerful instrument of mind control.
In reality, the security of most systems on an internal network does not change one bit with patches. And on an external network only
applications with open ports matter (that's why ssh should be restricted to the subnets that actually use it, not opened to the
whole world).
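For what it's worth, restricting ssh to an internal subnet is short work on a firewalld-based system; a hedged sketch, with 10.0.0.0/24 standing in for whatever management subnet applies (run it from the console or from inside that subnet, or you will lock yourself out):
$ sudo firewall-cmd --permanent --remove-service=ssh
$ sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.0.0.0/24" service name="ssh" accept'
$ sudo firewall-cmd --reload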
Notable quotes:
"... Bad idea. The whole point of using CentOS is it's an exact binary-compatible rebuild of RHEL. With this decision RH is killing CentOS and inviting to create a new *fork* or use another distribution ..."
"... We all knew from the moment IBM bought Redhat that we were on borrowed time. IBM will do everything they can to push people to RHEL even if that includes destroying a great community project like CentOS. ..."
"... First CoreOS, now CentOS. It's about time to switch to one of the *BSDs. ..."
"... I guess that means the tens of thousands of cores of research compute I manage at a large University will be migrating to Debian. ..."
"... IBM is declining, hence they need more profit from "useless" product line. So disgusting ..."
"... An entire team worked for months on a centos8 transition at the uni I work at. I assume a small portion can be salvaged but reading this it seems most of it will simply go out the window ..."
"... Unless the community can center on a new single proper fork of RHEL, it makes the most sense (to me) to seek refuge in Debian as it is quite close to CentOS in stability terms. ..."
"... Another one bites the dust due to corporate greed, which IBM exemplifies ..."
"... More likely to drive people entirely out of the RHEL ecosystem. ..."
"... Don't trust Red Hat. 1 year ago Red Hat's CTO Chris Wright agreed in an interview: 'Old school CentOS isn't going anywhere. Stream is available in parallel with the existing CentOS builds. In other words, "nothing changes for current users of CentOS."' https://www.zdnet.com/article/red-hat-introduces-rolling-release-centos-stream/ ..."
"... 'To be exact, CentOS Stream is an upstream development platform for ecosystem developers. It will be updated several times a day. This is not a production operating system. It's purely a developer's distro.' ..."
"... Read again: CentOS Stream is not a production operating system. 'Nuff said. ..."
"... This makes my decision to go with Ansible and CentOS 8 in our enterprise simple. Nope, time to got with Puppet or Chef. ..."
"... Ironic, and it puts those of us who have recently migrated many of our development serves to CentOS8 in a really bad spot. Luckily we haven't licensed RHEL8 production servers yet -- and now that's never going to happen. ..."
"... What IBM fails to understand is that many of us who use CentOS for personal projects also work for corporations that spend millions of dollars annually on products from companies like IBM and have great influence over what vendors are chosen. This is a pure betrayal of the community. Expect nothing less from IBM. ..."
"... IBM is cashing in on its Red Hat acquisition by attempting to squeeze extra licenses from its customers.. ..."
"... Hoping that stabbing Open Source community in the back, will make it switch to commercial licenses is absolutely preposterous. This shows how disconnected they're from reality and consumed by greed and it will simply backfire on them, when we switch to Debian or any other LTS alternative. ..."
"... Centos was handy for education and training purposes and production when you couldn't afford the fees for "support", now it will just be a shadow of Fedora. ..."
"... There was always a conflict of interest associated with Redhat managing the Centos project and this is the end result of this conflict of interest. ..."
"... The reality is that someone will repackage Redhat and make it just like Centos. The only difference is that Redhat now live in the same camp as Oracle. ..."
"... Everyone predicted this when redhat bought centos. And when IBM bought RedHat it cemented everyone's notion. ..."
"... I am senior system admin in my organization which spends millions dollar a year on RH&IBM products. From tomorrow, I will do my best to convince management to minimize our spending on RH & IBM ..."
"... IBM are seeing every CentOS install as a missed RHEL subscription... ..."
"... Some years ago IBM bought Informix. We switched to PostgreSQL, when Informix was IBMized. One year ago IBM bought Red Hat and CentOS. CentOS is now IBMized. Guess what will happen with our CentOS installations. What's wrong with IBM? ..."
"... Remember when RedHat, around RH-7.x, wanted to charge for the distro, the community revolted so much that RedHat saw their mistake and released Fedora. You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time. ..."
"... As I predicted, RHEL is destroying CentOS, and IBM is running Red Hat into the ground in the name of profit$. Why is anyone surprised? I give Red Hat 12-18 months of life, before they become another ordinary dept of IBM, producing IBM Linux. ..."
"... Happy to donate and be part of the revolution away the Corporate vampire Squid that is IBM ..."
"... Red Hat's word now means nothing to me. Disagreements over future plans and technical direction are one thing, but you *lied* to us about CentOS 8's support cycle, to the detriment of *everybody*. You cost us real money relying on a promise you made, we thought, in good faith. ..."
Bad idea. The whole point of using CentOS is it's an exact binary-compatible rebuild
of RHEL. With this decision RH is killing CentOS and inviting to create a new *fork* or use
another distribution. Do you realize how much market share you will be losing and how much
chaos you will be creating with this?
"If you are using CentOS Linux 8 in a production environment, and are concerned that
CentOS Stream will not meet your needs, we encourage you to contact Red Hat about options".
So this is the way RH is telling us they don't want anyone to use CentOS anymore and switch
to RHEL?
That's exactly what they're saying. We all knew from the moment IBM bought Redhat
that we were on borrowed time. IBM will do everything they can to push people to RHEL even if
that includes destroying a great community project like CentOS.
Wow. Well, I guess that means the tens of thousands of cores of research compute I
manage at a large University will be migrating to Debian. I've just started preparing to
shift from Scientific Linux 7 to CentOS due to SL being discontinued by 2024. Glad I've only
just started - not much work to throw away.
An entire team worked for months on a CentOS 8 transition at the uni I work at. I assume a small portion can be salvaged,
but reading this it seems most of it will simply go out the window. Does anyone know if this decision to dump CentOS 8
is final?
Unless the community can center on a new single proper fork of RHEL, it makes the
most sense (to me) to seek refuge in Debian as it is quite close to CentOS in stability
terms.
Already existing functioning distribution ecosystem, can probably do good with influx
of resources to enhance the missing bits, such as further improving SELinux support and
expanding Debian security team.
I say this without any official or unofficial involvement with the Debian project,
other than being a user.
And we have just launched hundreds of CentOS 8 servers.
Another one bites the dust due to corporate greed, which IBM exemplifies. This is
why I shuddered when they bought RH. There is nothing that IBM touches that gets better,
other than the bottom line of their suits!
This is a big mistake. RedHat did this with RedHat Linux 9, the market-leading Linux,
and created Fedora, now an also-ran to Ubuntu. I spent a lot of time during Covid converting
from earlier versions to 8, and now will have to review that work with my
customer.
I just finished building a CentOS 8 web server, worked out all the nooks and
crannies and was very satisfied with the result. Now I have to do everything from scratch?
The reason why I chose this release was that every website and its brother were giving a 2029
EOL. Changing that is the worst betrayal of trust possible for the CentOS community. It's
unbelievable.
What a colossal blunder: a pivot from the long-standing mission of an OS providing
stability, to an unstable development platform, in a manner that betrays its current users.
They should remove the "C" from CentOS because it no longer has any connection to a community
effort. I wonder if this is a move calculated to drive people from a free near clone of RHEL
to a paid RHEL subscription? More likely to drive people entirely out of the RHEL
ecosystem.
From a RHEL perspective I understand why they'd want it this way. CentOS was
probably cutting deep into potential RedHat license sales. Though why or how RedHat would
have a say in how CentOS is being run in the first place is.. troubling.
From a CentOS perspective you may as well just take the project out back and close it now. If
people wanted to run beta-test tier RHEL they'd run Fedora. "LATER SECURITY FIXES AND
UNTESTED 'FEATURES'?! SIGN ME UP!" -nobody
I'll probably run CentOS 7 until the end and then swap over to Debian when support starts
hurting me. What a pain.
Don't trust Red Hat. 1 year ago Red Hat's CTO Chris Wright agreed in an interview:
'Old school CentOS isn't going anywhere. Stream is available in parallel with the existing
CentOS builds. In other words, "nothing changes for current users of CentOS."' https://www.zdnet.com/article/red-hat-introduces-rolling-release-centos-stream/
I'm a current user of old school CentOS, so keep your promise, Mr CTO.
That was quick: "Old school CentOS isn't going anywhere. Stream is available in parallel with the existing CentOS builds.
In other words, "nothing changes for current users of CentOS."
From the same article: 'To be exact, CentOS Stream is an upstream development platform for
ecosystem developers. It will be updated several times a day. This is
not a production operating system. It's purely a developer's distro.'
Read again: CentOS Stream is not a production operating system. 'Nuff
said.
This makes my decision to go with Ansible and CentOS 8 in our enterprise simple.
Nope, time to go with Puppet or Chef. IBM did what I thought they would: screw up Red Hat. My
company is dumping IBM software everywhere - this means we need to dump CentOS now
too.
Ironic, and it puts those of us who have recently migrated many of our development
servers to CentOS 8 in a really bad spot. Luckily we haven't licensed RHEL 8 production servers
yet -- and now that's never going to happen.
I can't believe what IBM is actually doing. This is a direct move against all that
open source means. They want to do exactly the same thing they're doing with awx (vs. ansible
tower). You're going against everything that stands for open source. And on top of that you
choose to stop offering support for CentOS 8, all of a sudden! What a horrid move on your
part. The only reliable choice that remains is probably going to be Debian/Ubuntu. What a
waste...
What IBM fails to understand is that many of us who use CentOS for personal projects
also work for corporations that spend millions of dollars annually on products from companies
like IBM and have great influence over what vendors are chosen. This is a pure betrayal of the community. Expect nothing less from IBM.
This is exactly it. IBM is cashing in on its Red Hat acquisition by attempting to squeeze extra licenses
from its customers.. while not taking into account the fact that Red Hat's strong adoption
into the enterprise is a direct consequence of engineers using the nonproprietary version to
develop things at home in their spare time.
Having an open source, non support contract version of your OS is exactly what
drives adoption towards the supported version once the business decides to put something into
production.
They are choosing to kill the golden goose in order to get the next few eggs faster.
IBM doesn't care about anything but its large enterprise customers. Very stereotypically
IBM.
So sad.
Not only breaking the support promise but so quickly (2021!)
Business wise, a lot of business software is providing CentOS packages and support.
Like hosting panels, backup software, virtualization, Management. I mean A LOT of money
worldwide is in dark waters now with this announcement. It took years for CentOS to appear in
their supported and tested distros. It will disappear now much faster.
Community-wise, this is plain bad news for Open Source and all Open Source
communities. This is sad. I wonder, are open source developers nowadays happy to spend so
many hours on something that will in the end benefit only IBM "subscribers"? I
don't think they are.
I don't want to give up on CentOS, but this is a strong, life-changing decision. My
background is Linux engineering with over 15+ years of hardcore experience. CentOS has always
been my go-to when an organization didn't have the appetite for RHEL and the $75-a-year
license fee per instance. I successfully fought off Ubuntu takeovers at 2 of the last 3 organizations
I've been with. I can't, and won't, fight off any more, and will start advocating for
Ubuntu or pure Debian moving forward.
RIP CentOS. Red Hat killed a great project. I wonder if Ansible will be next?
Hoping that stabbing Open Source community in the back, will make it switch to
commercial licenses is absolutely preposterous. This shows how disconnected they're from
reality and consumed by greed and it will simply backfire on them, when we switch to Debian
or any other LTS alternative. I can't imagine moving everything I have so caressed and loved to a
mess like Ubuntu.
Asinine. This is completely ridiculous. I have migrated several servers from CentOS
7 to 8 recently, with more to go. We also have a RHEL subscription for outward-facing servers,
CentOS internal. This type of change should absolutely have been announced for CentOS 9. This
is garbage, saying one year from now when it was supposed to be until 2029. A complete betrayal.
One year to move everything??? Stupid.
Now I'm going to be looking at a couple of other options, but it won't be RHEL after
this type of move. This has destroyed my trust in RHEL, as I'm sure IBM pushed for this. You
will be losing my RHEL money once I choose and migrate. I get that companies exist to make money,
and that's fine. This, though, is purely a naked money grab that betrays an established
timeline and is about to force massive work on lots of people in a tiny timeframe, saying "f
you, customers." You will no longer get my money for doing that to me.
In hindsight it's clear to see that the only reason RHEL took over CentOS was to
kill the competition.
This is also highly frustrating, as I just completed new CentOS 8 and RHEL 8 builds for
non-production and production servers and had already begun deployments. Now I'm left in the
situation of finding a new Linux distribution for our enterprise while I sweat out the last
few years of RHEL7/CentOS7. Ubuntu is probably a no-go, as their enterprise tooling is somewhat
lacking, and I am of the opinion that they will likely be gobbled up by Microsoft in the
next few years.
Unfortunately, the short-sighted RH/IBMer who made this decision failed to realize
that a lot of admins who used CentOS at home and in the enterprise also advocated for and drove
sales towards RedHat. Now with this announcement, I'm afraid the damage is done, and
even if you were to take back your announcement, trust has been broken and the blowback will
ultimately mean the death of CentOS and reduced sales of RHEL. There is, however, an
opportunity for another corporation such as SUSE, which is owned by Micro Focus, to capitalize
on this epic blunder simply by announcing an LTS version of openSUSE Leap. This would in turn
move people/corporations to the SUSE platform, which in turn would drive sales for
SLES.
So the inevitable has come to pass, what was once a useful Distro will disappear
like others have. Centos was handy for education and training purposes and production when
you couldn't afford the fees for "support", now it will just be a shadow of
Fedora.
This is disgusting. Bah. As a CTO I will now - today - assemble my teams and develop a plan to migrate all DataCenters back to Debian for good. I will also instantly instruct the termination of all
mirroring of your software.
For the software (CentOS) I hope for a quick death that will not drag on for
years.
This is a bit sad. There was always a conflict of interest associated with Redhat
managing the Centos project, and this is the end result of that conflict of interest.
There is a genuine benefit to Redhat from the existence of Centos; however, it would appear
that that benefit isn't great enough, and some arse clown thought that forcing users to
migrate would increase Redhat's revenue.
The reality is that someone will repackage Redhat
and make it just like Centos. The only difference is that Redhat now lives in the same camp as
Oracle.
Thankfully we just started our migration from CentOS 7 to 8 and this surely puts a
stop to that. Even if CentOS backtracks on this decision because of community backlash, the
reality is the trust is lost. You've just given a huge leg for Ubuntu/Debian in the
enterprise. Congratulations!
I am a senior system admin in my organization, which spends millions of dollars a year on
RH & IBM products. From tomorrow, I will do my best to convince management to minimize our
spending on RH & IBM, and start looking for alternatives to replace existing RH & IBM
products under my watch.
Some years ago IBM bought Informix. We switched to PostgreSQL, when Informix was
IBMized. One year ago IBM bought Red Hat and CentOS. CentOS is now IBMized. Guess what will
happen with our CentOS installations. What's wrong with IBM?
Remember when RedHat, around RH-7.x, wanted to charge for the distro, the community
revolted so much that RedHat saw their mistake and released Fedora. You can fool all the people some of the time, and some of the people all the time,
but you cannot fool all the people all the time.
Even though RedHat/CentOS has a very large share of the Linux server market, it will
suffer the same fate as Novell (which had 85% of the market), disappearing into darkness!
As I predicted, RHEL is destroying CentOS, and IBM is running Red Hat into the
ground in the name of profit$. Why is anyone surprised? I give Red Hat 12-18 months of life,
before they become another ordinary dept of IBM, producing IBM Linux.
CentOS is dead. Time to
either go back to Debian and its derivatives, or just pay for RHEL, or IBMEL, and suck it
up.
I am mid-migration from Rhel/Cent6 to 8. I now have to stop a major project for
several hundred systems. My group will have to go back to rebuild every CentOS 8 system we've
spent the last 6 months deploying.
Congrats fellas, you did it. You perfected the transition to Debian from
CentOS.
I find it kind of funny, I find it kind of sad.
The dreams in which I am moving 1.5K+ machines to whatever distro I have yet to find fitting as a
replacement are the...
Wait. How could anyone seriously consider cutting short an
already published EOL a good idea?
I literally had to convince people to move from Ubuntu and
Debian installations to CentOS for the sake of stability and longer support, just to end up
looking like a clown now, because with a single move the distro has been deprived of both.
Red Hat's word now means nothing to me. Disagreements over future plans and technical direction are one thing, but you
*lied* to us about CentOS 8's support cycle, to the detriment of *everybody*. You cost us real money relying on a promise you
made, we thought, in good faith. It is now clear Red Hat no longer knows what "good faith" means, and acts only as a
Trumpian vacuum of wealth.
I have been using CentOS for over 10 years and one of the things I loved about it was how
stable it has been. Now, instead of being a stable release, it is changing to the beta
testing ground for RHEL 8.
And instead of 10 years of support, you need to update to the latest dot release. This
has me very concerned.
Well, 10 years - have you ever contributed anything to the CentOS community, or paid
them a wage, or at least donated some decent hardware for development, or have you maybe just been a
parasite all the time, and are now surprised that someone else has to buy your lunches
for a change?
If you think you could have done it even better, why not take the RH sources and make your own
FreeRHos-whatever distro, then support, maintain and patch all the subsequent versions for
free?
That's ridiculous. RHEL has benefitted from the free testing and corner case usage of
CentOS users and made money hand-over-fist on RHEL. Shed no tears for using CentOS for free.
That is the benefit of opening the core of your product.
You are missing a very important point. The goal of the CentOS project was to rebuild RHEL,
nothing else. If money was the problem, they could have asked for donations, and it would then be
clear whether there can be financial support for the rebuild or not.
Putting the entire community in front of a done deal is disheartening, and no one will trust
Red Hat that they are pro-community, not to mention the Red Hat employees who sit on the CentOS
board - who can trust their integrity after this fiasco?
This is a breach of trust from the already published timeline of CentOS 8 where the EOL
was May 2029. One year's notice for such a massive change is unacceptable.
This! People already started deploying CentOS 8 with the expectation of 10 years of
updates. - Even a migration to RHEL 8 would imply completely reprovisioning the systems which
is a big ask for systems deployed in the field.
I am considering creating another rebuild of RHEL and may even be able to hire some
people for this effort. If you are interested in helping, please join the HPCng slack (link
on the website hpcng.org).
This sounds like a great idea and getting control away from corporate entities like IBM
would be helpful. Have you considered reviving the Scientific Linux project?
Feel free to contact me. I'm a long time RH user (since pre-RHEL when it was RHL) in both
server and desktop environments. I've built and maintained some RPMs for some private
projects that used CentOS as foundation. I can contribute compute and storage resources. I
can program in a few different languages.
Thank you for considering starting another RHEL rebuild. If and when you do, please
consider making your new website a Brave Verified Content Creator. I earn a little bit of
money every month using the Brave browser, and I end up donating it to Wikipedia every month
because there are so few Brave Verified websites.
The verification process is free, and takes about 15 to 30 minutes. I believe that the
Brave browser now has more than 8 million users.
Wikipedia. The so-called organization that gets tons of money from tech oligarchs and
yet whines about needing money and support? (If you don't believe me, just check their
biggest donors.) They also tend to be insanely biased and allow whoever pays the most to write on
their site... Seriously, find another organisation to donate your money to.
Not sure what I could do, but I will keep an eye out for things I could help with. This change
to CentOS really pisses me off, as I have stood up 2 CentOS servers for my work's production
environment in the last year.
LOL... CentOS has been RH from 2014 to date. What did you expect? As long as CentOS is so good
and stable that it cuts into RHEL sales... RH and now IBM just think of profit. It was
expected; search the net for comments back in 2014.
Amazon Linux 2 is the next generation of Amazon Linux, a Linux server operating system from
Amazon Web Services (AWS). It provides a secure, stable, and high performance execution
environment to develop and run cloud and enterprise applications. With Amazon Linux 2, you get
an application environment that offers long term support with access to the latest innovations
in the Linux ecosystem. Amazon Linux 2 is provided at no additional charge.
Amazon Linux 2 is available as an Amazon Machine Image (AMI) for use on Amazon Elastic
Compute Cloud (Amazon EC2). It is also available as a Docker container image and as a virtual
machine image for use on Kernel-based Virtual Machine (KVM), Oracle VM VirtualBox, Microsoft
Hyper-V, and VMware ESXi. The virtual machine images can be used for on-premises development
and testing. Amazon Linux 2 supports the latest Amazon EC2 features and includes packages that
enable easy integration with AWS. AWS provides ongoing security and maintenance updates for
Amazon Linux 2.
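If you just want to poke at it locally, the container image mentioned above is published on Docker Hub as amazonlinux; a quick, hedged way to try it (the :2 tag selects Amazon Linux 2):
$ docker pull amazonlinux:2
$ docker run -it --rm amazonlinux:2 bash
Inside the container, cat /etc/system-release should identify the Amazon Linux 2 release.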
"... Redhat endorsed that moral contract when you brought official support to CentOS back in 2014. ..."
"... Now that you decided to turn your back on the community, even if another RHEL fork comes out, there will be an exodus of the community. ..."
"... Also, a lot of smaller developers won't support RHEL anymore because their target weren't big companies, making less and less products available without the need of self supporting RPM builds. ..."
"... Gregory Kurtzer's fork will take time to grow, but in the meantime, people will need a clear vision of the future. ..."
"... This means that we'll now have to turn to other linux flavors, like Debian, or OpenSUSE, of which at least some have hardware vendor support too, but with a lesser lifecycle. ..."
"... I think you destroyed a large part of the RHEL / CentOS community with this move today. ..."
"... Maybe you'll get more RHEL subscriptions in the next months yielding instant profits, but the long run growth is now far more uncertain. ..."
As a lot of us here, I've been in the CentOS / RHEL community for more than 10 years.
Reasons of that choice were stability, long term support and good hardware vendor support.
Like many others, I've built much of my skill set on this Linux flavor over the years, and have been involved in the community
through numerous bug reports, bug fixes, and howto write-ups.
Using CentOS was the good alternative to RHEL on a lot of non-critical systems, and for smaller companies like the one I work
for.
The moral contract has always been a rock-solid "Community Enterprise OS" in exchange for community support, bug reports & fixes,
and growing interest from developers.
Redhat endorsed that moral contract when you brought official support to CentOS back in 2014.
Now that you decided to turn your back on the community, even if another RHEL fork comes out, there will be an exodus of the
community.
Also, a lot of smaller developers won't support RHEL anymore because their target weren't big companies, making less and less
products available without the need of self supporting RPM builds.
This will make RHEL less and less widely used by startups, enthusiasts and others.
CentOS Stream being the upstream of RHEL, I highly doubt system architects and developers are willing to be beta testers for RHEL.
Providing a free RHEL subscription for Open Source projects just sounds like your next step to keep a bit of the exodus from happening,
but I'd bet that "free" subscription will get more and more restrictions later on, pushing to a full RHEL support contract.
As a lot of people here, I won't go the Oracle way, they already did a very good job destroying other company's legacy.
Gregory Kurtzer's fork will take time to grow, but in the meantime, people will need a clear vision of the future.
This means that we'll now have to turn to other linux flavors, like Debian, or OpenSUSE, of which at least some have hardware
vendor support too, but with a lesser lifecycle.
I think you destroyed a large part of the RHEL / CentOS community with this move today.
Maybe you'll get more RHEL subscriptions in the next months yielding instant profits, but the long run growth is now far more
uncertain.
IBM have a history of taking over companies and turning them into junk, so I am not that
surprised. I am surprised that it took IBM brass so long to kill CentOS after Red Hat
acquisition.
Notable quotes:
"... By W3Tech 's count, while Ubuntu is the most popular Linux server operating system with 47.5%, CentOS is number two with 18.8% and Debian is third, 17.5%. RHEL? It's a distant fourth with 1.8%. ..."
"... Red Hat will continue to support CentOS 7 and produce it through the remainder of the RHEL 7 life cycle . That means if you're using CentOS 7, you'll see support through June 30, 2024 ..."
I'm far from alone. By W3Tech 's count,
while Ubuntu is the most popular Linux server operating system with 47.5%, CentOS is number two
with 18.8% and Debian is third, 17.5%. RHEL? It's a distant fourth with 1.8%.
If you think you just realized why Red Hat might want to remove CentOS from the server
playing field, you're far from the first to think that.
Red Hat will continue to support CentOS 7 and produce it through the remainder of the
RHEL 7 life
cycle . That means if you're using CentOS 7, you'll see support through June 30, 2024
I wonder what Red Hat's plan is WRT companies like Blackmagic Design that ship CentOS as part of their studio equipment.
The cost of a RHEL license isn't the issue when the overall cost of the equipment is in the tens of thousands, but unless I
missed a change in Red Hat's trademark policy, Blackmagic cannot distribute a modified version of RHEL without removing all
trademarks first.
I don't think a rolling release distribution is what BMD wants.
My gut feeling is that something like Scientific Linux will make a return and current CentOS users will just use that.
We firmly believe that Oracle Linux is the best Linux distribution on the market today. It's reliable, it's affordable, it's 100%
compatible with your existing applications, and it gives you access to some of the most cutting-edge innovations in Linux like Ksplice
and DTrace.
But if you're here, you're a CentOS user. Which means that you don't pay for a distribution at all, for at least some of your
systems. So even if we made the best paid distribution in the world (and we think we do), we can't actually get it to you... or
can we?
We're putting Oracle Linux in your hands by doing two things:
We've made the Oracle Linux software available free of charge
We've created a simple script to switch your CentOS systems to Oracle Linux
We think you'll like what you find, and we'd love for you to give it a try.
FAQ
Wait, doesn't Oracle Linux cost money?
Oracle Linux support costs money. If you just want the software, it's 100% free. And it's all in our yum repo at
yum.oracle.com . Major releases, errata, the whole shebang. Free source
code, free binaries, free updates, freely redistributable, free for production use. Yes, we know that this is Oracle, but it's
actually free. Seriously.
Is this just another CentOS?
Inasmuch as they're both 100% binary-compatible with Red Hat Enterprise Linux, yes, this is just like CentOS. Your applications
will continue to work without any modification whatsoever. However, there are several important differences that make Oracle Linux
far superior to CentOS.
How is this better than CentOS?
Well, for one, you're getting the exact same bits our paying enterprise customers are getting . So that means a few
things. Importantly, it means virtually no delay between when Red Hat releases a kernel and when Oracle Linux does:
So if you don't want to risk another CentOS delay, Oracle Linux is a better alternative for you. It turns out that our enterprise
customers don't like to wait for updates -- and neither should you.
What about the code quality?
Again, you're running the exact same code that our enterprise customers are, so it has to be rock-solid. Unlike CentOS, we
have a large paid team of developers, QA, and support engineers that work to make sure this is reliable.
What if I want support?
If you're running Oracle Linux and want support, you can purchase a support contract from us (and it's significantly cheaper
than support from Red Hat). No reinstallation, no nothing -- remember, you're running the same code as our customers.
Contrast that with the CentOS/RHEL story. If you find yourself needing to buy support, have fun reinstalling your system with
RHEL before anyone will talk to you.
Why are you doing this?
This is not some gimmick to get you running Oracle Linux so that you buy support from us. If you're perfectly happy running
without a support contract, so are we. We're delighted that you're running Oracle Linux instead of something else.
At the end of the day, we're proud of the work we put into Oracle Linux. We think we have the most compelling Linux offering
out there, and we want more people to experience it.
centos2ol.sh can convert your CentOS 6 and 7 systems to Oracle Linux.
What does the script do?
The script has two main functions: it switches your yum configuration to use the Oracle Linux yum server to update some core
packages and installs the latest Oracle Unbreakable Enterprise Kernel. That's it! You won't even need to restart after switching,
but we recommend you do to take advantage of UEK.
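A minimal, hedged sketch of running the switch, assuming you have already obtained centos2ol.sh from Oracle (download location not shown here) and are on a CentOS 6 or 7 box:
$ sudo sh centos2ol.sh        # switches yum config to Oracle Linux repos and installs UEK
$ sudo reboot                 # optional, but recommended to boot into the UEK kernel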
Is it safe?
The centos2ol.sh script takes precautions to back up and restore any repository files it changes, so if it does not work on
your system it will leave it in working order. If you encounter any issues, please get in touch with us by emailing
[email protected] .
IBM is messing up RedHat after the takeover last year. This is most unfortunate news
for the Free and Open-Source community. Companies have been using CentOS as a testing bed before
committing to purchasing RHEL subscription licenses.
We need to rethink before rolling out RedHat/CentOS 8 training in our Centre.
You can use Oracle Linux in exactly the same way as you did CentOS except that you have
the option of buying support without reinstalling a "commercial" variant.
Everything's in the public repos except a few addons like ksplice. You don't even have to
go through the e-delivery to download the ISOs any more, they're all linked from
yum.oracle.com
Not likely. Oracle Linux has extensive use by paying Oracle customers as a host OS for
their database software and in general purposes for Oracle Cloud Infrastructure.
Oracle customers would be even less thrilled about Streams than CentOS users. I hate to
admit it, but Oracle has the opportunity to take a significant chunk of the CentOS user base
if they don't do anything Oracle-ish, myself included.
I'll be pretty surprised if they don't completely destroy their own windfall opportunity,
though.
IBM has discontinued CentOS. Oracle is producing a working replacement for CentOS. If, at
some point, Oracle attacks their product's users in the way IBM has here, then one can move
to Debian, but for now, it's a working solution, as CentOS no longer is.
You can use Oracle Linux exactly like CentOS, only better
Ang says: December 9, 2020 at 5:04 pm "I never thought we'd see the day Oracle is more
trustworthy than RedHat/IBM. But I guess such things do happen with time..."
Notable quotes:
"... The link says that you don't have to pay for Oracle Linux . So switching to it from CentOS 8 could be a very easy option. ..."
"... this quick n'dirty hack worked fine to convert centos 8 to oracle linux 8, ymmv: ..."
Oracle Linux is free. The only thing that costs money is support for it. I quote
"Yes, we know that this is Oracle, but it's actually free.
Seriously."
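The four example commands discussed below were not carried over into this excerpt; a reconstruction consistent with the explanation (output shown under each command) would look roughly like this:
$ echo '$(echo a)'
$(echo a)
$ echo "$(echo 'a')"
a
$ echo "a$(echo 'b')c"
abc
$ echo "a`echo 'b'`c"
abc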
In the first command, as an example, we used ' single quotes. This resulted in
our subshell command, inside the single quotes, to be interpreted as literal text instead of a
command. This is standard Bash: ' indicates literal, " indicates that
the string will be parsed for subshells and variables.
In the second command we swap the ' to " and thus the string is
parsed for actual commands and variables. The result is that a subshell is being started,
thanks to our subshell syntax ( $() ), and the command inside the subshell (
echo 'a' ) is being executed literally, and thus an a is produced,
which is then inserted in the overarching / top level echo . The command at that
stage can be read as echo "a" and thus the output is a .
In the third command, we further expand this to make it clearer how subshells work
in-context. We echo the letter b inside the subshell, and this is joined on the
left and the right by the letters a and c yielding the overall output
to be abc in a similar fashion to the second command.
In the fourth and last command, we exemplify the alternative Bash subshell syntax of using
back-ticks instead of $() . It is important to know that $() is the
preferred syntax, and that in some remote cases the back-tick based syntax may yield some
parsing errors where the $() does not. I would thus strongly encourage you to
always use the $() syntax for subshells, and this is also what we will be using in
the following examples.
Example 2: A little more complex
$ touch a
$ echo "-$(ls [a-z])"
-a
$ echo "-=-||$(ls [a-z] | xargs ls -l)||-=-"
-=-||-rw-rw-r-- 1 roel roel 0 Sep 5 09:26 a||-=-
Here, we first create an empty file by using the touch a command. Subsequently,
we use echo to output something which our subshell $(ls [a-z]) will
generate. Sure, we can execute the ls directly and yield more or less the same
result, but note how we are adding - to the output as a prefix.
In the final command, we insert some characters at the front and end of the
echo command which makes the output look a bit nicer. We use a subshell to first
find the a file we created earlier ( ls [a-z] ) and then - still
inside the subshell - pass the results of this command (which would be only a
literally - i.e. the file we created in the first command) to the ls -l using the
pipe ( | ) and the xargs command. For more information on xargs,
please see our articles xargs for beginners with
examples and multi threaded xargs with
examples .
Example 3: Double quotes inside subshells and sub-subshells!
echo "$(echo "$(echo "it works")" | sed 's|it|it surely|')"
it surely works
Cool, no? Here we see that double quotes can be used inside the subshell without generating
any parsing errors. We also see how a subshell can be nested inside another subshell. Are you
able to parse the syntax? The easiest way is to start "in the middle or core of all subshells"
which is in this case would be the simple echo "it works" .
This command will output it works as a result of the subshell call $(echo
"it works") . Picture it works in place of the subshell, i.e.
echo "$(echo "it works" | sed 's|it|it surely|')"
it surely works
This looks simpler already. Next, it is helpful to know that the sed command
will do a substitution (thanks to the s command just before the |
separator) of the text it to it surely. You can read the
sed command as replace __it__ with __it surely__. The output of the subshell
will thus be it surely works, i.e.
echo "it surely works"
it surely works
Conclusion
In this article, we have seen that subshells surely work (pun intended), and that they can
be used in wide variety of circumstances, due to their ability to be inserted inline and within
the context of the overarching command. Subshells are very powerful and once you start using
them, well, there will likely be no stopping. Very soon you will be writing something like:
$ VAR="goodbye"; echo "thank $(echo "${VAR}" | sed 's|^| and |')" | sed 's|k |k you|'
This one is for you to try and play around with! Thank you and goodbye
Screen, or as I like to refer to it, "the admin's little helper". Screen is a window
manager that multiplexes a physical terminal between several processes.
Here are a couple of quick reasons you might use screen:
Let's say you have an unreliable internet connection. With screen, if you get knocked
out of your current session, you can always reconnect to it later.
Or let's say you need more terminals. Instead of opening a new terminal or a new tab, just
create a new terminal inside of screen.
Here are the screen shortcuts to help you on your way: Screen shortcuts
And here are some of the top 10 awesome Linux screen tips urfix.com uses all the time, if not
daily.
1) Attach screen over ssh
ssh -t remote_host screen -r
Directly attach a remote screen session (saves a useless parent bash process)
This command starts screen with 'htop', 'nethogs' and 'iotop' in split-screen. You have to
have these three commands (of course) and specify the interface for nethogs – mine is
wlan0, I could have acquired the interface from the default route extending the command but
this way is simpler.
htop is a wonderful top replacement with many interactive commands and configuration
options. nethogs is a program which tells which processes are using the most bandwidth. iotop
tells which processes are using the most I/O.
The command creates a temporary "screenrc" file which it uses for doing the
triple-monitoring. You can see several examples of screenrc files here:
http://www.softpanorama.org/Utilities/Screen/screenrc_examples.shtml
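The original one-liner for this tip did not survive the copy; a minimal sketch of an equivalent setup using a throwaway screenrc (assumes htop, nethogs and iotop are installed, that wlan0 is your interface, and that the layout commands may need tweaking on your version of screen):
$ cat > /tmp/triple.screenrc <<'EOF'
screen -t htop htop
split
focus
screen -t nethogs nethogs wlan0
split
focus
screen -t iotop iotop
EOF
$ sudo screen -c /tmp/triple.screenrc   # nethogs and iotop need root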
4) Share a 'screen' session
screen -x
After person A starts his screen session with `screen`, person B can attach to the screen of
person A with `screen -x`. Good to know if you need to give or receive support from/to others.
5) Start screen in detached mode
screen -d -m [<command>]
Start screen in detached mode, i.e., already running in the background. The command is optional,
but then what is the purpose of starting a blank screen process that way?
It's useful when invoking from a script (I manage to run many wget downloads in parallel, for
example).
6) Resume a detached screen session, resizing to fit the current terminal
screen -raAd
By default, screen tries to restore its old window sizes when attaching to resizable
terminals. This command is the command-line equivalent to typing ^A F to fit an open screen
session to the window
7) use screen as a terminal emulator to connect to serial
consoles
screen /dev/tty<device> 9600
Use GNU screen as a terminal emulator for anything serial-console related, e.g.:
screen /dev/ttyS0 9600
8) ssh and attach to a screen in one line.
ssh -t user@host screen -x <screen name>
If you know the benefits of screen, then this might come in handy for you. Instead of
ssh'ing into a machine and then running a screen command, this can all be done in one line.
Just have the person on the machine you're ssh'ing into run something like screen -S debug .
Then you would run ssh -t user@host screen -x debug
and be attached to the same screen session.
Christian Severin , 2017-09-29 09:47:52
You can use e.g. date --set='-2 years' to set the clock back two years, leaving
all other elements identical. You can change month and day of month the same way. I haven't
checked what happens if that calculation results in a datetime that doesn't actually exist,
e.g. during a DST switchover, but the behaviour ought to be identical to the usual "set both
date and time to concrete values" behaviour. – Christian Severin Sep 29 '17
at 9:47
Run that as root or under sudo . Changing only one of the year/month/day is
more of a challenge and will involve repeating bits of the current date. There are also GUI
date tools built in to the major desktop environments, usually accessed through the
clock.
To change only part of the time, you can use command substitution in the date string:
date -s "2014-12-25 $(date +%H:%M:%S)"
will change the date, but keep the time. See man date for formatting details to
construct other combinations: the individual components are %Y , %m
, %d , %H , %M , and %S .
There's no option to do that. You can use date -s "2014-12-25 $(date +%H:%M:%S)"
to change the date and reuse the current time, though. – Michael Homer Aug 22 '14 at
9:55
chaos , 2014-08-22 09:59:58
System time
You can use date to set the system date. The GNU implementation of
date (as found on most non-embedded Linux-based systems) accepts many different
formats to set the time, here a few examples:
set only the year:
date -s 'next year'
date -s 'last year'
set only the month:
date -s 'last month'
date -s 'next month'
set only the day:
date -s 'next day'
date -s 'tomorrow'
date -s 'last day'
date -s 'yesterday'
date -s 'friday'
set all together:
date -s '2009-02-13 11:31:30' #that's a magical timestamp
Hardware time
Now the system time is set, but you may want to sync it with the hardware clock:
Use --show to print the hardware time:
hwclock --show
You can set the hardware clock to the current system time:
hwclock --systohc
Or the system time to the hardware clock
hwclock --hctosys
garethTheRed , 2014-08-22 09:57:11
You change the date with the date command. However, the command expects a full
date as the argument:
# date -s "20141022 09:45"
Wed Oct 22 09:45:00 BST 2014
To change part of the date, output the current date with the date part that you want to
change as a string and all others as date formatting variables. Then pass that to the
date -s command to set it:
# date -s "$(date +'%Y12%d %H:%M')"
Mon Dec 22 10:55:03 GMT 2014
changes the month to the 12th month - December.
The date formats are:
%Y - Year
%m - Month
%d - Day
%H - Hour
%M - Minute
Balmipour , 2016-03-23 09:10:21
For those like me running ESXi 5.1, here's what the system answered:
~ # date -s "2016-03-23 09:56:00"
date: invalid date '2016-03-23 09:56:00'
I had to use a specific ESXi command instead:
esxcli system time set -y 2016 -M 03 -d 23 -H 10 -m 05 -s 00
Hope it helps!
Brook Oldre , 2017-09-26 20:03:34
I used the date command and its time format to successfully set the date from the
terminal shell on Android Things, which uses the Linux kernel.
Is Oracle A Real Alternative To CentOS?
December 8, 2020, Frank Cox
Is Oracle a real alternative to
CentOS? I'm asking because I genuinely don't know; I've never paid any attention to Oracle's Linux offering before now.
But today I've seen a couple of the folks here mention Oracle Linux and I see that Oracle even offers a script to convert
CentOS 7 to Oracle. Nothing about
CentOS 8 in that script, though.
That page seems to say that Oracle Linux is everything that
CentOS was prior to today's announcement.
But someone else here just said that the first thing Oracle Linux does is to sign you up for an Oracle account.
So, for people who know a lot more about these things than I do, what's the downside of using Oracle Linux versus CentOS? I assume
that things like epel/rpmfusion/etc will work just as they do under CentOS since it's supposed to be bit-for-bit compatible like
CentOS was. What does the "sign up with Oracle" stuff actually do, and can you cancel, avoid, or strip it out if you don't want it?
Based on my extremely limited knowledge around Oracle Linux, it sounds like that might be a go-to solution for CentOS refugees.
$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.7 (Maipo)
$ cat /etc/oracle-release
Oracle Linux Server release 7.7
This is generally done so that software officially certified only for the upstream enterprise vendor, and that tests the contents of
the redhat-release file, is satisfied. Using the lsb_release command on an Oracle Linux 7.6 machine:
# lsb_release -a
LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: OracleServer
Description:    Oracle Linux Server release 7.6
Release:        7.6
Codename:       n/a
#
KVM is a subscription feature. They want you to run Oracle VM Server for x86 (which is based on Xen) so they can try to upsell
you to use the Oracle Cloud. There's other things, but that stood out immediately.
That's it. I know Oracle's history, but I think for Oracle Linux, they may be much better than their reputation. I'm currently
fiddling around with it, and I like it very much. Plus there's a nice script to turn an existing CentOS installation into an Oracle
Linux system.
On Tuesday, 15.12.2020, 10:14 +0100, Ruslanas Gibovskis wrote:
According to the Oracle license terms and official statements, it is "free to download, use and share. There is no license
cost, no need for a contract, and no usage audits."
Recommendation only: "For business-critical infrastructure, consider Oracle Linux Support." Only optional, not a mandatory
requirement. see: https://www.oracle.com/linux
No need for such a construct. Oracle Linux can be used on any production system without the legal requirement to obtain an extra
commercial license, same as with CentOS.
So Oracle Linux can currently be used free as in "free beer" on any system, even for commercial purposes. Nevertheless, Oracle
can change those license terms in the future, but this applies just as well to all other company-backed Linux distributions.
--
Peter Huebner
We've gone over several things you can do with Ansible on your system, but we haven't yet
discussed how to provision a system. Here's an example of provisioning a virtual machine (VM)
with the OpenStack cloud solution.
- name: create a VM in openstack
  os_server:
    name: cloudera-namenode
    state: present
    cloud: openstack
    region_name: andromeda
    image: 923569a-c777-4g52-t3y9-cxvhl86zx345
    flavor_ram: 20146
    flavor: big
    auto_ip: yes
    volumes: cloudera-namenode
All OpenStack modules start with os_ , which makes it easier to find them. The
above configuration uses the os_server module, which lets you add or remove an instance. It
includes the name of the VM, its state, its cloud options, and how it authenticates to the API.
More information about cloud.yml
is available in the OpenStack docs, but if you don't want to use cloud.yml, you can use a
dictionary that lists your credentials using the auth option. If you want to
delete the VM, just change state: to absent .
Say you have a list of servers you shut down because you couldn't figure out how to get the
applications working, and you want to start them again. You can use
os_server_action to restart them (or rebuild them if you want to start from
scratch).
Here is an example that starts the server and tells the modules the name of the
instance:
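The task itself did not survive the formatting here; a minimal sketch, reusing the instance name
from the playbook above, might look like this:
- name: Start the cloudera-namenode instance
  os_server_action:
    cloud: openstack
    action: start
    server: cloudera-namenode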
Most OpenStack modules use similar options. Therefore, to rebuild the server, we can use the
same options but change the action to rebuild and add the
image we want it to use:
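Again as a sketch (the image ID is the one from the earlier example):
- name: Rebuild the cloudera-namenode instance from an image
  os_server_action:
    cloud: openstack
    action: rebuild
    server: cloudera-namenode
    image: 923569a-c777-4g52-t3y9-cxvhl86zx345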
For this laptop experiment, I decided to use Debian 32-bit as my starting point, as it
seemed to work best on my older hardware. The bootstrap YAML script is intended to take a
bare-minimal OS install and bring it up to some standard. It relies on a non-root account to be
available over SSH and little else. Since a minimal OS install usually contains very little
that is useful to Ansible, I use the following to hit one host and prompt me to log in with
privilege escalation:
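The exact invocation is not shown here; a sketch of that kind of one-host, prompt-for-everything
run (the playbook, host, and user names are placeholders) would be:
ansible-playbook bootstrap.yml --limit newhost --user localuser --ask-pass --ask-become-pass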
The script makes use of Ansible's raw module to set some base
requirements. It ensures Python is available, upgrades the OS, sets up an Ansible control
account, transfers SSH keys, and configures sudo privilege escalation. When bootstrap
completes, everything should be in place to have this node fully participate in my larger
Ansible inventory. I've found that bootstrapping bare-minimum OS installs is nuanced (if there
is interest, I'll write another article on this topic).
The account YAML setup script is used to set up (or reset) user accounts for each family
member. This keeps user IDs (UIDs) and group IDs (GIDs) consistent across the small number of
machines we have, and it can be used to fix locked accounts when needed. Yes, I know I could
have set up Network Information Service or LDAP authentication, but the number of accounts I
have is very small, and I prefer to keep these systems very simple. Here is an excerpt I found
especially useful for this:
---
- name: Set user accounts
  hosts: all
  gather_facts: false
  become: yes
  vars_prompt:
    - name: passwd
      prompt: "Enter the desired ansible password:"
      private: yes
  tasks:
    - name: Add child 1 account
      user:
        state: present
        name: child1
        password: "{{ passwd | password_hash('sha512') }}"
        comment: Child One
        uid: 888
        group: users
        shell: /bin/bash
        generate_ssh_key: yes
        ssh_key_bits: 2048
        update_password: always
        create_home: yes
The vars_prompt section prompts me for a password, which is passed through a Jinja2 filter
to produce the desired password hash. This means I don't need to hardcode passwords into the
YAML file and can run it to change passwords as needed.
The software installation YAML file is still evolving. It includes a base set of utilities
for the sysadmin and then the stuff my users need. This mostly consists of ensuring that the
same graphical user interface (GUI) and all the same programs, games, and media files
are installed on each machine. Here is a small excerpt of the software for my young
children:
- name: Install kids software
  apt:
    name: "{{ packages }}"
    state: present
  vars:
    packages:
      - lxde
      - childsplay
      - tuxpaint
      - tuxtype
      - pysycache
      - pysiogame
      - lmemory
      - bouncy
I created these three Ansible scripts using a virtual machine. When they were perfect, I
tested them on the D620. Then converting the Mini 9 was a snap; I simply loaded the same
minimal Debian install then ran the bootstrap, accounts, and software configurations. Both
systems then functioned identically.
For a while, both sisters enjoyed their respective computers, comparing usage and exploring
software features.
The moment of truth
A few weeks later came the inevitable. My older daughter finally came to the conclusion that
her pink Dell Mini 9 was underpowered. Her sister's D620 had superior power and screen real
estate. YouTube was the new rage, and the Mini 9 could not keep up. As you can guess, the poor
Mini 9 fell into disuse; she wanted a new machine, and sharing her younger sister's would not
do.
I had another D620 in my pile. I replaced the BIOS battery, gave it a new SSD, and upgraded
the RAM. Another perfect example of breathing new life into old hardware.
I pulled my Ansible scripts from source control, and everything I needed was right there:
bootstrap, account setup, and software. By this time, I had forgotten a lot of the specific
software installation information. But details like account UIDs and all the packages to
install were all clearly documented and ready for use. While I surely could have figured it out
by looking at my other machines, there was no need to spend the time! Ansible had it all
clearly laid out in YAML.
Not only was the YAML documentation valuable, but Ansible's automation made short work of
the new install. The minimal Debian OS install from USB stick took about 15 minutes. The
subsequent shape up of the system using Ansible for end-user deployment only took another nine
minutes. End-user acceptance testing was successful, and a new era of computing calmness was
brought to my family (other parents will understand!).
Conclusion
Taking the time to learn and practice Ansible with this exercise showed me the true value of
its automation and documentation abilities. Spending a few hours figuring out the specifics for
the first example saves time whenever I need to provision or fix a machine. The YAML is clear,
easy to read, and -- thanks to Ansible's idempotency -- easy to test and refine over time. When
I have new ideas or my children have new requests, using Ansible to control a local virtual
machine for testing is a valuable time-saving tool.
Doing sysadmin tasks in your free time can be fun. Spending the time to automate and
document your work pays rewards in the future; instead of needing to investigate and relearn a
bunch of things you've already solved, Ansible keeps your work documented and ready to apply so
you can move onto other, newer fun things!
Ansible works by connecting to nodes and sending small programs called modules to be
executed remotely. This makes it a push architecture, where configuration is pushed from
Ansible to servers without agents, as opposed to the pull model, common in agent-based
configuration management systems, where configuration is pulled.
These modules are mapped to resources and their respective states , which are
represented in YAML files. They enable you to manage virtually everything that has an
API, CLI, or configuration file you can interact with, including network devices like load
balancers, switches, firewalls, container orchestrators, containers themselves, and even
virtual machine instances in a hypervisor or in a public (e.g., AWS, GCE, Azure) and/or private
(e.g., OpenStack, CloudStack) cloud, as well as storage and security appliances and system
configuration.
With Ansible's batteries-included model, hundreds of modules are included and any task in a
playbook has a module behind it.
The contract for building modules is simple: JSON in the stdout . The
configurations declared in YAML files are delivered over the network via SSH/WinRM -- or any
other connection plugin -- as small scripts to be executed in the target server(s). Modules can
be written in any language capable of returning JSON, although most Ansible modules (except for
Windows PowerShell) are written in Python using the Ansible API (this eases the development of
new modules).
Modules are one way of expanding Ansible capabilities. Other alternatives, like dynamic
inventories and plugins, can also increase Ansible's power. It's important to know about them
so you know when to use one instead of the other.
Plugins are divided into several categories with distinct goals, like Action, Cache,
Callback, Connection, Filters, Lookup, and Vars. The most popular plugins are:
Connection plugins: These implement a way to communicate with servers in your inventory
(e.g., SSH, WinRM, Telnet); in other words, how automation code is transported over the
network to be executed.
Filters plugins: These allow you to manipulate data inside your playbook. This is a
Jinja2 feature that is harnessed by Ansible to solve infrastructure-as-code problems.
Lookup plugins: These fetch data from an external source (e.g., env, file, Hiera,
database, HashiCorp Vault).
Although many modules are delivered with Ansible, there is a chance that your problem is not
yet covered or it's something too specific -- for example, a solution that might make sense
only in your organization. Fortunately, the official docs provide excellent guidelines on
developing
modules .
IMPORTANT: Before you start working on something new, always check for open pull requests,
ask developers at #ansible-devel (IRC/Freenode), or search the development list and/or existing
working groups to see if a
module exists or is in development.
Signs that you need a new module instead of using an existing one include:
Conventional configuration management methods (e.g., templates, file, get_url,
lineinfile) do not solve your problem properly.
You have to use a complex combination of commands, shells, filters, text processing with
magic regexes, and API calls using curl to achieve your goals.
Your playbooks are complex, imperative, non-idempotent, and even non-deterministic.
In the ideal scenario, the tool or service already has an API or CLI for management, and it
returns some sort of structured data (JSON, XML, YAML).
Identifying good and bad
playbooks
"Make love, but don't make a shell script in YAML."
So, what makes a bad playbook?
- name: Read a remote resource
  command: "curl -v http://xpto/resource/abc"
  register: resource
  changed_when: False

- name: Create a resource in case it does not exist
  command: "curl -X POST http://xpto/resource/abc -d '{ config:{ client: xyz, url: http://beta, pattern: *.* } }'"
  when: "'404' in resource.stdout"

# Leave it here in case I need to remove it hehehe
#- name: Remove resource
#  command: "curl -X DELETE http://xpto/resource/abc"
#  when: resource.stdout == 1
Aside from being very fragile -- what if the resource state includes a 404 somewhere? -- and
demanding extra code to be idempotent, this playbook can't update the resource when its state
changes.
Playbooks written this way disrespect many infrastructure-as-code principles. They're not
readable by human beings, are hard to reuse and parameterize, and don't follow the declarative
model encouraged by most configuration management tools. They also fail to be idempotent and to
converge to the declared state.
Bad playbooks can jeopardize your automation adoption. Instead of harnessing configuration
management tools to increase your speed, they have the same problems as an imperative
automation approach based on scripts and command execution. This creates a scenario where
you're using Ansible just as a means to deliver your old scripts, copying what you already have
into YAML files.
Here's how to rewrite this example to follow infrastructure-as-code principles.
- name: XPTO
  xpto:
    name: abc
    state: present
    config:
      client: xyz
      url: http://beta
      pattern: "*.*"
The benefits of this approach, based on custom modules , include:
It's declarative -- resources are properly represented in YAML.
It's idempotent.
It converges from the declared state to the current state .
It's readable by human beings.
It's easily parameterized or reused.
Implementing a custom module
Let's use WildFly , an open source
Java application server, as an example to introduce a custom module for our not-so-good
playbook:
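The playbook itself is not reproduced above; as a sketch of what such a "not-so-good" playbook
typically looks like (the WildFly paths and CLI strings below are assumptions, not the article's
original):
- name: Check whether the DemoDS datasource exists
  command: "/opt/wildfly/bin/jboss-cli.sh -c '/subsystem=datasources/data-source=DemoDS:read-resource'"
  register: datasource_check
  failed_when: false
  changed_when: false

- name: Create the datasource only when it is missing
  command: >
    /opt/wildfly/bin/jboss-cli.sh -c
    '/subsystem=datasources/data-source=DemoDS:add(driver-name=h2,
    jndi-name=java:jboss/datasources/DemoDS, connection-url=jdbc:h2:mem:demo)'
  when: "'success' not in datasource_check.stdout"
The problems with this style are exactly the ones listed next: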
JBoss-CLI returns plaintext in a JSON-like syntax; therefore, this approach is very
fragile, since we need a type of parser for this notation. Even a seemingly simple parser can
be too complex to handle the many exceptions.
JBoss-CLI is just an interface to send requests to the management API (port 9990).
Sending an HTTP request is more efficient than opening a new JBoss-CLI session,
connecting, and sending a command.
It does not converge to the desired state; it only creates the resource when it doesn't
exist.
A custom module for this would look like:
- name: Configure datasource
  jboss_resource:
    name: "/subsystem=datasources/data-source=DemoDS"
    state: present
    attributes:
      driver-name: h2
      connection-url: "jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE"
      jndi-name: "java:jboss/datasources/DemoDS"
      user-name: sa
      password: sa
      min-pool-size: 20
      max-pool-size: 40
This playbook is declarative, idempotent, more readable, and converges to the desired state
regardless of the current state.
Why learn to build custom modules?
Good reasons to learn how to build custom modules include:
Improving existing modules
You have bad playbooks and want to improve them, or
You don't, but want to avoid having bad playbooks.
Knowing how to build a module considerably improves your ability to debug problems in
playbooks, thereby increasing your productivity.
" abstractions save us time working, but they don't save us time learning." -- Joel Spolsky,
The Law of
Leaky Abstractions
Custom Ansible modules 101
JSON (JavaScript Object Notation) in stdout : that's the contract!
They can be written in any language, but Python is usually the best option (or the second best).
Most modules delivered with Ansible (lib/ansible/modules) are written in Python and
should support compatible Python versions.
The Ansible way
First step:
git clone https://github.com/ansible/ansible.git
Navigate in lib/ansible/modules/ and read the existing modules code.
Your tools are: Git, Python, virtualenv, pdb (Python debugger)
For comprehensive instructions, consult the
official docs .
An alternative: drop it in the library directory
library/ # if any custom modules, put them here (optional)
module_utils/ # if any custom module_utils to support modules, put them here (optional)
filter_plugins/ # if any custom filter plugins, put them here (optional)
site.yml # master playbook
webservers.yml # playbook for webserver tier
dbservers.yml # playbook for dbserver tier
roles/
common/ # this hierarchy represents a "role"
library/ # roles can also include custom modules
module_utils/ # roles can also include custom module_utils
lookup_plugins/ # or other types of plugins, like lookup in this case
It's easier to start.
Doesn't require anything besides Ansible and your favorite IDE/text editor.
This is your best option if it's something that will be used internally.
TIP: You can use this directory layout to overwrite existing modules if, for example, you
need to patch a module.
First steps
You could do it on your own -- including in another language -- or you could use the
AnsibleModule class, as it makes it easier to put JSON on stdout (exit_json(), fail_json())
in the way Ansible expects (msg, meta, has_changed, result), and it's also
easier to process the input (params[]) and log its execution (log(), debug()).
module = AnsibleModule(argument_spec=arguments, supports_check_mode=True)
try:
    if module.check_mode:
        # Do not do anything; only verify the current state and report it
        module.exit_json(changed=has_changed, meta=result, msg='Did something or not...')

    if module.params['state'] == 'present':
        # Verify the presence of the resource: is the desired state
        # (module.params['param_name']) equal to the current state?
        module.exit_json(changed=has_changed, meta=result)

    if module.params['state'] == 'absent':
        # Remove the resource in case it exists
        module.exit_json(changed=has_changed, meta=result)
NOTES: check_mode ("dry run") allows a playbook to be executed in a mode that only verifies
whether changes are required, without performing them. Also, the module_utils directory can be
used for code shared among different modules.
The Ansible codebase is heavily tested, and every commit triggers a build in its continuous
integration (CI) server, Shippable , which includes
linting, unit tests, and integration tests.
For integration tests, it uses containers and Ansible itself to perform the setup and verify
phase. Here is a test case (written in Ansible) for our custom module's sample code:
- name: Configure datasource
  jboss_resource:
    name: "/subsystem=datasources/data-source=DemoDS"
    state: present
    attributes:
      connection-url: "jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE"
    ...
  register: result

- name: assert output message that datasource was created
  assert:
    that:
      - "result.changed == true"
      - "'Added /subsystem=datasources/data-source=DemoDS' in result.msg"
An alternative: bundling a module with your role
How to spin up your infrastructure: e.g., Vagrant, Docker, OpenStack, EC2
How to verify your infrastructure tests: Testinfra and Goss
But your tests would have to be written using pytest with Testinfra or Goss, instead of
plain Ansible. If you'd like to learn more about testing Ansible roles, see my article about
using Molecule.
Secure shell (SSH) is at the heart of Ansible, at least for almost everything besides
Windows. Key (no pun intended) to using SSH efficiently with Ansible is keys ! Slight aside -- there are a lot of
very cool things you can do for security with SSH keys. It's worth perusing the authorized_keys
section of the sshd manual page
. Managing SSH keys can become laborious if you're getting into the realms of granular user
access, and although we could do it with either of my next two favourites, I prefer to use the
authorized_key module because it enables easy management through variables.
Besides the obvious function of placing a file somewhere, the file module also sets
ownership and permissions. I'd say that's a lot of bang for your buck with one module.
I'd proffer a substantial portion of security relates to setting permissions too, so the file
module plays nicely with authorized_keys .
There are so many ways to manipulate the contents of files, and I see lots of folk use
lineinfile . I've
used it myself for small tasks. However, the template module is so much clearer because you
maintain the entire file for context. My preference is to write Ansible content in such a way
that anyone can understand it easily -- which to me means not making it hard to
understand what is happening. Use of template means being able to see the entire file you're
putting into place, complete with the variables you are using to change pieces.
Many modules in the current distribution leverage Ansible as an orchestrator. They talk to
another service, rather than doing something specific like putting a file into place. Usually,
that talking is over HTTP too. In the days before many of these modules existed, you
could program an API directly using the uri module. It's a powerful access tool,
enabling you to do a lot. I wouldn't be without it in my fictitious Ansible shed.
The joker card in our pack. The Swiss Army Knife. If you're absolutely stuck for how to
control something else, use shell . Some will argue we're now talking about making Ansible a
Bash script -- but, I would say it's still better because with the use of the name parameter in
your plays and roles, you document every step. To me, that's as big a bonus as anything. Back
in the days when I was still consulting, I once helped a database administrator (DBA) migrate
to Ansible. The DBA wasn't one for change and pushed back at changing working methods. So, to
ease into the Ansible way, we called some existing DB management scripts from Ansible using the
shell module, with an informative name statement to accompany each task.
You can achieve a lot with these five modules. Yes, modules designed to do a specific task
will make your life even easier. But with a smidgen of engineering simplicity, you can achieve
a lot with very little. Ansible developer Brian Coca is a master at it, and his tips and tricks talk is always
worth a watch.
10 Ansible modules for Linux system automation
These handy modules save time and hassle by automating many of your daily tasks, and they're easy to implement with a few
commands. 26 Oct 2020, Ricardo Gerardi (Red Hat)
Ansible is a complete
automation solution for your IT environment. You can use Ansible to automate Linux and Windows
server configuration, orchestrate service provisioning, deploy cloud environments, and even
configure your network devices.
Ansible modules
abstract actions on your system so you don't need to worry about implementation details. You
simply describe the desired state, and Ansible ensures the target system matches it.
This module availability is one of Ansible's main benefits, and it is often referred to as
Ansible having "batteries included." Indeed, you can find modules for a great number of tasks,
and while this is great, I frequently hear from beginners that they don't know where to
start.
Although your choice of modules will depend exclusively on your requirements and what you're
trying to automate with Ansible, here are the top ten modules you need to get started with
Ansible for Linux system automation.
1. copy
The
copy module allows you to copy a file from the Ansible control node to the target hosts. In
addition to copying the file, it allows you to set ownership, permissions, and SELinux labels
to the destination file. Here's an example of using the copy module to copy a "message of the
day" configuration file to the target hosts:
- name: Ensure MOTD file is in place
  copy:
    src: files/motd
    dest: /etc/motd
    owner: root
    group: root
    mode: 0644
For less complex content, you can copy the content directly to the destination file without
having a local file, like this:
- name: Ensure MOTD file is in place
  copy:
    content: "Welcome to this system."
    dest: /etc/motd
    owner: root
    group: root
    mode: 0644
This module works
idempotently , which means it will only copy the file if the same file is not already in
place with the same content and permissions.
The copy module is a great option to copy a small number of files with static content. If
you need to copy a large number of files, take a look at the
synchronize module. To copy files with dynamic content, take a look at the
template module next.
2. template
The
template module works similarly to the copy module, but it processes content
dynamically using the Jinja2 templating
language before copying it to the target hosts.
For example, define a "message of the day" template that displays the target system name,
like this:
$ vi templates/motd.j2
Welcome to {{ inventory_hostname }} .
Then, instantiate this template using the template module, like this:
- name: Ensure MOTD file is in place
  template:
    src: templates/motd.j2
    dest: /etc/motd
    owner: root
    group: root
    mode: 0644
Before copying the file, Ansible processes the template and interpolates the variable,
replacing it with the target host system name. For example, if the target system name is
rh8-vm03 , the result file is:
Welcome to rh8-vm03.
While the copy module can also interpolate variables when using the
content parameter, the template module allows additional flexibility
by creating template files, which enable you to define more complex content, including
for loops, if conditions, and more. For a complete reference, check
Jinja2
documentation .
This module is also idempotent, and it will not copy the file if the content on the target
system already matches the template's content.
3. user
The user
module allows you to create and manage Linux users in your target system. This module has many
different parameters, but in its most basic form, you can use it to create a new user.
For example, to create the user ricardo with UID 2001, part of the groups
users and wheel , and password mypassword , apply the
user module with these parameters:
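The task itself is missing above; a sketch matching that description (note that the password
parameter expects a hash, hence the password_hash filter) would be:
- name: Ensure user ricardo exists
  user:
    name: ricardo
    state: present
    uid: 2001
    group: users
    groups: wheel
    password: "{{ 'mypassword' | password_hash('sha512') }}"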
Notice that this module tries to be idempotent, but it cannot guarantee that for all its
options. For instance, if you execute the previous module example again, it will reset the
password to the defined value, changing the user in the system for every execution. To make
this example idempotent, use the parameter update_password: on_create , ensuring
Ansible only sets the password when creating the user and not on subsequent runs.
You can also use this module to delete a user by setting the parameter state:
absent .
The user module has many options for you to manage multiple user aspects. Make
sure you take a look at the module documentation for more information.
4. package
The package
module allows you to install, update, or remove software packages from your target system using
the operating system standard package manager.
For example, to install the Apache web server on a Red Hat Linux machine, apply the module
like this:
- name: Ensure Apache package is installed
  package:
    name: httpd
    state: present
This module is distribution agnostic, and it works by using the underlying package
manager, such as yum/dnf for Red Hat-based distributions and apt for
Debian. Because of that, it only does basic tasks like install and remove packages. If you need
more control over the package manager options, use the specific module for the target
distribution.
Also, keep in mind that, even though the module itself works on different distributions, the
package name for each can be different. For instance, in Red Hat-based distribution, the Apache
web server package name is httpd , while in Debian, it is apache2 .
Ensure your playbooks deal with that.
This module is idempotent, and it will not act if the current system state matches the
desired state.
5. service
Use the service
module to manage the target system services using the required init system; for example,
systemd
.
In its most basic form, all you have to do is provide the service name and the desired
state. For instance, to start the sshd service, use the module like this:
- name: Ensure SSHD is started
  service:
    name: sshd
    state: started
You can also ensure the service starts automatically when the target system boots up by
providing the parameter enabled: yes .
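For example, building on the task above:
- name: Ensure SSHD is started now and enabled at boot
  service:
    name: sshd
    state: started
    enabled: yes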
As with the package module, the service module is flexible and
works across different distributions. If you need fine-tuning over the specific target init
system, use the corresponding module; for example, the module systemd .
Similar to the other modules you've seen so far, the service module is also
idempotent.
6. firewalld
Use the firewalld
module to control the system firewall with the firewalld daemon on systems that
support it, such as Red Hat-based distributions.
For example, to open the HTTP service on port 80, use it like this:
- name: Ensure port 80 (http) is open
  firewalld:
    service: http
    state: enabled
    permanent: yes
    immediate: yes
You can also specify custom ports instead of service names with the port
parameter. In this case, make sure to specify the protocol as well. For example, to open TCP
port 3000, use this:
- name: Ensure port 3000/tcp is open
  firewalld:
    port: 3000/tcp
    state: enabled
    permanent: yes
    immediate: yes
You can also use this module to control other firewalld aspects like zones or
complex rules. Make sure to check the module's documentation for a comprehensive list of
options.
7. file
The file
module allows you to control the state of files and directories -- setting permissions,
ownership, and SELinux labels.
For instance, use the file module to create a directory /app owned
by the user ricardo , with read, write, and execute permissions for the owner and
the group users :
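The example did not survive the formatting here; a sketch matching that description (mode 0770
gives read, write, and execute to the owner and the group):
- name: Ensure the /app directory exists with the right ownership and permissions
  file:
    path: /app
    state: directory
    owner: ricardo
    group: users
    mode: 0770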
You can also use this module to set file properties on directories recursively by using the
parameter recurse: yes or delete files and directories with the parameter
state: absent .
This module works with idempotency for most of its parameters, but some of them may make it
change the target path every time. Check the documentation for more details.
8.
lineinfile
The lineinfile
module allows you to manage single lines on existing files. It's useful to update targeted
configuration on existing files without changing the rest of the file or copying the entire
configuration file.
For example, add a new entry to your hosts file like this:
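A sketch of such a task (the address and hostname are only illustrative):
- name: Ensure an entry for appserver is present in /etc/hosts
  lineinfile:
    path: /etc/hosts
    line: "192.0.2.10 appserver.example.com appserver"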
You can also use this module to change an existing line by applying the parameter
regexp to look for an existing line to replace. For example, update the
sshd_config file to prevent root login by modifying the line PermitRootLogin
yes to PermitRootLogin no :
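A sketch of that task:
- name: Ensure root login over SSH is disabled
  lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^PermitRootLogin'
    line: PermitRootLogin no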
Note: Use the service module to restart the SSHD service to enable this change.
This module is also idempotent, but, in case of line modification, ensure the regular
expression matches both the original and updated states to avoid unnecessary changes.
9.
unarchive
Use the unarchive
module to extract the contents of archive files such as tar or zip
files. By default, it copies the archive file from the control node to the target machine
before extracting it. Change this behavior by providing the parameter remote_src:
yes .
For example, extract the contents of a .tar.gz file that has already been
downloaded to the target host with this syntax:
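A sketch of such a task (the archive path is illustrative; the destination directory must
already exist):
- name: Extract the application archive already present on the target
  unarchive:
    src: /tmp/app.tar.gz
    dest: /opt/app
    remote_src: yes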
Some archive technologies require additional packages to be available on the target system;
for example, the package unzip to extract .zip files.
Depending on the archive format used, this module may or may not work idempotently. To
prevent unnecessary changes, you can use the parameter creates to specify a file
or directory that this module would create when extracting the archive contents. If this file
or directory already exists, the module does not extract the contents again.
10.
command
The command
module is a flexible one that allows you to execute arbitrary commands on the target system.
Using this module, you can do almost anything on the target system as long as there's a command
for it.
Even though the command module is flexible and powerful, it should be used with
caution. Avoid using the command module to execute a task if there's another appropriate module
available for that. For example, you could create users by using the
command module to execute the useradd command, but you should
use the user module instead, as it abstracts many details away from you, taking
care of corner cases and ensuring the configuration only changes when necessary.
For cases where no modules are available, or to run custom scripts or programs, the
command module is still a great resource. For instance, use this module to run a
script that is already present in the target machine:
- name: Run the app installer
  command: "/app/install.sh"
By default, this module is not idempotent, as Ansible executes the command every single
time. To make the command module idempotent, you can use when
conditions to only execute the command if the appropriate condition exists, or the
creates argument, similarly to the unarchive module example.
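For example, assuming the installer drops a marker file (the path is illustrative):
- name: Run the app installer only if it has not run before
  command: "/app/install.sh"
  args:
    creates: /app/.installed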
What's
next?
Using these modules, you can configure entire Linux systems by copying, templating, or
modifying configuration files, creating users, installing packages, starting system services,
updating the firewall, and more.
If you are new to Ansible, make sure you check the documentation on how to create
playbooks to combine these modules to automate your system. Some of these tasks require
running with elevated privileges to work. For more details, check the privilege escalation
documentation.
As of Ansible 2.10, modules are organized in collections. Most of the modules in this list
are part of the ansible.builtin collection and are available by default with
Ansible, but some of them are part of other collections. For a list of collections, check the
Ansible documentation
.
Here, for example, is a fragment from an old collection of hardening scripts called Titan,
written for Solaris by Brad M. Powell. The example below uses ed, which is the simplest, but
probably not the optimal choice unless your primary editor is vi/VIM.
FixHostsEquiv() {
    if [ -f /etc/hosts.equiv -a -s /etc/hosts.equiv ]; then
        t_echo 2 " /etc/hosts.equiv exists and is not empty. Saving a copy..."
        /bin/cp /etc/hosts.equiv /etc/hosts.equiv.ORIG
        if grep -s "^+$" /etc/hosts.equiv
        then
            # strip the wildcard "+" entry with ed
            ed - /etc/hosts.equiv <<- !
g/^+$/d
w
q
!
        fi
    else
        t_echo 2 " No /etc/hosts.equiv - PASSES CHECK"
        exit 1
    fi
}
For VIM/Emacs users the main benefit here is that you will get to know your editor better,
instead of inventing/learning "yet another tool." That is actually also an argument against
Ansible and friends: unless you operate a cluster or another sizable set of servers, why try to
kill a bird with a cannon? A positive return on investment probably starts if you manage over 8
or even 16 boxes.
Perl also can be used. But I would recommend slurping the file into an array and operating
on lines as in an editor; a regex over the whole text is more difficult to write correctly
than a regex for a single line, although experts have no difficulty using just that. But we
seldom acquire skills we can do without :-)
On the other hand, that gives you a chance to learn the splice function ;-)
If the files are basically identical and need some slight customization, you can use the
patch utility with pdsh, but you need to learn the ropes. Like Perl, the patch
utility was also written by Larry Wall and is a very flexible tool for such tasks. You first need
to collect files from your servers into some central directory with pdsh/pdcp (which I
think is a standard RPM on RHEL and other Linuxes) or another tool, then create diffs against one
server to which you already applied the change (diff is your command language at this point),
verify on another server that the diff produces the right results, apply it, and then distribute
the resulting files back to each server, again using pdsh/pdcp. If you have a common
NFS/GPFS/Lustre filesystem for all servers, this is even simpler, as you can store both the
tree and the diffs on the common filesystem.
The same central repository of config files can be used with vi and other approaches,
creating a "poor man's Ansible" for you.
I am surprised that Perl is No. 3. It should be No. 1, as it is definitely superior to both
shell and Python for most sysadmin scripts, and it has more in common with bash (which
remains the major language) than Python does. Far more.
It looks like Python, as the language taught at universities, dominates because the number of
weak sysadmins, who just mention it but do not actually use it, exceeds the number of strong
sysadmins (who have really written at least one complex sysadmin script) by several orders of
magnitude.
What's
your favorite programming/scripting language for sysadmin work?
Life as a systems engineer is a process of continuous improvement. In the past few years, as
software-defined-everything has started to overhaul how we work in the same way virtualization
did, knowing how to write and debug software has been a critical skill for systems engineers.
Whether you are automating a small, repetitive, manual task, writing daily reporting tools, or
debugging a production outage, it is vital to choose the right tool for the job. Below, are a
few programming languages that I think all systems engineers will find useful, and also some
guidance for picking your next language to learn.
Bash
The old standby, Bash (and, to a certain extent, POSIX sh) is the go-to for many systems
engineers. The quick access to system primitives makes it ideal for ad-hoc data
transformations. Slap together curl and jq with some conditionals,
and you've got everything from a basic health check to an automated daily reporting tool.
However, once you get a few levels of iteration deep, or you're making multiple calls to
jq , you probably want to pull out a more fully-featured programming
language.
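As a sketch of that kind of curl-plus-jq glue (the endpoint and JSON field are hypothetical):
#!/bin/bash
# Ad-hoc health check: fetch a JSON status document and fail loudly if it is not "ok".
status=$(curl -s https://monitoring.example.com/health | jq -r '.status')
if [ "$status" != "ok" ]; then
    echo "health check failed: status=$status" >&2
    exit 1
fi
echo "all good"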
Python
Python's easy onboarding, wide range of libraries, and large community make it ideal for
more demanding sysadmin tasks. Daily reports might start as a few hundred lines of Bash that
are run first thing in the morning. Once this gets large enough, however, it makes sense to
move this to Python. A quick import json for simple JSON object interaction, and
import jinja2 for quickly templating out a daily HTML-formatted email.
The
languages your tools are built in
One of the powers of open source is, of course, access to the source! However, it is hard to
realize this value if you don't have an understanding of the languages these tools are built
in. An understanding of Go makes digging into the Datadog or Kubernetes codebases much easier.
Being familiar with the development and debugging tools for C and Perl allows you to quickly dig
into aberrant behavior.
The new hotness
Even if you don't have Go or Rust in your environment today, there's a good chance you'll
start seeing these languages more often. Maybe your application developers are migrating over
to Elixir. Keeping up with the evolution of our industry can frequently feel like a treadmill,
but this can be mitigated somewhat by getting ahead of changes inside of your organization.
Keep an ear to the ground and start learning languages before you need them, so you're always
prepared.
Assume I have a file with a lot of IP addresses and want to operate on those IP addresses.
For example, I want to run dig to retrieve reverse-DNS information for the IP
addresses listed in the file. I also want to skip IP addresses that start with a comment (# or
hashtag).
I'll use fileA as an example. Its contents are:
10.10.12.13 some ip in dc1
10.10.12.14 another ip in dc2
#10.10.12.15 not used IP
10.10.12.16 another IP
I could copy and paste each IP address, and then run dig manually:
$> dig +short -x 10.10.12.13
Or I could do this:
$> while read -r ip _; do [[ $ip == \#* ]] && continue; dig +short -x "$ip"; done < fileA
What if I want to swap the columns in fileA? For example, I want to put IP addresses in the
right-most column so that fileA looks like this:
some ip in dc1 10.10.12.13
another ip in dc2 10.10.12.14
not used IP #10.10.12.15
another IP 10.10.12.16
I run:
$> while read -r ip rest; do printf '%s %s\n' "$rest" "$ip"; done < fileA
If you have worked in system administration for a while, you've probably run into a system
administrator who doesn't write anything down and keeps their work a closely-guarded secret.
When I've run into administrators like this, I often ask why they do this, and the response is
usually a joking "Job security." Which may not actually be much of a joke.
Don't be that person. I've worked in several shops, and I have yet to see someone "work
themselves out of a job." What I have seen, however, is someone that can't take a week off
without being called by the team repeatedly. Or, after this person left, I have seen a team
struggle to detangle the mystery of what that person was doing, or how they were managing
systems under their control.
Designed to run from cron. Uses a different, simpler approach than etckeeper (and does not have the problem, connected with the use
of Git, of incorrect assignment of file attributes when restoring system files).
Does not use Git or any other version control system, as they proved to be of questionable utility unless there are multiple
sysadmins on the server.
If it detects a changed file, it creates a new tar file for each analyzed directory, for example /etc, /root,
and /boot.
Detects all "critical" changed files, diffs them with the previous versions, and produces a report.
All information is stored by default in /var/Dirhist_base. Directories to watch and files that are considered
important are configurable via two config files, dirhist_ignore.lst and dirhist_watch.lst, which by default are
located at the root of the /var/Dirhist_base tree (as /var/Dirhist_base/dirhist_ignore.lst and
/var/Dirhist_base/dirhist_watch.lst).
You can specify any number of watched directories and, within each directory, any number of watched files and subdirectories.
The format used is similar to YAML dictionaries, or Windows .ini files. If any of the "watched" files or directories changes, the
utility can email a report to selected email addresses to alert them about those changes. Useful when several sysadmins manage
the same server. Can also be used for checking whether the changes made were documented in Git or another version management system
(this process can be automated using the utility admpolice).
Ansible has no notion of state. Since it doesn't keep track of dependencies, the tool
simply executes a sequential series of tasks, stopping when it finishes, fails or encounters an
error . For some, this simplistic mode of automation is desirable; however, many prefer
their automation tool to maintain an extensive catalog for ordering (à la Puppet),
allowing them to reach a defined state regardless of any variance in environmental
conditions.
YAML Ain't a Markup Language (YAML), and as configuration formats go, it's easy on the eyes.
It has an intuitive visual structure, and its logic is pretty simple: indented bullet points
inherit properties of parent bullet points.
It's easy (and misleading) to think of YAML as just a list of related values, no more
complex than a shopping list. There is a heading and some items beneath it. The items below the
heading relate directly to it, right? Well, you can test this theory by writing a little bit of
valid YAML.
Open a text editor and enter this text, retaining the dashes at the top of the file and the
leading spaces for the last two items:
---
Store: Bakery
  Sourdough loaf
  Bagels
Save the file as example.yaml (or similar).
If you don't already have yamllint installed, install it:
$ sudo dnf install -y yamllint
A linter is an application that verifies the syntax of a file. The
yamllint command is a great way to ensure your YAML is valid before you hand it
over to whatever application you're writing YAML for (Ansible, for instance).
Use yamllint to validate your YAML file:
$ yamllint --strict example.yaml || echo "Fail"
$
But when converted to JSON with a simple converter script , the data structure of
this simple YAML becomes clearer:
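Reconstructed from the YAML above, the converted output is a single key with one long string value:
{"Store": "Bakery Sourdough loaf Bagels"}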
Parsed without the visual context of line breaks and indentation, the actual scope of your
data looks a lot different. The data is mostly flat, almost devoid of hierarchy. There's no
indication that the sourdough loaf and bagels are children of the name of the store.
Sequence: values listed in a specific order. A sequence starts with a dash and a space (
- ). You can think of a sequence as a Python list or an array in Bash or
Perl.
Mapping: key and value pairs. Each key must be unique, and the order doesn't matter.
Think of a Python dictionary or a variable assignment in a Bash script.
There's a third type called scalar , which is arbitrary data (encoded in
Unicode) such as strings, integers, dates, and so on. In practice, these are the words and
numbers you type when building mapping and sequence blocks, so you won't think about these any
more than you ponder the words of your native tongue.
When constructing YAML, it might help to think of YAML as either a sequence of sequences or
a map of maps, but not both.
YAML mapping blocks
When you start a YAML file with a mapping statement, YAML expects a series of mappings. A
mapping block in YAML doesn't close until it's resolved, and a new mapping block is explicitly
created. A new block can only be created either by increasing the indentation level (in
which case, the new block exists inside the previous block) or by resolving the previous
mapping and starting an adjacent mapping block.
The reason the original YAML example in this article fails to produce data with a hierarchy
is that it's actually only one data block: the key Store has a single value of
Bakery Sourdough loaf Bagels . YAML ignores the whitespace because no new mapping
block has been started.
Is it possible to fix the example YAML by prepending each sequence item with a dash and
space?
---
Store: Bakery
- Sourdough loaf
- Bagels
Again, this is valid YAML, but it's still pretty flat:
The problem is that this YAML file opens a mapping block and never closes it. To close the
Store block and open a new one, you must start a new mapping. The value of
the mapping can be a sequence, but you need a key first.
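One way to write that, sketched here (the Cheesemonger items are only illustrative), is a map of
maps, each holding its own sequence:
---
Store:
  Bakery:
    - Sourdough loaf
    - Bagels
  Cheesemonger:
    - Blue cheese
    - Feta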
As you can see, this YAML directive contains one mapping ( Store ) to two child
values ( Bakery and Cheesemonger ), each of which is mapped to a
child sequence.
YAML sequence blocks
The same principles hold true should you start a YAML directive as a sequence. For instance,
this YAML directive is valid:
---
- Flour
- Water
- Salt
Each item is distinct when viewed as JSON:
["Flour", "Water", "Salt"]
But this YAML file is not valid because it attempts to start a mapping block at an
adjacent level to a sequence block :
---
- Flour
- Water
- Salt
Sugar: caster
It can be repaired by moving the mapping block into the sequence:
---
- Flour
- Water
- Salt
- Sugar: caster
You can, as always, embed a sequence into your mapping item:
---
- Flour
- Water
- Salt
- Sugar:
- caster
- granulated
- icing
Viewed through the lens of explicit JSON scoping, that YAML snippet reads like this:
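Reconstructed from the snippet above, the equivalent JSON is:
[
  "Flour",
  "Water",
  "Salt",
  {"Sugar": ["caster", "granulated", "icing"]}
]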
If you want to comfortably write YAML, it's vital to be aware of its data structure. As you
can tell, there's not much you have to remember. You know about mapping and sequence blocks, so
you know everything you need to work with. All that's left is to remember how they do and
do not interact with one another. Happy coding!
This article describes the
different parts of an Ansible playbook starting with a very broad overview of what Ansible is and how
you can use it. Ansible is a way to use easy-to-read YAML syntax to write playbooks that can automate
tasks for you. These playbooks can range from very simple to very complex and one playbook can even be
embedded in another.
Now that you have that base knowledge, let's look at a basic playbook that will install the
httpd package.
I have an inventory file with two hosts specified, and I placed them in the web group:
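A minimal sketch of such an inventory (the hostnames are the ones used in the output below):
[web]
ansibleclient.usersys.redhat.com
ansibleclient2.usersys.redhat.com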
Let's look at the actual
playbook to see what it contains:
[root@ansible test]# cat httpd.yml
---
- name: this playbook will install httpd
  hosts: web
  tasks:
    - name: this is the task to install httpd
      yum:
        name: httpd
        state: latest
Breaking this down, you see that the first line in the playbook is --- . This lets you know that
it is the beginning of the playbook. Next, I gave a name for the play. This is just a simple
playbook with only one play, but a more complex playbook can contain multiple plays. Next, I
specify the hosts that I want to target. In this case, I am selecting the web group, but I could
have specified either ansibleclient.usersys.redhat.com or ansibleclient2.usersys.redhat.com
instead if I didn't want to target both systems. The next line tells Ansible that you're going to
get into the tasks that do the actual work. In this case, my playbook has only one task, but you
can have multiple tasks if you want. Here I specify that I'm going to install the httpd package.
The next line says that I'm going to use the yum module. I then tell it the name of the package,
httpd , and that I want the latest version to be installed.
When I run the httpd.yml playbook twice, I get this on the terminal:
[root@ansible test]# ansible-playbook httpd.yml
PLAY [this playbook will install httpd] ************************************************************************************************************
TASK [Gathering Facts] *****************************************************************************************************************************
ok: [ansibleclient.usersys.redhat.com]
ok: [ansibleclient2.usersys.redhat.com]
TASK [this is the task to install httpd] ***********************************************************************************************************
changed: [ansibleclient2.usersys.redhat.com]
changed: [ansibleclient.usersys.redhat.com]
PLAY RECAP *****************************************************************************************************************************************
ansibleclient.usersys.redhat.com : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
ansibleclient2.usersys.redhat.com : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
[root@ansible test]# ansible-playbook httpd.yml
PLAY [this playbook will install httpd] ************************************************************************************************************
TASK [Gathering Facts] *****************************************************************************************************************************
ok: [ansibleclient.usersys.redhat.com]
ok: [ansibleclient2.usersys.redhat.com]
TASK [this is the task to install httpd] ***********************************************************************************************************
ok: [ansibleclient.usersys.redhat.com]
ok: [ansibleclient2.usersys.redhat.com]
PLAY RECAP *****************************************************************************************************************************************
ansibleclient.usersys.redhat.com : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
ansibleclient2.usersys.redhat.com : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
[root@ansible test]#
Note that in both cases, I received an ok=2, but in the second run of the playbook, nothing was changed. The latest version of httpd was already installed at that point.
To get information about the various modules you can use in a playbook, you can use the ansible-doc command. For example:
[root@ansible test]# ansible-doc yum
> YUM (/usr/lib/python3.6/site-packages/ansible/modules/packaging/os/yum.py)
Installs, upgrade, downgrades, removes, and lists packages and groups with the `yum' package manager. This module only works on Python 2. If you require Python 3 support, see the [dnf] module.
* This module is maintained by The Ansible Core Team
* note: This module has a corresponding action plugin.
< output truncated >
It's nice to have a playbook that installs httpd, but to make it more flexible, you can use variables instead of hardcoding the package as httpd. To do that, you could use a playbook like this one:
[root@ansible test]# cat httpd.yml
---
- name: this playbook will install {{ myrpm }}
hosts: web
vars:
myrpm: httpd
tasks:
- name: this is the task to install {{ myrpm }}
yum:
name: "{{ myrpm }}"
state: latest
Here you can see that I've added a section called "vars" and I declared a variable myrpm with the value of httpd. I then can use that myrpm variable in the playbook and adjust it to whatever I want to install. Also, because I've specified the RPM to install by using a variable, I can override what I have written in the playbook by specifying the variable on the command line by using -e:
[root@ansible test]# cat httpd.yml
---
- name: this playbook will install {{ myrpm }}
hosts: web
vars:
myrpm: httpd
tasks:
- name: this is the task to install {{ myrpm }}
yum:
name: "{{ myrpm }}"
state: latest
[root@ansible test]# ansible-playbook httpd.yml -e "myrpm=at"
PLAY [this playbook will install at] ***************************************************************************************************************
TASK [Gathering Facts] *****************************************************************************************************************************
ok: [ansibleclient.usersys.redhat.com]
ok: [ansibleclient2.usersys.redhat.com]
TASK [this is the task to install at] **************************************************************************************************************
changed: [ansibleclient2.usersys.redhat.com]
changed: [ansibleclient.usersys.redhat.com]
PLAY RECAP *****************************************************************************************************************************************
ansibleclient.usersys.redhat.com : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
ansibleclient2.usersys.redhat.com : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
[root@ansible test]#
Another way to make the tasks more dynamic is to use loops. In this snippet, you can see that I have declared rpms as a list containing mailx and postfix. To use them, I use loop in my task:
vars:
rpms:
- mailx
- postfix
tasks:
- name: this will install the rpms
yum:
name: "{{ item }}"
state: installed
loop: "{{ rpms }}"
You might have noticed that when these plays run, facts about the hosts are gathered (the "Gathering Facts" task in the output above). These facts can be used as variables when you run the play. For example, you could have a motd.yml file that sets content like:
"This is the system {{ ansible_facts['fqdn'] }}.
This is a {{ ansible_facts['distribution'] }} version {{ ansible_facts['distribution_version'] }} system."
For any system where you run
that playbook, the correct fully-qualified domain name (FQDN), operating system distribution, and
distribution version would get set, even without you manually defining those variables.
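One way to put those facts to use is a small play that writes /etc/motd with the copy module. Treat the following as an illustrative sketch rather than the article's exact motd.yml:
---
- name: set a descriptive message of the day
  hosts: web
  tasks:
    - name: write /etc/motd from gathered facts
      copy:
        dest: /etc/motd
        content: |
          This is the system {{ ansible_facts['fqdn'] }}.
          This is a {{ ansible_facts['distribution'] }} version {{ ansible_facts['distribution_version'] }} system.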
Useful for provisioning multiple servers that use traditional authentication (not LDAP) and for synchronizing user accounts between multiple versions of Linux. It can also be used for "normalizing" servers after the acquisition of another company, changing UID and GID on the fly on multiple servers, etc. It can also be used for provisioning computational nodes on small and medium HPC clusters that use traditional authentication instead of LDAP.
Useful for transferring sets of directory trees with huge files over WAN links. Huge archives can be split into chunks of a certain size, for example 5TB, and organized into a directory. Files are sorted into N piles, where N is specified as a parameter, and each pile is transmitted via its own TCP connection. This is useful for transmission over WAN lines with high latency. On WAN links with 100 ms latency I achieved results comparable with Aspera using 8 channels of transmission.
Useful for large RAID5 arrays without a spare drive, or other RAID configurations with limited redundancy and critical data stored. Currently works with Dell DRAC only, which should be configured for passwordless ssh login from the server that runs this utility. It detects that a disk in the RAID5 array has failed, informs the most recent users (by default, those who logged in during the last two months), and then, if not cancelled, shuts down the server after the "waiting" period (default is five days).
The utility lists all users who were inactive for the specified number of days (default is 365). It calculates inode usage too. It can execute simple commands for each dormant user (lock or delete the account) and generates a text file with the list (one user per line) that can be used for more complex operations.
So I started writing simple code in a file that could be interpreted by perl to make the
changes for me with one command per line:
uc mail_owner          # "uc" is the command for "uncomment"
uc hostname
cv hostname {{fqdn}}   # "cv" is the command for "change value"; {{fqdn}} is replaced with the appropriate value
...
You get the idea. I started writing some code to interpret my config file modification
commands and then realized someone had to have tackled this problem before. I did a search on
metacpan but came up empty. Anyone familiar with this problem space and can help point me in
the right direction?
by likbez on Oct 05, 2020:
There are also some newer editors that use LUA as the scripting language, but none with
Perl as a scripting language. See
https://www.slant.co/topics/7340/~open-source-programmable-text-editors
Here, for example, is a fragment from an old collection of hardening scripts called Titan, written for Solaris by Brad M. Powell. The example below uses ed, which is the simplest, but probably not the optimal choice unless your primary editor is vi/VIM.
FixHostsEquiv() {
  if [ -f /etc/hosts.equiv -a -s /etc/hosts.equiv ]; then
    t_echo 2 " /etc/hosts.equiv exists and is not empty. Saving a copy..."
    /bin/cp /etc/hosts.equiv /etc/hosts.equiv.ORIG
    if grep -s "^+$" /etc/hosts.equiv
    then
      ed - /etc/hosts.equiv <<- !
g/^+$/d
w
q
!
    fi
  else
    t_echo 2 " No /etc/hosts.equiv - PASSES CHECK"
    exit 1
  fi
}
For VIM/Emacs users the main benefit here is that you will know your editor better,
instead of inventing/learning "yet another tool." That actually also is an argument against
Ansible and friends: unless you operate a cluster or other sizable set of servers, why try to
kill a bird with a cannon. Positive return on investment probably starts if you manage over 8
or even 16 boxes.
Perl can also be used. But I would recommend slurping the file into an array and operating on lines as in an editor; a regex over the whole text is more difficult to write correctly than a regex for a single line, although experts have no difficulty using just them. But we seldom acquire skills we can do without :-)
On the other hand, that gives you a chance to learn the splice function ;-)
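A minimal sketch of that line-oriented style in Perl (the file name and the specific edits below are made up for illustration): slurp the file into an array, apply a per-line regex, and use splice to insert a line.
#!/usr/bin/perl
# Sketch: line-oriented editing of a config file slurped into an array.
use strict;
use warnings;

my $file = '/etc/postfix/main.cf';            # hypothetical target file
open my $in, '<', $file or die "open $file: $!";
my @lines = <$in>;                            # slurp the whole file into an array of lines
close $in;

# "Uncomment" a directive, one line at a time.
s/^#\s*(mail_owner\b)/$1/ for @lines;

# Insert a new line after the 10th line using splice.
splice @lines, 10, 0, "myhostname = example.com\n";

open my $out, '>', "$file.new" or die "open $file.new: $!";
print {$out} @lines;
close $out;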
If the files are basically identical and need some slight customization you can use the patch utility with pdsh, but you need to learn the ropes. Like Perl, the patch utility was also written by Larry Wall and is a very flexible tool for such tasks. You first need to collect files from your servers into some central directory with pdsh/pdcp (which I think is a standard RPM on RHEL and other Linuxes) or another tool, then create a diff against one server to which you have already applied the change (diff is your command language at this point), verify on another server that this diff produces the right results, apply it, and then distribute the resulting files back to each server, again using pdsh/pdcp. If you have a common NFS/GPFS/Lustre filesystem for all servers this is even simpler, as you can store both the tree and the diffs on the common filesystem.
The same central repository of config files can be used with vi and other approaches, creating a "poor man's Ansible" for you.
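A rough sketch of that workflow, assuming the rpdcp reverse-copy mode shipped with pdsh and purely illustrative host and file names:
# 1. Gather /etc/ntp.conf from each server; rpdcp appends the hostname to each copy
rpdcp -w host1,host2,host3 /etc/ntp.conf /srv/cfg/

# 2. Edit one reference copy by hand, then capture the change as a unified diff
cp /srv/cfg/ntp.conf.host1 /srv/cfg/ntp.conf.new
vi /srv/cfg/ntp.conf.new
diff -u /srv/cfg/ntp.conf.host1 /srv/cfg/ntp.conf.new > /srv/cfg/ntp.conf.patch

# 3. Check that the same diff applies cleanly to another server's copy
patch --dry-run /srv/cfg/ntp.conf.host2 /srv/cfg/ntp.conf.patch

# 4. Apply the patch to every collected copy, then push the files back with pdcp/scp
for f in /srv/cfg/ntp.conf.host*; do patch "$f" /srv/cfg/ntp.conf.patch; done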
Modular Perl in Red Hat Enterprise Linux 8 By Petr Pisar May 16, 2019
Red Hat Enterprise
Linux 8 comes with
modules as a packaging concept that allows system administrators to select the desired
software version from multiple packaged versions. This article will show you how to manage Perl
as a module.
Installing from a default stream
Let's install Perl:
# yum --allowerasing install perl
Last metadata expiration check: 1:37:36 ago on Tue 07 May 2019 04:18:01 PM CEST.
Dependencies resolved.
==========================================================================================
Package Arch Version Repository Size
==========================================================================================
Installing:
perl x86_64 4:5.26.3-416.el8 rhel-8.0.z-appstream 72 k
Installing dependencies:
[ ]
Transaction Summary
==========================================================================================
Install 147 Packages
Total download size: 21 M
Installed size: 59 M
Is this ok [y/N]: y
[ ]
perl-threads-shared-1.58-2.el8.x86_64
Complete!
Next, check which Perl you have:
$ perl -V:version
version='5.26.3';
You have Perl version 5.26.3. This is the default version, supported for the next 10 years, and if you are fine with it, you don't have to know anything about modules. But what if you want to try a different version?
Let's find out what Perl modules are available using the yum module list
command:
# yum module list
Last metadata expiration check: 1:45:10 ago on Tue 07 May 2019 04:18:01 PM CEST.
[ ]
Name Stream Profiles Summary
[ ]
parfait 0.5 common Parfait Module
perl 5.24 common [d], Practical Extraction and Report Languag
minimal e
perl 5.26 [d] common [d], Practical Extraction and Report Languag
minimal e
perl-App-cpanminus 1.7044 [d] common [d] Get, unpack, build and install CPAN mod
ules
perl-DBD-MySQL 4.046 [d] common [d] A MySQL interface for Perl
perl-DBD-Pg 3.7 [d] common [d] A PostgreSQL interface for Perl
perl-DBD-SQLite 1.58 [d] common [d] SQLite DBI driver
perl-DBI 1.641 [d] common [d] A database access API for Perl
perl-FCGI 0.78 [d] common [d] FastCGI Perl bindings
perl-YAML 1.24 [d] common [d] Perl parser for YAML
php 7.2 [d] common [d], PHP scripting language
devel, minim
al
[ ]
Here you can see a Perl module is available in versions 5.24 and 5.26. Those are called
streams in the modularity world, and they denote an independent variant, usually a
different version, of the same software stack. The [d] flag marks a default stream.
That means if you do not explicitly enable a different stream, the default one will be used.
That explains why yum installed Perl 5.26.3 and not some of the 5.24 micro versions.
Now suppose you have an old application that you are migrating from Red Hat Enterprise Linux 7, which was running in the rh-perl524 software collection environment, and you want to give it a try on Red Hat Enterprise Linux 8. Let's try Perl 5.24 on Red Hat Enterprise Linux 8.
Enabling a Stream
First, switch the Perl module to the 5.24 stream:
# yum module enable perl:5.24
Last metadata expiration check: 2:03:16 ago on Tue 07 May 2019 04:18:01 PM CEST.
Problems in request:
Modular dependency problems with Defaults:
Problem 1: conflicting requests
- module freeradius:3.0:8000020190425181943:75ec4169-0.x86_64 requires module(perl:5.26), but none of the providers can be installed
- module perl:5.26:820181219174508:9edba152-0.x86_64 conflicts with module(perl:5.24) provided by perl:5.24:820190207164249:ee766497-0.x86_64
- module perl:5.24:820190207164249:ee766497-0.x86_64 conflicts with module(perl:5.26) provided by perl:5.26:820181219174508:9edba152-0.x86_64
Problem 2: conflicting requests
- module freeradius:3.0:820190131191847:fbe42456-0.x86_64 requires module(perl:5.26), but none of the providers can be installed
- module perl:5.26:820181219174508:9edba152-0.x86_64 conflicts with module(perl:5.24) provided by perl:5.24:820190207164249:ee766497-0.x86_64
- module perl:5.24:820190207164249:ee766497-0.x86_64 conflicts with module(perl:5.26) provided by perl:5.26:820181219174508:9edba152-0.x86_64
Dependencies resolved.
==========================================================================================
Package Arch Version Repository Size
==========================================================================================
Enabling module streams:
perl 5.24
Transaction Summary
==========================================================================================
Is this ok [y/N]: y
Complete!
Switching module streams does not alter installed packages (see 'module enable' in dnf(8)
for details)
Here you can see a warning that the freeradius:3.0 stream is not compatible with
perl:5.24 . That's because FreeRADIUS was built for Perl 5.26 only. Not all modules
are compatible with all other modules.
Next, you can see a confirmation for enabling the Perl 5.24 stream. And, finally, there is
another warning about installed packages. The last warning means that the system still can have
installed RPM packages from the 5.26 stream, and you need to explicitly sort it out.
Changing modules and changing packages are two separate phases. You can fix it by
synchronizing a distribution content like this:
# yum --allowerasing distrosync
Last metadata expiration check: 0:00:56 ago on Tue 07 May 2019 06:33:36 PM CEST.
Modular dependency problems:
Problem 1: module freeradius:3.0:8000020190425181943:75ec4169-0.x86_64 requires module(perl:5.26), but none of the providers can be installed
- module perl:5.26:820181219174508:9edba152-0.x86_64 conflicts with module(perl:5.24) provided by perl:5.24:820190207164249:ee766497-0.x86_64
- module perl:5.24:820190207164249:ee766497-0.x86_64 conflicts with module(perl:5.26) provided by perl:5.26:820181219174508:9edba152-0.x86_64
- conflicting requests
Problem 2: module freeradius:3.0:820190131191847:fbe42456-0.x86_64 requires module(perl:5.26), but none of the providers can be installed
- module perl:5.26:820181219174508:9edba152-0.x86_64 conflicts with module(perl:5.24) provided by perl:5.24:820190207164249:ee766497-0.x86_64
- module perl:5.24:820190207164249:ee766497-0.x86_64 conflicts with module(perl:5.26) provided by perl:5.26:820181219174508:9edba152-0.x86_64
- conflicting requests
Dependencies resolved.
==========================================================================================
Package Arch Version Repository Size
==========================================================================================
[ ]
Downgrading:
perl x86_64 4:5.24.4-403.module+el8+2770+c759b41a
rhel-8.0.z-appstream 6.1 M
[ ]
Transaction Summary
==========================================================================================
Upgrade 69 Packages
Downgrade 66 Packages
Total download size: 20 M
Is this ok [y/N]: y
[ ]
Complete!
And try the perl command again:
$ perl -V:version
version='5.24.4';
Great! It works. We switched to a different Perl version, and the different Perl is still
invoked with the perl command and is installed to a standard path (
/usr/bin/perl ). No scl enable incantation is needed, in contrast to the
software collections.
You may have noticed the repeated warning about FreeRADIUS. A future YUM update is going to clean up the unnecessary warning. Despite that, I can show you that other Perl-ish modules are compatible with any Perl stream.
Dependent modules
Let's say the old application mentioned before is using DBD::SQLite Perl module.
(This nomenclature is a little ambiguous: Red Hat Enterprise Linux has modules; Perl has
modules. If I want to emphasize the difference, I will say the Modularity modules or the CPAN
modules.) So, let's install CPAN's DBD::SQLite module. Yum can search in packaged CPAN modules, so give it a try:
# yum --allowerasing install 'perl(DBD::SQLite)'
[ ]
Dependencies resolved.
==========================================================================================
Package Arch Version Repository Size
==========================================================================================
Installing:
perl-DBD-SQLite x86_64 1.58-1.module+el8+2519+e351b2a7 rhel-8.0.z-appstream 186 k
Installing dependencies:
perl-DBI x86_64 1.641-2.module+el8+2701+78cee6b5 rhel-8.0.z-appstream 739 k
Enabling module streams:
perl-DBD-SQLite 1.58
perl-DBI 1.641
Transaction Summary
==========================================================================================
Install 2 Packages
Total download size: 924 k
Installed size: 2.3 M
Is this ok [y/N]: y
[ ]
Installed:
perl-DBD-SQLite-1.58-1.module+el8+2519+e351b2a7.x86_64
perl-DBI-1.641-2.module+el8+2701+78cee6b5.x86_64
Complete!
Here you can see DBD::SQLite CPAN module was found in the perl-DBD-SQLite RPM
package that's part of perl-DBD-SQLite:1.58 module, and apparently it requires some
dependencies from the perl-DBI:1.641 module, too. Thus, yum asked for enabling the
streams and installing the packages.
Before playing with DBD::SQLite under Perl 5.24, take a look at the listing of the
Modularity modules and compare it with what you saw the first time:
# yum module list
[ ]
parfait 0.5 common Parfait Module
perl 5.24 [e] common [d], Practical Extraction and Report Languag
minimal e
perl 5.26 [d] common [d], Practical Extraction and Report Languag
minimal e
perl-App-cpanminus 1.7044 [d] common [d] Get, unpack, build and install CPAN mod
ules
perl-DBD-MySQL 4.046 [d] common [d] A MySQL interface for Perl
perl-DBD-Pg 3.7 [d] common [d] A PostgreSQL interface for Perl
perl-DBD-SQLite 1.58 [d][e] common [d] SQLite DBI driver
perl-DBI 1.641 [d][e] common [d] A database access API for Perl
perl-FCGI 0.78 [d] common [d] FastCGI Perl bindings
perl-YAML 1.24 [d] common [d] Perl parser for YAML
php 7.2 [d] common [d], PHP scripting language
devel, minim
al
[ ]
Notice that perl:5.24 is enabled ( [e] ) and thus takes precedence over perl:5.26,
which would otherwise be a default one ( [d] ). Other enabled Modularity modules are
perl-DBD-SQLite:1.58 and perl-DBI:1.641. Those were enabled when you installed DBD::SQLite.
These two modules have no other streams.
In general, any module can have multiple streams. At most, one stream of a module can be the
default one. And, at most, one stream of a module can be enabled. An enabled stream takes
precedence over a default one. If there is no enabled or a default stream, content of the
module is unavailable.
If, for some reason, you need to disable a stream, even a default one, you do that with
yum module disable MODULE:STREAM command.
Enough theory, back to some productive work. You are ready to test the DBD::SQLite CPAN
module now. Let's create a test database, a foo table inside with one textual
column called bar , and let's store a row with Hello text there:
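The creation step can be done with a DBI one-liner along these lines (a sketch in the same style as the query below, not necessarily the article's exact command):
$ perl -MDBI -e '$dbh=DBI->connect(q{dbi:SQLite:dbname=test}); $dbh->do(q{CREATE TABLE foo (bar text)}); $dbh->do(q{INSERT INTO foo VALUES(?)}, undef, q{Hello})'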
Next, verify the Hello string was indeed stored by querying the database:
$ perl -MDBI -e '$dbh=DBI->connect(q{dbi:SQLite:dbname=test}); print $dbh->selectrow_array(q{SELECT bar FROM foo}), qq{\n}'
Hello
It seems DBD::SQLite works.
Non-modular packages may not work with non-default
streams
So far, everything is great and working. Now I will show what happens if you try to install
an RPM package that has not been modularized and is thus compatible only with the default Perl,
perl:5.26:
# yum --allowerasing install 'perl(LWP)'
[ ]
Error:
Problem: package perl-libwww-perl-6.34-1.el8.noarch requires perl(:MODULE_COMPAT_5.26.2), but none of the providers can be installed
- cannot install the best candidate for the job
- package perl-libs-4:5.26.3-416.el8.i686 is excluded
- package perl-libs-4:5.26.3-416.el8.x86_64 is excluded
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
Yum will report an error about perl-libwww-perl RPM package being incompatible. The
LWP CPAN module that is packaged as perl-libwww-perl is built only for Perl 5.26, and
therefore RPM dependencies cannot be satisfied. When a perl:5.24 stream is enabled, the
packages from perl:5.26 stream are masked and become unavailable. However, this masking does
not apply to non-modular packages, like perl-libwww-perl. There are plenty of packages that
were not modularized yet. If you need some of them to be available and compatible with a
non-default stream (e.g., not only with perl:5.26 but also with perl:5.24) do not hesitate to
contact Red Hat support team
with your request.
Resetting a module
Let's say you tested your old application and now you want to find out if it works with the
new Perl 5.26.
To do that, you need to switch back to the perl:5.26 stream. Unfortunately, switching from
an enabled stream back to a default or to a yet another non-default stream is not
straightforward. You'll need to perform a module reset:
# yum module reset perl
[ ]
Dependencies resolved.
==========================================================================================
Package Arch Version Repository Size
==========================================================================================
Resetting module streams:
perl 5.24
Transaction Summary
==========================================================================================
Is this ok [y/N]: y
Complete!
Well, that did not hurt. Now you can synchronize the distribution again to replace the 5.24
RPM packages with 5.26 ones:
# yum --allowerasing distrosync
[ ]
Transaction Summary
==========================================================================================
Upgrade 65 Packages
Downgrade 71 Packages
Total download size: 22 M
Is this ok [y/N]: y
[ ]
After that, you can check the Perl version:
$ perl -V:version
version='5.26.3';
And, check the enabled modules:
# yum module list
[ ]
parfait 0.5 common Parfait Module
perl 5.24 common [d], Practical Extraction and Report Languag
minimal e
perl 5.26 [d] common [d], Practical Extraction and Report Languag
minimal e
perl-App-cpanminus 1.7044 [d] common [d] Get, unpack, build and install CPAN mod
ules
perl-DBD-MySQL 4.046 [d] common [d] A MySQL interface for Perl
perl-DBD-Pg 3.7 [d] common [d] A PostgreSQL interface for Perl
perl-DBD-SQLite 1.58 [d][e] common [d] SQLite DBI driver
perl-DBI 1.641 [d][e] common [d] A database access API for Perl
perl-FCGI 0.78 [d] common [d] FastCGI Perl bindings
perl-YAML 1.24 [d] common [d] Perl parser for YAML
php 7.2 [d] common [d], PHP scripting language
devel, minim
al
[ ]
As you can see, we are back at square one. The perl:5.24 stream is not enabled, and
perl:5.26 is the default and therefore preferred. Only perl-DBD-SQLite:1.58 and perl-DBI:1.641
streams remained enabled. It does not matter much because those are the only streams.
Nonetheless, you can reset them back using yum module reset perl-DBI
perl-DBD-SQLite if you like.
Multi-context streams
What happened with the DBD::SQLite? It's still there and working:
$ perl -MDBI -e '$dbh=DBI->connect(q{dbi:SQLite:dbname=test}); print $dbh->selectrow_array(q{SELECT bar FROM foo}), qq{\n}'
Hello
That is possible because the perl-DBD-SQLite module is built for both 5.24 and 5.26 Perls.
We call these modules multi-contextual . That's the case for perl-DBD-SQLite or
perl-DBI, but not the case for FreeRADIUS, which explains the warning you saw earlier. If you
want to see these low-level details, such as which contexts are available, which dependencies are
required, or which packages are contained in a module, you can use the yum module info
MODULE:STREAM command.
Afterword
I hope this tutorial shed some light on modules -- the fresh feature of Red Hat Enterprise
Linux 8 that enables us to provide you with multiple versions of software on top of one Linux
platform. If you need more details, please read the documentation accompanying the product (namely, the user-space component management document and the yum(8) manual page) or ask the support team for help.
1 - Catchall for general errors. The exit code is 1 as the operation was not
successful.
2 - Misuse of shell builtins (according to Bash documentation)
126 - Command invoked cannot execute.
127 - "command not found".
128 - Invalid argument to exit.
128+n - Fatal error signal "n".
130 - Script terminated by Control-C.
255* - Exit status out of range.
There is no "recipe" to get the meanings of an exit status of a given terminal command.
My first attempt would be the manpage:
user@host:~# man ls
Exit status:
0 if OK,
1 if minor problems (e.g., cannot access subdirectory),
2 if serious trouble (e.g., cannot access command-line argument).
Third: the exit statuses of the shell, for example Bash. Bash and its builtins may use values above 125 specially: 127 for command not found, 126 for command not executable. For more information see the Bash exit codes.
Some list of sysexits on both Linux and BSD/OS X with preferable exit codes for programs
(64-78) can be found in /usr/include/sysexits.h (or: man sysexits on
BSD):
0 /* successful termination */
64 /* base value for error messages */
64 /* command line usage error */
65 /* data format error */
66 /* cannot open input */
67 /* addressee unknown */
68 /* host name unknown */
69 /* service unavailable */
70 /* internal software error */
71 /* system error (e.g., can't fork) */
72 /* critical OS file missing */
73 /* can't create (user) output file */
74 /* input/output error */
75 /* temp failure; user is invited to retry */
76 /* remote error in protocol */
77 /* permission denied */
78 /* configuration error */
/* maximum listed value */
The above list allocates previously unused exit codes from 64-78. The range of unallotted
exit codes will be further restricted in the future.
However, the above values are mainly used in sendmail and by pretty much nobody else, so they aren't anything remotely close to a standard (as pointed out by @Gilles).
In the shell, the exit statuses are as follows (based on Bash):
1 - 125 - Command did not complete successfully. Check the command's man page for the meaning of the status; a few examples below:
1 - Catchall for general errors
Miscellaneous errors, such as "divide by zero" and other impermissible operations.
Example:
$ let "var1 = 1/0"; echo $?
-bash: let: var1 = 1/0: division by 0 (error token is "0")
1
2 - Misuse of shell builtins (according to Bash documentation)
Missing keyword or command, or permission problem (and diff return code on a failed
binary file comparison).
Example:
empty_function() {}
6 - No such device or address
Example:
$ curl foo; echo $?
curl: (6) Could not resolve host: foo
6
128 - 254 - Fatal error signal "n": the command died due to receiving a signal. The signal code is added to 128 (128 + SIGNAL) to get the status (Linux: man 7 signal, BSD: man signal); a few examples below:
130 - command terminated due to Ctrl-C being pressed, 130-128=2
(SIGINT)
Example:
$ cat
^C
$ echo $?
130
137 - the command was sent the KILL(9) signal (128+9); otherwise the command's own exit status is used.
255* - Exit status out of range; exit takes only integer args in the range 0 - 255.
Example:
$ sh -c 'exit 3.14159'; echo $?
sh: line 0: exit: 3.14159: numeric argument required
255
According to the above table, exit codes 1 - 2, 126 - 165, and 255 have special meanings,
and should therefore be avoided for user-specified exit parameters.
Please note that out of range exit values can result in unexpected exit codes (e.g. exit
3809 gives an exit code of 225, 3809 % 256 = 225).
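A quick way to see that wrap-around for yourself (a trivial illustration):
$ bash -c 'exit 3809'; echo $?
225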
You will have to look into the code/documentation. However, the thing that comes closest to a "standardization" is errno.h.
Use
the Bash shell in Linux to manage foreground and background processes. You can use Bash's job control functions and
signals to give you more flexibility in how you run commands. We show you how.
All About Processes
Whenever a program is executed in a Linux or Unix-like operating system, a process is started. "Process" is the name for the internal representation of the executing program in the computer's memory. There is a process for every active program. In fact, there is a process for nearly everything that is running on your computer. That includes the components of your graphical desktop environment (GDE) such as GNOME or KDE, and system daemons that are launched at start-up.
Why nearly everything that is running? Well, Bash built-ins such as cd, pwd, and alias do not need to have a process launched (or "spawned") when they are run. Bash executes these commands within the instance of the Bash shell that is running in your terminal window. These commands are fast precisely because they don't need to have a process launched for them to execute. (You can type help in a terminal window to see the list of Bash built-ins.)
Processes can be running in the foreground, in which case they take
over your terminal until they have completed, or they can be run in the background. Processes that run in the background
don't dominate the terminal window and you can continue to work in it. Or at least, they don't dominate the terminal
window if they don't generate screen output.
A Messy Example
We'll start a simple ping trace running. We're going to ping the How-To Geek domain. This will execute as a foreground process.
ping www.howtogeek.com
We get the expected results, scrolling down the terminal window. We can't do anything else in the terminal window while ping is running. To terminate the command hit Ctrl+C.
Ctrl+C
The visible effect of the Ctrl+C is highlighted in the screenshot. ping gives a short summary and then stops.
Let's repeat that. But this time we'll hit Ctrl+Z instead of Ctrl+C. The task won't be terminated. It will become a background task. We get control of the terminal window returned to us.
ping www.howtogeek.com
Ctrl+Z
The visible effect of hitting Ctrl+Z is highlighted in the screenshot. This time we are told the process is stopped. Stopped doesn't mean terminated. It's like a car at a stop sign. We haven't scrapped it and thrown it away. It's still on the road, stationary, waiting to go. The process is now a background job.
The jobs command will list the jobs that have been started in the current terminal session. And because jobs are (inevitably) processes, we can also use the ps command to see them. Let's use both commands and compare their outputs. We'll use the T (terminal) option to list only the processes that are running in this terminal window. Note that there is no need to use a hyphen - with the T option.
jobs
ps T
The jobs command tells us:
[1]: The number in square brackets is the job number. We can use this to refer to the job when we need to control it with job control commands.
+: The plus sign shows that this is the job that will be acted upon if we use a job control command without a specific job number. It is called the default job. The default job is always the one most recently added to the list of jobs.
Stopped: The process is not running.
ping www.howtogeek.com: The command line that launched the process.
The ps command tells us:
PID: The process ID of the process. Each process has a unique ID.
TTY: The pseudo-teletype (terminal window) that the process was executed from.
STAT: The status of the process.
TIME: The amount of CPU time consumed by the process.
COMMAND: The command that launched the process.
These are common values for the STAT column:
D: Uninterruptible sleep. The process is in a waiting state, usually waiting for input or output, and cannot be interrupted.
I: Idle.
R: Running.
S: Interruptible sleep.
T: Stopped by a job control signal.
Z: A zombie process. The process has been terminated but hasn't been "cleaned down" by its parent process.
The value in the STAT column can be followed by one of these extra indicators:
<: High-priority task (not nice to other processes).
N: Low-priority (nice to other processes).
L: The process has pages locked into memory (typically used by real-time processes).
s: A session leader. A session leader is a process that has launched process groups. A shell is a session leader.
l: A multi-thread process.
+: A foreground process.
We can see that Bash has a state of Ss. The uppercase "S" tells us the Bash shell is sleeping, and it is interruptible. As soon as we need it, it will respond. The lowercase "s" tells us that the shell is a session leader.
The ping command has a state of T. This tells us that ping has been stopped by a job control signal. In this example, that was the Ctrl+Z we used to put it into the background.
The ps T command has a state of R, which stands for running. The + indicates that this process is a member of the foreground group. So the ps T command is running in the foreground.
The bg Command
The bg command is used to resume a background process. It can be used with or without a job number. If you use it without a job number the default job is resumed. The process still runs in the background. You cannot send any input to it.
If we issue the bg command, we will resume our ping command:
bg
The ping command resumes and we see the scrolling output in the terminal window once more. The name of the command that has been restarted is displayed for you. This is highlighted in the screenshot.
But we have a problem. The task is running in the background and won't accept input. So how do we stop it? Ctrl+C doesn't do anything. We can see it when we type it but the background task doesn't receive those keystrokes so it keeps pinging merrily away.
In fact, we're now in a strange blended mode. We can type in the terminal window but what we type is quickly swept away by the scrolling output from the ping command. Anything we type takes effect in the foreground.
To stop our background task we need to bring it to the foreground
and then stop it.
The fg Command
The fg command will bring a background task into the foreground. Just like the bg command, it can be used with or without a job number. Using it with a job number means it will operate on a specific job. If it is used without a job number the last command that was sent to the background is used.
If we type fg our ping command will be brought to the foreground. The characters we type are mixed up with the output from the ping command, but they are operated on by the shell as if they had been entered on the command line as usual. And in fact, from the Bash shell's point of view, that is exactly what has happened.
fg
And now that we have the ping command running in the foreground once more, we can use Ctrl+C to kill it.
Ctrl+C
We Need to Send the Right Signals
That wasn't exactly pretty. Evidently running a process in the background works best when the process doesn't produce output and doesn't require input.
But, messy or not, our example did accomplish:
Putting a process into the background.
Restoring the process to a running state in the background.
Returning the process to the foreground.
Terminating the process.
When you use Ctrl+C and Ctrl+Z, you are sending signals to the process. These are shorthand ways of using the kill command. There are 64 different signals that kill can send. Use kill -l at the command line to list them. kill isn't the only source of these signals. Some of them are raised automatically by other processes within the system.
Here are some of the commonly used ones.
SIGHUP: Signal 1. Automatically sent to a process when the terminal it is running in is closed.
SIGINT: Signal 2. Sent to a process when you hit Ctrl+C. The process is interrupted and told to terminate.
SIGQUIT: Signal 3. Sent to a process if the user sends the quit signal, Ctrl+\.
SIGKILL: Signal 9. The process is immediately killed and will not attempt to close down cleanly. The process does not go down gracefully.
SIGTERM: Signal 15. This is the default signal sent by kill. It is the standard program termination signal.
SIGTSTP: Signal 20. Sent to a process when you use Ctrl+Z. It stops the process and puts it in the background.
We must use the
kill
command
to issue signals that do not have key combinations assigned to them.
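For example (purely illustrative), to send a hang-up signal to job number 1 by signal name:
kill -s SIGHUP %1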
Further Job Control
A process moved into the background by using Ctrl+Z is placed in the stopped state. We have to use the bg command to start it running again. To launch a program as a running background process is simple. Append an ampersand & to the end of the command line.
Although it is best that background processes do not write to the terminal window, we're going to use examples that do. We need to have something in the screenshots that we can refer to. This command will start an endless loop as a background process:
while true; do echo "How-To Geek Loop Process"; sleep 3; done &
We are told the job number and process ID of the process. Our
job number is 1, and the process id is 1979. We can use these identifiers to control the process.
The output from our endless loop starts to appear in the terminal
window. As before, we can use the command line but any commands we issue are interspersed with the output from the loop
process.
ls
To stop our process we can use jobs to remind ourselves what the job number is, and then use kill.
jobs reports that our process is job number 1. To use that number with kill we must precede it with a percent sign %.
jobs
kill %1
kill sends the SIGTERM signal, signal number 15, to the process and it is terminated. When the Enter key is next pressed, a status of the job is shown. It lists the process as "terminated." If the process does not respond to the kill command you can take it up a notch. Use kill with SIGKILL, signal number 9. Just put -9 between the kill command and the job number.
kill -9 %1
Things We've Covered
Ctrl+C: Sends SIGINT, signal 2, to the process -- if it is accepting input -- and tells it to terminate.
Ctrl+\: Sends SIGQUIT, signal 3, to the process -- if it is accepting input -- and tells it to quit.
Ctrl+Z: Sends SIGTSTP, signal 20, to the process and tells it to stop (suspend) and become a background process.
jobs: Lists the background jobs and shows their job number.
bg job_number: Restarts a background process. If you don't provide a job number the last process that was turned into a background task is used.
fg job_number: Brings a background process into the foreground and restarts it. If you don't provide a job number the last process that was turned into a background task is used.
commandline &: Adding an ampersand & to the end of a command line executes that command as a running background task.
kill %job_number: Sends SIGTERM, signal 15, to the process to terminate it.
kill -9 %job_number: Sends SIGKILL, signal 9, to the process and terminates it abruptly.
When you launch tmux, the obvious result is that it starts a new shell in the same window with a status bar along the bottom. There's more going on, though, and you can see it with this little experiment. First, do something in your current terminal to help you tell it apart from another empty terminal:
$ echo hello
hello
Now press Ctrl+B followed by C on your keyboard. It might look like your work has vanished,
but actually, you've created what tmux calls a window (which can be, admittedly,
confusing because you probably also call the terminal you launched a window ). Thanks to
tmux, you actually have two windows open, both of which you can see listed in the status bar at
the bottom of tmux. You can navigate between these two windows by index number. For instance,
press Ctrl+B followed by 0 to go to the initial window:
$ echo hello
hello
Press Ctrl+B followed by 1 to go to the first new window you created.
You can also "walk" through your open windows using Ctrl+B and N (for Next) or P (for
Previous).
The tmux trigger and commands
The keyboard shortcut Ctrl+B is the tmux trigger. When you press it in a tmux session, it
alerts tmux to "listen" for the next key or key combination that follows. All tmux shortcuts,
therefore, are prefixed with Ctrl+B .
You can also access a tmux command line and type tmux commands by name. For example, to
create a new window the hard way, you can press Ctrl+B followed by : to enter the tmux command
line. Type new-window and press Enter to create a new window. This does exactly
the same thing as pressing Ctrl+B then C .
Splitting windows into panes
Once you have created more than one window in tmux, it's often useful to see them all in one
window. You can split a window horizontally (meaning the split is horizontal, placing one
window in a North position and another in a South position) or vertically (with windows located
in West and East positions).
To create a horizontal split, press Ctrl+B followed by " (that's a double-quote).
To create a vertical split, press Ctrl+B followed by % (percent).
You can split windows that have been split, so the layout is up to you and the number of
lines in your terminal.
Sometimes things can get out of hand. You can adjust a terminal full of haphazardly split
panes using these quick presets:
Ctrl+B Alt+1 : Even horizontal splits
Ctrl+B Alt+2 : Even vertical splits
Ctrl+B Alt+3 : Horizontal span for the main pane, vertical splits for lesser panes
Ctrl+B Alt+4 : Vertical span for the main pane, horizontal splits for lesser panes
Ctrl+B Alt+5 : Tiled layout
Switching between panes
To get from one pane to another, press Ctrl+B followed by O (as in other ). The
border around the pane changes color based on your position, and your terminal cursor changes
to its active state. This method "walks" through panes in order of creation.
Alternatively, you can use your arrow keys to navigate to a pane according to your layout.
For example, if you've got two open panes divided by a horizontal split, you can press Ctrl+B
followed by the Up arrow to switch from the lower pane to the top pane. Likewise, Ctrl+B
followed by the Down arrow switches from the upper pane to the lower one.
Running a
command on multiple hosts with tmux
Now that you know how to open many windows and divide them into convenient panes, you know
nearly everything you need to know to run one command on multiple hosts at once. Assuming you
have a layout you're happy with and each pane is connected to a separate host, you can
synchronize the panes such that the input you type on your keyboard is mirrored in all
panes.
To synchronize panes, access the tmux command line with Ctrl+B followed by : , and then type
setw synchronize-panes .
Now anything you type on your keyboard appears in each pane, and each pane responds
accordingly.
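The same setup can also be scripted from an ordinary shell prompt. A minimal sketch (host1 and host2 are placeholder hostnames) that opens a session with two ssh panes and turns synchronization on:
tmux new-session -d -s multi 'ssh host1'                # detached session, first pane connects to host1
tmux split-window -v -t multi 'ssh host2'               # second pane below, connects to host2
tmux set-window-option -t multi synchronize-panes on    # mirror keystrokes to every pane in the window
tmux attach -t multi                                    # attach and start typing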
Download our cheat sheet
It's relatively easy to remember Ctrl+B to invoke tmux features, but the keys that follow
can be difficult to remember at first. All built-in tmux keyboard shortcuts are available by
pressing Ctrl+B followed by ? (exit the help screen with Q ). However, the help screen can be a
little overwhelming for all its options, none of which are organized by task or topic. To help
you remember the basic features of tmux, as well as many advanced functions not covered in this
article, we've developed a tmux cheatsheet . It's free to
download, so get your copy today.
In this quick tutorial, I want to look at
the
jobs
command
and a few of the ways that we can manipulate the jobs running on our systems. In short, controlling jobs lets you
suspend and resume processes started in your Linux shell.
Jobs
The jobs command will list all jobs started from the current shell: active, stopped, or otherwise. Before I explore the command and output, I'll create a job on my system.
I will use the
sleep
job
as it won't change my system in any meaningful way.
First, I issued the
sleep
command,
and then I received the
Job number
[1].
I
then immediately stopped the job by using
Ctrl+Z
.
Next, I run the
jobs
command
to view the newly created job:
[tcarrigan@rhel ~]$ jobs
[1]+ Stopped sleep 500
You can see that I have a single stopped job
identified by the job number
[1]
.
Other options to know for this command
include:
-l - list PIDs in addition to default info
-n - list only processes that have changed since the last notification
-p - list PIDs only
-r - show only running jobs
-s - show only stopped jobs
Background
Next, I'll resume the
sleep
job
in the background. To do this, I use the
bg
command.
Now, the
bg
command
has a pretty simple syntax, as seen here:
bg [JOB_SPEC]
Where JOB_SPEC can be any of the following:
%n - where
n
is the job number
%abc - refers to a job started by a command beginning with
abc
%?abc - refers to a job started by a command containing
abc
%- - specifies the previous job
NOTE
:
bg
and
fg
operate
on the current job if no JOB_SPEC is provided.
I can move this job to the background by
using the job number
[1]
.
[tcarrigan@rhel ~]$ bg %1
[1]+ sleep 500 &
You can see now that I have a single running
job in the background.
[tcarrigan@rhel ~]$ jobs
[1]+ Running sleep 500 &
Foreground
Now, let's look at how to move a background
job into the foreground. To do this, I use the
fg
command.
The command syntax is the same for the foreground command as with the background command.
fg [JOB_SPEC]
Refer to the above bullets for details on
JOB_SPEC.
I have started a new
sleep
in
the background:
[tcarrigan@rhel ~]$ sleep 500 &
[2] 5599
Now, I'll move it to the foreground by using
the following command:
[tcarrigan@rhel ~]$ fg %2
sleep 500
The fg command has now brought the sleep job back into the foreground, tying up my terminal until it finishes.
The end
While I realize that the jobs presented here
were trivial, these concepts can be applied to more than just the
sleep
command.
If you run into a situation that requires it, you now have the knowledge to move running or stopped jobs from the
foreground to background and back again.
Open the terminal application and then start typing these commands to know your Linux desktop or cloud server/VM.
1. free get free and used memory
Are you running out of memory? Use the free command to show the total amount of free and used physical (RAM) and swap memory in
the Linux system. It also displays the buffers and caches used by the kernel:
free
# human readable outputs
free -h
# use the
cat
command
to find geeky details
cat /proc/meminfo
2. hwinfo probe for hardware
We can quickly probe for the hardware present in the Linux server or desktop:
# Find detailed info about the Linux box
hwinfo
# Show only a summary #
hwinfo --short
# View all disks #
hwinfo --disk
# Get an overview #
hwinfo --short --block
# Find a particular disk #
hwinfo --disk --only /dev/sda
hwinfo --disk --only /dev/sda
# Try 4 graphics card ports for monitor data #
hwprobe=bios.ddc.ports=4 hwinfo --monitor
# Limit info to specific devices #
hwinfo --short --cpu --disk --listmd --gfxcard --wlan --printer
Alternatively, you may find the lshw command and inxi command useful to display your Linux hardware information:
sudo lshw -short
inxi -Fxz
inxi
is system information tool to get system configurations and hardware. It shows system hardware, CPU, drivers, Xorg, Desktop,
Kernel, gcc version(s), Processes, RAM usage, and a wide variety of other useful information [Click to enlarge]
3. id know yourself
Display Linux user and group information for the given USER name. If user name omitted show information for the current user:
id
See who is logged on your Linux server:
who
who am i
4. lsblk list block storage devices
All Linux block devices give buffered access to hardware devices and allow reading and writing blocks as per configuration. Linux
block device has names. For example, /dev/nvme0n1 for NVMe and /dev/sda for SCSI devices such as HDD/SSD. But you don't have to
remember them. You can list them easily using the following syntax:
lsblk
# list only #
lsblk -l
# filter out loop devices using the
grep
command
#
lsblk -l | grep '^loop'
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
md0 9:0 0 3.7G 0 raid1 /boot
md1 9:1 0 949.1G 0 raid1
md1_crypt 253:0 0 949.1G 0 crypt
nixcraft-swap 253:1 0 119.2G 0 lvm [SWAP]
nixcraft-root 253:2 0 829.9G 0 lvm /
nvme1n1 259:0 0 953.9G 0 disk
nvme1n1p1 259:1 0 953M 0 part
nvme1n1p2 259:2 0 3.7G 0 part
nvme1n1p3 259:3 0 949.2G 0 part
nvme0n1 259:4 0 953.9G 0 disk
nvme0n1p1 259:5 0 953M 0 part /boot/efi
nvme0n1p2 259:6 0 3.7G 0 part
nvme0n1p3 259:7 0 949.2G 0 part
5. lsb_release Linux distribution information
Want to get distribution-specific information such as, description of the currently installed distribution, release number and
code name:
lsb_release -a
No LSB modules are available.
6. lscpu display CPU architecture information
The lscpu command gathers and displays CPU architecture information in an easy-to-read format for humans including various CPU
bugs:
lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 158
Model name: Intel(R) Core(TM) i7-9850H CPU @ 2.60GHz
Stepping: 13
CPU MHz: 976.324
CPU max MHz: 4600.0000
CPU min MHz: 800.0000
BogoMIPS: 5199.98
Virtualization: VT-x
L1d cache: 192 KiB
L1i cache: 192 KiB
L2 cache: 1.5 MiB
L3 cache: 12 MiB
NUMA node0 CPU(s): 0-11
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Mitigation; TSX disabled
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_g
ood nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes x
save avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep
bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Cpu can be listed using the lshw command too:
sudo lshw -C cpu
7. lstopo display hardware topology
Want to see the topology of the Linux server or desktop? Try:
lstopo
lstopo-no-graphics
You will see information about:
NUMA memory nodes
shared caches
CPU packages
Processor cores
processor "threads" and more
8. lsusb list usb devices
We all use USB devices, such as external hard drives and keyboards. Run the lsusb command to display information about USB buses in the Linux system and the devices connected to them.
lsusb
# Want a graphical summary of USB devices connected to the system? #
sudo usbview
usbview
provides a graphical summary of USB devices connected to the system. Detailed information may be displayed by selecting
individual devices in the tree display
lspci list PCI devices
We use the lspci command for displaying information about PCI buses in the system and devices connected to them:
lspci
9. timedatectl view current date and time zone
Typically we use the
date
command
to set or get date/time information on the CLI:
date
However, modern Linux distros use the timedatectl command to
query
and change the system
clock and its settings, and enable or disable time synchronization services (NTPD and co):
timedatectl
Local time: Sun 2020-07-26 16:31:10 IST
Universal time: Sun 2020-07-26 11:01:10 UTC
RTC time: Sun 2020-07-26 11:01:10
Time zone: Asia/Kolkata (IST, +0530)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
10. w who is logged in
Run the
w
command
on Linux to see information about the Linux users currently on the machine, and their processes:
$ w
Conclusion
And this concludes our ten Linux commands for getting to know a system, helping you quickly increase your productivity when solving problems. Let me know about your favorite tool in the comment section below.
What is the value of this utility in comparison with the "environment modules" package? Is this a reinvention of the wheel? It allows a per-directory .envrc file which contains environment variables specific to that directory; it simply loads them, and we can do this in bash without installing a new utility of unclear value. Did the author know about the existence of "environment modules" when he wrote it?
direnv is a nifty open-source extension for your shell on a UNIX operating system such as Linux and macOS. It is compiled into a single static executable and supports
shells such as bash, zsh, tcsh, and fish.
The main purpose of direnv is to allow for project-specific environment variables without cluttering ~/.profile or
related shell startup files. It implements a new way to load and unload environment variables depending on the current directory.
It is used to load 12factor apps (a methodology for
building software-as-a-service apps) environment variables, create per-project isolated development environments, and also load
secrets for deployment. Additionally, it can be used to build multi-version installation and management solutions similar to
rbenv, pyenv, and phpenv.
So How Does direnv Work?
Before the shell loads a command prompt, direnv checks for the existence of a .envrc file in the current directory (which you can
display using the pwd command) and its parent directories. The checking process is swift and can't be noticed on each prompt.
Once it finds the .envrc file with the appropriate permissions, it loads it into a bash sub-shell, captures all exported variables, and makes them
available to the current shell.
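Note that direnv has to be hooked into your shell for this check to run at every prompt. For bash, this is typically done by adding the following line at the end of ~/.bashrc (other shells have equivalent hook commands; see the direnv documentation):
eval "$(direnv hook bash)"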
... ... ...
How to Use direnv in Linux Shell
To demonstrate how direnv works, we will create a new directory called tecmint_projects and move into it.
$ mkdir ~/tecmint_projects
$ cd tecmint_projects/
Next, let's create a new variable called TEST_VARIABLE on the command line; when it is echoed, the value should be empty:
$ echo $TEST_VARIABLE
Now we will create a new .envrc file that contains Bash code that will be loaded by direnv.
We also add the line "export TEST_VARIABLE=tecmint" to it using the echo command and the output redirection character (>):
$ echo export TEST_VARIABLE=tecmint > .envrc
By default, the security mechanism blocks the loading of the .envrc file.
Since we know it is a safe file, we need to approve its content by running the following command:
$ direnv allow .
Now that the content of the .envrc file has been allowed to load, let's check the value of TEST_VARIABLE that we set before:
$ echo $TEST_VARIABLE
When we exit the tecmint_projects directory, direnv will be unloaded, and if we check the value of TEST_VARIABLE once
more, it should be empty:
$ cd ..
$ echo $TEST_VARIABLE
Demonstration of How direnv Works in Linux
Every time you move into the tecmint_projects directory, the .envrc file will be loaded as shown in the following screenshot:
$ cd tecmint_projects/
Loading .envrc File in a Directory
To revoke the authorization of a given .envrc file, use the deny command.
$ direnv deny . #in current directory
OR
$ direnv deny /path/to/.envrc
For more information and usage instructions, see the direnv man page:
$ man direnv
Additionally, direnv also comes with a stdlib (direnv-stdlib) of several functions that allow you to easily add new directories to your
PATH and do so much more.
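For example, a minimal hypothetical .envrc for a project that keeps its own scripts in a bin/ subdirectory might combine an exported variable with the stdlib's PATH_add function (the directory name and variable here are just assumptions for illustration):
# .envrc
export TEST_VARIABLE=tecmint
# prepend ./bin to PATH while inside this project (direnv-stdlib function)
PATH_add bin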
The /proc files I find most valuable, especially for inherited system
discovery, are:
cmdline
cpuinfo
meminfo
version
And the most valuable of those are cpuinfo and meminfo .
Again, I'm not stating that other files don't have value, but these are the ones I've found
that have the most value to me. For example, the /proc/uptime file gives you the
system's uptime in seconds. For me, that's not particularly valuable. However, if I want that
information, I use the uptime command that also gives me a more readable version
of /proc/loadavg as well.
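For comparison, here is what the two look like side by side (the numbers below are made up for illustration):
$ cat /proc/loadavg
0.00 0.01 0.05 1/221 12345
$ uptime
 16:31:10 up 2 days,  4:12,  1 user,  load average: 0.00, 0.01, 0.05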
/proc/cmdline
The value of this information is in how the kernel was booted because any switches or
special parameters will be listed here, too. And like all information under /proc
, it can be found elsewhere and usually with better formatting, but /proc files
are very handy when you can't remember the command or don't want to grep for
something.
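For example, a typical kernel command line might look something like this (the image name and options are illustrative, not taken from the article):
$ cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-1062.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto rhgb quiet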
/proc/cpuinfo
The /proc/cpuinfo file is the first file I check when connecting to a new
system. I want to know the CPU make-up of a system and this file tells me everything I need to
know.
This is a virtual machine and only has one vCPU. If your system contains more than one CPU,
the CPU numbering begins at 0 for the first CPU.
/proc/meminfo
The /proc/meminfo file is the second file I check on a new system. It gives me
a general and a specific look at a system's memory allocation and usage.
I think most sysadmins either use the free or the top command to
pull some of the data contained here. The /proc/meminfo file gives me a quick
memory overview that I like and can redirect to another file as a
snapshot.
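Taking such a snapshot is just a redirection, for example (the file name is only a suggestion):
$ cat /proc/meminfo > meminfo-$(date +%F).txt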
/proc/version
The /proc/version file provides more information than the related
uname -a command does. Here are the two compared:
$ cat /proc/version
Linux version 3.10.0-1062.el7.x86_64 ([email protected]) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC) ) #1 SMP Wed Aug 7 18:08:02 UTC 2019
$ uname -a
Linux centos7 3.10.0-1062.el7.x86_64 #1 SMP Wed Aug 7 18:08:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Usually, the uname -a command is sufficient to give you kernel version info but
for those of you who are developers or who are ultra-concerned with details, the
/proc/version file is there for you.
Wrapping up
The /proc filesystem has a ton of valuable information available to system
administrators who want a convenient, non-command way of getting at raw system info. As I
stated earlier, there are other ways to display the information in /proc .
Additionally, some of the /proc info isn't what you'd want to use for system
assessment. For example, use commands such as vmstat 5 5 or iostat 5
5 to get a better picture of system performance rather than reading one of the available
/proc files.
Sysadmin tales: How to keep calm and not panic when things break
When an incident occurs, resist the urge to freak out. Instead, use these tips to help you keep your cool and find
a solution.
I was working on several projects
simultaneously for a small company that had been carved out of a larger one that had gone out of business. The
smaller company had inherited some of the bigger company's infrastructure, and all the headaches along with it.
That day, I had some additional consultants working with me on a project to migrate email service from a large
proprietary onsite cluster to a cloud provider, while at the same time, I was working on reconfiguring a massive
storage array.
At some point, I clicked the wrong button.
All of a sudden, I started getting calls.
The CIO and the consultants were standing in front of my desk. The email servers were completely offline -- they
responded, but could not access the backing storage. I didn't know it yet, but I had deleted the storage pool for
the active email servers.
My vision blurred into a tunnel, and my
stomach fell into a bottomless pit. I struggled to breathe. I did my best to maintain a poker face as the
executives and consultants watched impatiently. I scanned logs and messages looking for clues. I ran tests on all
the components to find the source of the issue and came up with nothing. The data seemed to be gone, and panic was
setting in.
I pushed back from the desk and excused
myself to use the restroom. Closing and latching the door behind me, I contemplated my fate for a moment, then
splashed cold water on my face and took a deep breath. Then it dawned on me: earlier, I had set up an active
mirror of that storage pool. The data was all there; I just needed to reconnect it.
I returned to my desk and couldn't help a
bit of a smirk. A couple of commands, a couple of clicks, and a sip of coffee. About five minutes of testing, and
I could say, "Sorry, guys. Should be good now." The whole thing had happened in about 30 minutes.
We've all been there
Everyone makes mistakes, even the most
senior and venerable engineers and systems administrators. We're all human. It just so happens that, as a
sysadmin, a small mistake in a moment can cause very visible problems, and, PANIC. This is normal, though. What
separates the hero from the unemployed in that moment can be just a few simple things.
When an incident occurs, focusing on who's
at fault can be tempting; blame is something we know how to do and can do something about, and it can even offer
some relief if we can tell ourselves it's not our fault. But in fact, blame accomplishes nothing and can be
counterproductive in a moment of crisis -- it can distract us from finding a solution to the problem, and create even
more stress.
Backups, backups, backups
This is just one of the times when having
a backup saved the day for me, and for a client. Every sysadmin I've ever worked with will tell you the same
thing -- always have a backup. Do regular backups. Make backups of configurations you are working on. Make a habit of
creating a backup as the first step in any project. There are some great articles here on Enable Sysadmin about
the various things you can do to protect yourself.
Another good practice is to never work on
production systems until you have tested the change. This may not always be possible, but if it is, the extra
effort and time will be well worth it for the rare occasions when you have an unexpected result, so you can avoid
the panic of wondering where you might have saved your most recent resume. Having a plan and being prepared can go
a long way to avoiding those very stressful situations.
Breathe in, breathe out
The panic response in humans is related to
the "fight or flight" reflex, which served our ancestors so well. It's a really useful resource for avoiding saber
tooth tigers (and angry CFOs), but not so much for understanding and solving complex technical problems.
Understanding that it's normal but not really helpful, we can recognize it and find a way to overcome it in the
moment.
The simplest way we can tame the impulse
to blackout and flee is to take a deep breath (or several). Studies have shown that simple breathing exercises and
meditation can improve our general outlook and ability to focus on a specific task. There is also evidence that
temperature changes can make a difference; something as simple as a splash of water on the face or an ice-cold
beverage can calm a panic. These things work for me.
Walk the path of troubleshooting, one step at a time
Once we have convinced ourselves that the
world is not going to end immediately, we can focus on solving the problem. Take the situation one element, one
step at a time to find what went wrong, then take that and apply the solution(s) systematically. Again, it's
important to focus on the problem and solution in front of you rather than worrying about things you can't do
anything about right now or what might happen later. Remember, blame is not helpful, and that includes blaming
yourself.
Most often, when I focus on the problem, I
find that I forget to panic, and I can do even better work on the solution. Many times, I have found solutions I
wouldn't have seen or thought of otherwise in this state.
Take five
Another thing that's easy to forget is
that, when you've been working on a problem, it's important to give yourself a break. Drink some water. Take a
short walk. Rest your brain for a couple of minutes. Hunger, thirst, and fatigue can lead to less clear thinking
and, you guessed it, panic.
Time to face the music
My last piece of advice -- though certainly
not the least important -- is, if you are responsible for an incident, be honest about what happened. This will
benefit you for both the short and long term.
During the early years of the space
program, the directors and engineers at NASA established a routine of getting together and going over what went
wrong and what and how to improve for the next time. The same thing happens in the military, emergency management,
and healthcare fields. It's also considered good agile/DevOps practice. Some of the smartest, highest-strung
engineers, administrators, and managers I've known and worked with -- people with millions of dollars and thousands
of lives in their area of responsibility -- have insisted on the importance of learning lessons from mistakes and
incidents. It's a mark of a true professional to own up to mistakes and work to improve.
It's hard to lose face, but not only will
your colleagues appreciate you taking responsibility and working to improve the team, but I promise you will rest
better and be able to manage the next problem better if you look at these situations as learning opportunities.
Accidents and mistakes can't ever be
avoided entirely, but hopefully, you will find some of this advice useful the next time you face an unexpected
challenge.
I set up a backup approach that software vendors refer to as instant restore, shadow
restore, preemptive restore, or similar term. We ran incremental backup jobs every hour and
restored the backups in the background to a new virtual machine. Each full hour, we had a
system ready that was four hours back in time and just needed to be finished. So if I choose to
restore the incremental from one hour ago, it would take less time than a complete system
restore because only the small increments had to be restored to the almost-ready virtual
machine.
And the effort paid off
One day, I was on vacation, having a barbecue and some beer, when I got a call from my
colleague telling me that the terminal server with the ERP application was broken due to a
failed update and the guy who ran the update forgot to take a snapshot first.
The only thing I needed to tell my colleague was to shut down the broken machine, find the
UI of our backup/restore system, and then identify the restore job. Finally, I told him how to
choose the timestamp from the last four hours when the restore should finish. The restore
finished 30 minutes later, and the system was ready to be used again. We were back in action
after a total of 30 minutes, and only the work from the last two hours or so was lost! Awesome!
Now, back to vacation.
In the first article
in this series, you created your first, very small, one-line Bash script and explored the
reasons for creating shell scripts. In the second article , you
began creating a fairly simple template that can be a starting point for other Bash programs
and began testing it. In the third article , you
created and used a simple Help function and learned about using functions and how to handle
command-line options such as -h .
This fourth and final article in the series gets into variables and initializing them as
well as how to do a bit of sanity testing to help ensure the program runs under the proper
conditions. Remember, the objective of this series is to build working code that will be used
for a template for future Bash programming projects. The idea is to make getting started on new
programming projects easy by having common elements already available in the
template.
Variables
The Bash shell, like all programming languages, can deal with variables. A variable is a
symbolic name that refers to a specific location in memory that contains a value of some sort.
The value of a variable is changeable, i.e., it is variable. If you are not familiar with using
variables, read my article How to program with
Bash: Syntax and tools before you go further.
Done? Great! Let's now look at some good practices when using variables.
I always set initial values for every variable used in my scripts. You can find this in
your template script immediately after the procedures as the first part of the main program
body, before it processes the options. Initializing each variable with an appropriate value can
prevent errors that might occur with uninitialized variables in comparison or math operations.
Placing this list of variables in one place allows you to see all of the variables that are
supposed to be in the script and their initial values.
Your little script has only a single variable, $option , so far. Set it by inserting the
following lines as
shown:
# Main program #
# Initialize variables
option=""
# Process the input options. Add options as needed. #
Test this to ensure that everything works as it should and that nothing has broken as the
result of this change.
Constants
Constants are variables, too -- at least they should be. Use variables wherever possible in
command-line interface (CLI) programs instead of hard-coded values. Even if you think you will
use a particular value (such as a directory name, a file name, or a text string) just once,
create a variable and use it where you would have placed the hard-coded name.
For example, the message printed as part of the main body of the program is a string
literal, echo "Hello world!" . Change that to a variable. First, add the following statement to
the variable initialization section:
Msg="Hello world!"
And now change the last line of the program from:
echo "Hello world!"
to:
echo "$Msg"
Test the results.
Sanity checks
Sanity checks are simply tests for conditions that need to be true in order for the program
to work correctly, such as: the program must be run as the root user, or it must run on a
particular distribution and release of that distro. Add a check for root as the running
user in your simple program template.
Testing that the root user is running the program is easy because a program runs as the user
that launches it.
The id command can be used to determine the numeric user ID (UID) the program is running
under. It provides several bits of information when it is used without any options:
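For a regular, non-root user the output looks something like this (the user name and IDs here are made up):
$ id
uid=1000(student) gid=1000(student) groups=1000(student),10(wheel)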
Add the following function to the program. I added it after the Help procedure, but you can
place it anywhere in the procedures section. The logic is that if the UID is not zero, which is
always the root user's UID, the program
exits:
################################################################################
# Check for root. #
################################################################################
CheckRoot ()
{
   if [ `id -u` != 0 ]
   then
      echo "ERROR: You must be root user to run this program"
      exit
   fi
}
Now, add a call to the CheckRoot procedure just before the variable's initialization. Test
this, first running the program as the student user:
[student@testvm1 ~]$ ./hello
ERROR: You must be root user to run this program
[student@testvm1 ~]$
You may not always need this particular sanity test, so comment out the call to CheckRoot
but leave all the code in place in the template. This way, all you need to do to use that code
in a future program is to uncomment the call.
The code
After making the changes outlined above, your code should look like
this:
#!/usr/bin/bash
################################################################################
# scriptTemplate #
# #
# Use this template as the beginning of a new program. Place a short #
# description of the script here. #
# #
# Change History #
# 11/11/2019 David Both Original code. This is a template for creating #
# new Bash shell scripts. #
# Add new history entries as needed. #
# #
# #
################################################################################
################################################################################
################################################################################
# #
# Copyright (C) 2007, 2019 David Both #
# [email protected] #
# #
# This program is free software; you can redistribute it and/or modify #
# it under the terms of the GNU General Public License as published by #
# the Free Software Foundation; either version 2 of the License, or #
# (at your option) any later version. #
# #
# This program is distributed in the hope that it will be useful, #
# but WITHOUT ANY WARRANTY; without even the implied warranty of #
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the #
# GNU General Public License for more details. #
# #
# You should have received a copy of the GNU General Public License #
# along with this program; if not, write to the Free Software #
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA #
# #
################################################################################
################################################################################
################################################################################
################################################################################
# Help #
################################################################################
Help ()
{
# Display Help
echo "Add description of the script functions here."
echo
echo "Syntax: scriptTemplate [-g|h|v|V]"
echo "options:"
echo "g Print the GPL license notification."
echo "h Print this Help."
echo "v Verbose mode."
echo "V Print software version and exit."
echo
}
################################################################################
# Check for root. #
################################################################################
CheckRoot ()
{
   # If we are not running as root we exit the program
   if [ `id -u` != 0 ]
   then
      echo "ERROR: You must be root user to run this program"
      exit
   fi
}
################################################################################
################################################################################
# Main program #
################################################################################
################################################################################
################################################################################
# Sanity checks #
################################################################################
# Are we running as root?
# CheckRoot
# Initialize variables
option=""
Msg="Hello world!"
################################################################################
# Process the input options. Add options as needed. #
################################################################################
# Get the options
while getopts ":h" option; do
   case $option in
      h) # display Help
         Help
         exit;;
      \?) # incorrect option
         echo "Error: Invalid option"
         exit;;
   esac
done
echo " $Msg " A final exercise
You probably noticed that the Help function in your code refers to features that are not in
the code. As a final exercise, figure out how to add those functions to the code template you
created.
Summary
In this article, you created a couple of functions to perform a sanity test for whether your
program is running as root. Your program is getting a little more complex, so testing is
becoming more important and requires more test paths to be complete.
This series looked at a very minimal Bash program and how to build a script up a bit at a
time. The result is a simple template that can be the starting point for other, more useful
Bash scripts and that contains useful elements that make it easy to start new scripts.
By now, you get the idea: Compiled programs are necessary and fill a very important need.
But for sysadmins, there is always a better way. Always use shell scripts to meet your job's
automation needs. Shell scripts are open; their content and purpose are knowable. They can be
readily modified to meet different requirements. I have never found anything that I need to do
in my sysadmin role that cannot be accomplished with a shell script.
What you have created so far in this series is just the beginning. As you write more Bash
programs, you will find more bits of code that you use frequently and should be included in
your program template.
In the first article
in this series, you created a very small, one-line Bash script and explored the reasons for
creating shell scripts and why they are the most efficient option for the system administrator,
rather than compiled programs.
In this second article, you will begin creating a Bash script template that can be used as a
starting point for other Bash scripts. The template will ultimately contain a Help facility, a
licensing statement, a number of simple functions, and some logic to deal with those options
and others that might be needed for the scripts that will be based on this template.
Like automation in general, the idea behind creating a template is to be the " lazy sysadmin ." A
template contains the basic components that you want in all of your scripts. It saves time
compared to adding those components to every new script and makes it easy to start a new
script.
Although it can be tempting to just throw a few command-line Bash statements together into a
file and make it executable, that can be counterproductive in the long run. A well-written and
well-commented Bash program with a Help facility and the capability to accept command-line
options provides a good starting point for sysadmins who maintain the program, which includes
the programs that you write and maintain.
The requirements
You should always create a set of requirements for every project you do. This includes
scripts, even if it is a simple list with only two or three items on it. I have been involved
in many projects that either failed completely or failed to meet the customer's needs, usually
due to the lack of a requirements statement or a poorly written one.
The requirements for this Bash template are pretty simple:
Create a template that can be used as the starting point for future Bash programming
projects.
The template should follow standard Bash programming practices.
It must include:
A heading section that can be used to describe the function of the program and a
changelog
A licensing statement
A section for functions
A Help function
A function to test whether the program user is root
A method for evaluating command-line options
The basic structure
A basic Bash script has three sections. Bash has no way to delineate sections, but the
boundaries between the sections are implicit.
All scripts must begin with the shebang ( #! ), and this must be the first line in any
Bash program.
The functions section must begin after the shebang and before the body of the program. As
part of my need to document everything, I place a comment before each function with a short
description of what it is intended to do. I also include comments inside the functions to
elaborate further. Short, simple programs may not need functions.
The main part of the program comes after the function section. This can be a single Bash
statement or thousands of lines of code. One of my programs has a little over 200 lines of
code, not counting comments. That same program has more than 600 comment lines.
That is all there is -- just three sections in the structure of any Bash
program.
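As a minimal sketch (not the full template developed later in this series), the skeleton of those three sections looks like this:
#!/bin/bash
# Functions section
SayHello()
{
   echo "Hello world!"
}
# Main program section
SayHello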
Leading comments
I always add more than this for various reasons. First, I add a couple of sections of
comments immediately after the shebang. These comment sections are optional, but I find them
very helpful.
The first comment section is the program name and description and a change history. I
learned this format while working at IBM, and it provides a method of documenting the long-term
development of the program and any fixes applied to it. This is an important start in
documenting your program.
The second comment section is a copyright and license statement. I use GPLv2, and this seems
to be a standard statement for programs licensed under GPLv2. If you use a different open
source license, that is fine, but I suggest adding an explicit statement to the code to
eliminate any possible confusion about licensing. Scott Peterson's article The source code is the
license helps explain the reasoning behind this.
So now the script looks like this:
#!/bin/bash
################################################################################
# scriptTemplate #
# #
# Use this template as the beginning of a new program. Place a short #
# description of the script here. #
# #
# Change History #
# 11/11/2019 David Both Original code. This is a template for creating #
# new Bash shell scripts. #
# Add new history entries as needed. #
# #
# #
################################################################################
################################################################################
################################################################################
# #
# Copyright (C) 2007, 2019 David Both #
# [email protected] #
# #
# This program is free software; you can redistribute it and/or modify #
# it under the terms of the GNU General Public License as published by #
# the Free Software Foundation; either version 2 of the License, or #
# (at your option) any later version. #
# #
# This program is distributed in the hope that it will be useful, #
# but WITHOUT ANY WARRANTY; without even the implied warranty of #
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the #
# GNU General Public License for more details. #
# #
# You should have received a copy of the GNU General Public License #
# along with this program; if not, write to the Free Software #
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA #
# #
################################################################################
################################################################################
################################################################################
echo "hello world!"
Run the revised program to verify that it still works as expected.
About testing
Now is a good time to talk about testing.
" There is always one more bug."
-- Lubarsky's Law of Cybernetic Entomology
Lubarsky -- whoever that might be -- is correct. You can never find all the bugs in your
code. For every bug I find, there always seems to be another that crops up, usually at a very
inopportune time.
Testing is not just about programs. It is also about verification that problems -- whether
caused by hardware, software, or the seemingly endless ways users can find to break things --
that are supposed to be resolved actually are. Just as important, testing is also about
ensuring that the code is easy to use and the interface makes sense to the user.
Following a well-defined process when writing and testing shell scripts can contribute to
consistent and high-quality results. My process is simple:
Create a simple test plan.
Start testing right at the beginning of development.
Perform a final test when the code is complete.
Move to production and test more.
The test plan
There are lots of different formats for test plans. I have worked with the full range --
from having it all in my head; to a few notes jotted down on a sheet of paper; and all the way
to a complex set of forms that require a full description of each test, which functional code
it would test, what the test would accomplish, and what the inputs and results should be.
Speaking as a sysadmin who has been (but is not now) a tester, I try to take the middle
ground. Having at least a short written test plan will ensure consistency from one test run to
the next. How much detail you need depends upon how formal your development and test functions
are.
The sample test plan documents I found using Google were complex and intended for large
organizations with very formal development and test processes. Although those test plans would
be good for people with "test" in their job title, they do not apply well to sysadmins' more
chaotic and time-dependent working conditions. As in most other aspects of the job, sysadmins
need to be creative. So here is a short list of things to consider including in your test plan.
Modify it to suit your needs:
The name and a short description of the software being tested
A description of the software features to be tested
The starting conditions for each test
The functions to follow for each test
A description of the desired outcome for each test
Specific tests designed to test for negative outcomes
Tests for how the program handles unexpected inputs
A clear description of what constitutes pass or fail for each test
Fuzzy testing, which is described below
This list should give you some ideas for creating your test plans. Most sysadmins should
keep it simple and fairly informal.
Test early -- test often
I always start testing my shell scripts as soon as I complete the first portion that is
executable. This is true whether I am writing a short command-line program or a script that is
an executable file.
I usually start creating new programs with the shell script template. I write the code for
the Help function and test it. This is usually a trivial part of the process, but it helps me
get started and ensures that things in the template are working properly at the outset. At this
point, it is easy to fix problems with the template portions of the script or to modify it to
meet needs that the standard template does not.
Once the template and Help function are working, I move on to creating the body of the
program by adding comments to document the programming steps required to meet the program
specifications. Now I start adding code to meet the requirements stated in each comment. This
code will probably require adding variables that are initialized in that section of the
template -- which is now becoming a shell script.
This is where testing is more than just entering data and verifying the results. It takes a
bit of extra work. Sometimes I add a command that simply prints the intermediate result of the
code I just wrote and verify that. For more complex scripts, I add a -t option for "test mode."
In this case, the internal test code executes only when the -t option is entered on the command
line.
Final testing
After the code is complete, I go back to do a complete test of all the features and
functions using known inputs to produce specific outputs. I also test some random inputs to see
if the program can handle unexpected input.
Final testing is intended to verify that the program is functioning essentially as intended.
A large part of the final test is to ensure that functions that worked earlier in the
development cycle have not been broken by code that was added or changed later in the
cycle.
If you have been testing the script as you add new code to it, you may think there should
not be any surprises during the final test. Wrong! There are always surprises during final
testing. Always. Expect those surprises, and be ready to spend time fixing them. If there were
never any bugs discovered during final testing, there would be no point in doing a final test,
would there?
Testing in production
Huh -- what?
"Not until a program has been in production for at least six months will the most harmful
error be discovered."
-- Troutman's Programming Postulates
Yes, testing in production is now considered normal and desirable. Having been a tester
myself, this seems reasonable. "But wait! That's dangerous," you say. My experience is that it
is no more dangerous than extensive and rigorous testing in a dedicated test environment. In
some cases, there is no choice because there is no test environment -- only production.
Sysadmins are no strangers to the need to test new or revised scripts in production. Anytime
a script is moved into production, that becomes the ultimate test. The production environment
constitutes the most critical part of that test. Nothing that testers can dream up in a test
environment can fully replicate the true production environment.
The allegedly new practice of testing in production is just the recognition of what
sysadmins have known all along. The best test is production -- so long as it is not the only
test.
Fuzzy testing
This is another of those buzzwords that initially caused me to roll my eyes. Its essential
meaning is simple: have someone bang on the keys until something happens, and see how well the
program handles it. But there really is more to it than that.
Fuzzy testing is a bit like the time my son broke the code for a game in less than a minute
with random input. That pretty much ended my attempts to write games for him.
Most test plans utilize very specific input that generates a specific result or output.
Regardless of whether the test defines a positive or negative outcome as a success, it is still
controlled, and the inputs and results are specified and expected, such as a specific error
message for a specific failure mode.
Fuzzy testing is about dealing with randomness in all aspects of the test, such as starting
conditions, very random and unexpected input, random combinations of options selected, low
memory, high levels of CPU contending with other programs, multiple instances of the program
under test, and any other random conditions that you can think of to apply to the tests.
I try to do some fuzzy testing from the beginning. If the Bash script cannot deal with
significant randomness in its very early stages, then it is unlikely to get better as you add
more code. This is a good time to catch these problems and fix them while the code is
relatively simple. A bit of fuzzy testing at each stage is also useful in locating problems
before they get masked by even more code.
After the code is completed, I like to do some more extensive fuzzy testing. Always do some
fuzzy testing. I have certainly been surprised by some of the results. It is easy to test for
the expected things, but users do not usually do the expected things with a
script.
Previews of coming attractions
This article accomplished a little in the way of creating a template, but it mostly talked
about testing. This is because testing is a critical part of creating any kind of program. In
the next article in this series, you will add a basic Help function along with some code to
detect and act on options, such as -h , to your Bash script template.
Software developers writing applications in languages such as Java, Ruby, and Python have
sophisticated libraries to help them maintain their software's integrity over time. They create
tests that run applications through a series of executions in structured environments to ensure
all of their software's aspects work as expected.
These tests are even more powerful when they're automated in a continuous integration (CI)
system, where every push to the source repository causes the tests to run, and developers are
immediately notified when tests fail. This fast feedback increases developers' confidence in
the functional integrity of their applications.
The Bash Automated Testing System ( BATS ) enables developers writing Bash scripts and
libraries to apply the same practices used by Java, Ruby, Python, and other developers to their
Bash code.
Installing BATS
The BATS GitHub page includes installation instructions. There are two BATS helper libraries
that provide more powerful assertions or allow overrides to the Test Anything Protocol (
TAP ) output format used by BATS. These
can be installed in a standard location and sourced by all scripts. It may be more convenient
to include a complete version of BATS and its helper libraries in the Git repository for each
set of scripts or libraries being tested. This can be accomplished using the git submodule system.
The following commands will install BATS and its helper libraries into the test directory in
a Git repository.
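A hedged sketch of that submodule approach, using the current bats-core repositories (the URLs, layout, and commit message are my assumptions, not necessarily what the author used), might look like this:
git submodule add https://github.com/bats-core/bats-core test/libs/bats
git submodule add https://github.com/bats-core/bats-support test/libs/bats-support
git submodule add https://github.com/bats-core/bats-assert test/libs/bats-assert
git commit -m 'Add BATS and helper libraries as submodules'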
To clone a Git repository and install its submodules at the same time, use the
--recurse-submodules flag to git clone .
Each BATS test script must be executed by the bats executable. If you installed BATS into
your source code repo's test/libs directory, you can invoke the test with:
./test/libs/bats/bin/bats <path to test script>
Alternatively, add the following to the beginning of each of your BATS test
scripts:
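Based on the description that follows, the lines in question would look roughly like this (treat this as a reconstruction, not the author's exact code): a shebang pointing at the bundled bats executable, plus the helper library loads.
#!/usr/bin/env ./test/libs/bats/bin/bats
load 'libs/bats-support/load'
load 'libs/bats-assert/load'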
and chmod +x <path to test script> . This will a) make them executable with the BATS
installed in ./test/libs/bats and b) include these helper libraries. BATS test scripts are
typically stored in the test directory and named for the script being tested, but with the
.bats extension. For example, a BATS script that tests bin/build should be called
test/build.bats .
You can also run an entire set of BATS test files by passing a regular expression to BATS,
e.g., ./test/lib/bats/bin/bats test/*.bats .
Organizing libraries and scripts for BATS
coverage
Bash scripts and libraries must be organized in a way that efficiently exposes their inner
workings to BATS. In general, library functions and shell scripts that run many commands when
they are called or executed are not amenable to efficient BATS testing.
For example, build.sh is a typical script
that many people write. It is essentially a big pile of code. Some might even put this pile of
code in a function in a library. But it's impossible to run a big pile of code in a BATS test
and cover all possible types of failures it can encounter in separate test cases. The only way
to test this pile of code with sufficient coverage is to break it into many small, reusable,
and, most importantly, independently testable functions.
It's straightforward to add more functions to a library. An added benefit is that some of
these functions can become surprisingly useful in their own right. Once you have broken your
library function into lots of smaller functions, you can source the library in your BATS test
and run the functions as you would any other command to test them.
Bash scripts must also be broken down into multiple functions, which the main part of the
script should call when the script is executed. In addition, there is a very useful trick to
make it much easier to test Bash scripts with BATS: Take all the code that is executed in the
main part of the script and move it into a function, called something like run_main . Then, add
the following to the end of the script:
if [[ " ${BASH_SOURCE[0]} " == " ${0} " ]]
then
run_main
fi
This bit of extra code does something special. It makes the script behave differently when
it is executed as a script than when it is brought into the environment with source . This
trick enables the script to be tested the same way a library is tested, by sourcing it and
testing the individual functions. For example, here is build.sh refactored for better
BATS testability .
Writing and running tests
As mentioned above, BATS is a TAP-compliant testing framework with a syntax and output that
will be familiar to those who have used other TAP-compliant testing suites, such as JUnit,
RSpec, or Jest. Its tests are organized into individual test scripts. Test scripts are
organized into one or more descriptive @test blocks that describe the unit of the application
being tested. Each @test block will run a series of commands that prepares the test
environment, runs the command to be tested, and makes assertions about the exit and output of
the tested command. Many assertion functions are imported with the bats , bats-assert , and
bats-support libraries, which are loaded into the environment at the beginning of the BATS test
script. Here is a typical BATS test block:
@ test "requires CI_COMMIT_REF_SLUG environment
variable" {
unset CI_COMMIT_REF_SLUG
assert_empty " ${CI_COMMIT_REF_SLUG} "
run some_command
assert_failure
assert_output --partial "CI_COMMIT_REF_SLUG"
}
If a BATS script includes setup and/or teardown functions, they are automatically executed
by BATS before and after each test block runs. This makes it possible to create environment
variables, test files, and do other things needed by one or all tests, then tear them down
after each test runs. Build.bats is a full
BATS test of our newly formatted build.sh script. (The mock_docker command in this test will be
explained below, in the section on mocking/stubbing.)
When the test script runs, BATS uses exec to run each @test block as a separate subprocess.
This makes it possible to export environment variables and even functions in one @test without
affecting other @test s or polluting your current shell session. The output of a test run is a
standard format that can be understood by humans and parsed or manipulated programmatically by
TAP consumers. Here is an example of the output for the CI_COMMIT_REF_SLUG test block when it
fails:
✗ requires CI_COMMIT_REF_SLUG environment variable
   (from function `assert_output' in file test/libs/bats-assert/src/assert.bash, line 231,
    in test file test/ci_deploy.bats, line 26)
   `assert_output --partial "CI_COMMIT_REF_SLUG"' failed
   -- output does not contain substring --
   substring (1 lines):
     CI_COMMIT_REF_SLUG
   output (3 lines):
     ./bin/deploy.sh: join_string_by: command not found
     oc error
     Could not login
   --
Like any shell script or library, BATS test scripts can include helper libraries to share
common code across tests or enhance their capabilities. These helper libraries, such as
bats-assert and bats-support , can even be tested with BATS.
Libraries can be placed in the same test directory as the BATS scripts or in the test/libs
directory if the number of files in the test directory gets unwieldy. BATS provides the load
function that takes a path to a Bash file relative to the script being tested (e.g., test , in
our case) and sources that file. Files must end with the .bash suffix, but the path to the
file passed to the load function can't include the suffix. build.bats loads the bats-assert and
bats-support libraries, a small helpers.bash library,
and a docker_mock.bash library (described below) with the following code placed at the
beginning of the test script below the interpreter magic line:
load 'libs/bats-support/load'
load 'libs/bats-assert/load'
load 'helpers'
load 'docker_mock'
Stubbing test input and mocking external calls
The majority of Bash scripts and libraries execute functions and/or executables when they
run. Often they are programmed to behave in specific ways based on the exit status or output (
stdout , stderr ) of these functions or executables. To properly test these scripts, it is
often necessary to make fake versions of these commands that are designed to behave in a
specific way during a specific test, a process called "stubbing." It may also be necessary to
spy on the program being tested to ensure it calls a specific command, or it calls a specific
command with specific arguments, a process called "mocking." For more on this, check out this
great discussion of mocking and
stubbing in Ruby RSpec, which applies to any testing system.
The Bash shell provides tricks that can be used in your BATS test scripts to do mocking and
stubbing. All require the use of the Bash export command with the -f flag to export a function
that overrides the original function or executable. This must be done before the tested program
is executed. Here is a simple example that overrides the cat executable:
function cat() {
   echo "THIS WOULD CAT ${*}"
}
export -f cat
This method overrides a function in the same manner. If a test needs to override a function
within the script or library being tested, it is important to source the tested script or
library before the function is stubbed or mocked. Otherwise, the stub/mock will be replaced
with the actual function when the script is sourced. Also, make sure to stub/mock before you
run the command you're testing. Here is an example from build.bats that mocks the raise
function described in build.sh to ensure a specific error message is raised by the login
function:
@ test ".login raises on oc error" {
source ${profile_script}
function raise () { echo " ${1} raised" ; }
export -f raise
run login
assert_failure
assert_output -p "Could not login raised"
}
Normally, it is not necessary to unset a stub/mock function after the test, since export
only affects the current subprocess during the exec of the current @test block. However, it is
possible to mock/stub commands (e.g. cat , sed , etc.) that the BATS assert * functions use
internally. These mock/stub functions must be unset before these assert commands are run, or
they will not work properly. Here is an example from build.bats that mocks sed , runs the
build_deployable function, and unsets sed before running any assertions:
@test ".build_deployable prints information, runs docker build on a modified Dockerfile.production and publish_image when its not a dry_run" {
   local expected_dockerfile='Dockerfile.production'
   local application='application'
   local environment='environment'
   local expected_original_base_image="${application}"
   local expected_candidate_image="${application}-candidate:${environment}"
   local expected_deployable_image="${application}:${environment}"
   source ${profile_script}
   mock_docker build --build-arg OAUTH_CLIENT_ID --build-arg OAUTH_REDIRECT --build-arg DDS_API_BASE_URL -t "${expected_deployable_image}" -
   function publish_image() { echo "publish_image ${*}"; }
   export -f publish_image
   function sed() {
      echo "sed ${*}" >&2
      echo "FROM application-candidate:environment"
   }
   export -f sed
   run build_deployable "${application}" "${environment}"
   assert_success
   unset sed
   assert_output --regexp "sed.*${expected_dockerfile}"
   assert_output -p "Building ${expected_original_base_image} deployable ${expected_deployable_image} FROM ${expected_candidate_image}"
   assert_output -p "FROM ${expected_candidate_image} piped"
   assert_output -p "build --build-arg OAUTH_CLIENT_ID --build-arg OAUTH_REDIRECT --build-arg DDS_API_BASE_URL -t ${expected_deployable_image} -"
   assert_output -p "publish_image ${expected_deployable_image}"
}
Sometimes the same command, e.g. foo, will be invoked multiple times, with different
arguments, in the same function being tested. These situations require the creation of a set of
functions:
mock_foo: takes expected arguments as input, and persists these to a TMP file
foo: the mocked version of the command, which processes each call with the persisted list
of expected arguments. This must be exported with export -f.
cleanup_foo: removes the TMP file, for use in teardown functions. This can check that the @test block was successful before
removing the file.
Since this functionality is often reused in different tests, it makes sense to create a
helper library that can be loaded like other libraries.
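As a rough, hypothetical sketch of that pattern for a command called foo (the function bodies and file layout are assumptions, not the article's actual helper code):
# mock_foo: record one expected invocation in a temp file under ${BATS_TMPDIR}
mock_foo() {
   echo "${*}" >> "${BATS_TMPDIR}/foo_expected_args"
}

# foo: the stub that stands in for the real command during a test;
# it succeeds only if it was called with one of the recorded argument lists
foo() {
   echo "foo called with: ${*}"
   grep -qxF "${*}" "${BATS_TMPDIR}/foo_expected_args"
}
export -f foo

# cleanup_foo: remove the temp file, typically called from teardown()
cleanup_foo() {
   rm -f "${BATS_TMPDIR}/foo_expected_args"
}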
A good example is docker_mock.bash
. It is loaded into build.bats and used in any test block that tests a function that calls the
Docker executable. A typical test block using docker_mock looks like:
@ test ".publish_image
fails if docker push fails" {
setup_publish
local expected_image = "image"
local expected_publishable_image = " ${CI_REGISTRY_IMAGE} / ${expected_image} "
source ${profile_script}
mock_docker tag " ${expected_image} " " ${expected_publishable_image} "
mock_docker push " ${expected_publishable_image} " and_fail
run publish_image " ${expected_image} "
assert_failure
assert_output -p "tagging ${expected_image} as ${expected_publishable_image} "
assert_output -p "tag ${expected_image} ${expected_publishable_image} "
assert_output -p "pushing image to gitlab registry"
assert_output -p "push ${expected_publishable_image} "
}
This test sets up an expectation that Docker will be called twice with different arguments.
With the second call to Docker failing, it runs the tested command, then tests the exit status
and expected calls to Docker.
One aspect of BATS introduced by mock_docker.bash is the ${BATS_TMPDIR} environment
variable, which BATS sets at the beginning to allow tests and helpers to create and destroy TMP
files in a standard location. The mock_docker.bash library will not delete its persisted mocks
file if a test fails, but it will print where it is located so it can be viewed and deleted.
You may need to periodically clean old mock files out of this directory.
One note of caution regarding mocking/stubbing: The build.bats test consciously violates a
dictum of testing that states: Don't
mock what you don't own! This dictum demands that calls to commands that the test's
developer didn't write, like docker , cat , sed , etc., should be wrapped in their own
libraries, which should be mocked in tests of scripts that use them. The wrapper libraries
should then be tested without mocking the external commands.
This is good advice and ignoring it comes with a cost. If the Docker CLI API changes, the
test scripts will not detect this change, resulting in a false positive that won't manifest
until the tested build.sh script runs in a production setting with the new version of Docker.
Test developers must decide how stringently they want to adhere to this standard, but they
should understand the tradeoffs involved with their decision.
Conclusion
Introducing a testing regime to any software development project creates a tradeoff between
a) the increase in time and organization required to develop and maintain code and tests and b)
the increased confidence developers have in the integrity of the application over its lifetime.
Testing regimes may not be appropriate for all scripts and libraries.
In general, scripts and libraries that meet one or more of the following should be tested
with BATS:
They are worthy of being stored in source control
They are used in critical processes and relied upon to run consistently for a long period
of time
They need to be modified periodically to add/remove/modify their function
They are used by others
Once the decision is made to apply a testing discipline to one or more Bash scripts or
libraries, BATS provides the comprehensive testing features that are available in other
software development environments.
Acknowledgment: I am indebted to Darrin Mann for introducing me to BATS testing.
6 handy Bash scripts for Git
These six Bash scripts will make your life easier when you're working with Git repositories.
15 Jan 2020 Bob Peterson (Red Hat)
I wrote a bunch of Bash scripts that make my life easier when I'm working with Git
repositories. Many of my colleagues say there's no need; that everything I need to do can be
done with Git commands. While that may be true, I find the scripts infinitely more convenient
than trying to figure out the appropriate Git command to do what I want.
1. gitlog
gitlog prints an abbreviated list of current patches against the master version. It prints
them from oldest to newest and shows the author and description, with H for HEAD , ^ for HEAD^
, 2 for HEAD~2, and so forth. For example:
$ gitlog
-----------------------[ recovery25 ]-----------------------
(snip)
11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in
dq
H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time
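The scripts themselves are not included in the article, but a very rough approximation of what gitlog does could look something like this (a sketch built on plain git log, not Bob Peterson's actual script):
#!/bin/bash
# Print local patches against origin/master, oldest first, with hash, author, and subject
branch=$(git rev-parse --abbrev-ref HEAD)
echo "-----------------------[ ${branch} ]-----------------------"
git log --reverse --format='%h %an %s' origin/master..HEAD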
If I want to see what patches are on a different branch, I can specify an alternate
branch:
Again, it assumes the current branch, but I can specify a different branch if I
want.
3. gitlog.id2
gitlog.id2 is the same as gitlog.id but without the branch line at the top. This is handy
for cherry-picking all patches from one branch to the current branch:
$ # create a new
branch
$ git branch --track origin/master
$ # check out the new branch I just created
$ git checkout recovery26
$ # cherry-pick all patches from the old branch to the new one
$ for i in `gitlog.id2 recovery25` ; do git cherry-pick $i ;done
4. gitlog.grep
gitlog.grep greps for a string within that collection of patches. For example, if I find a
bug and want to fix the patch that has a reference to function inode_go_sync , I simply
do:
$ gitlog.grep inode_go_sync
-----------------------[ recovery25 - 50 patches ]-----------------------
(snip)
11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
152:-static void inode_go_sync(struct gfs2_glock *gl)
153:+static int inode_go_sync(struct gfs2_glock *gl)
163:@@ -296,6 +302,7 @@ static void inode_go_sync(struct gfs2_glock *gl)
8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in
dq
H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time
So, now I know that patch HEAD~9 is the one that needs fixing. I use git rebase -i HEAD~10
to edit patch 9, git commit -a --amend , then git rebase --continue to make the necessary
adjustments.
5. gitbranchcmp3
gitbranchcmp3 lets me compare my current branch to another branch, so I can compare older
versions of patches to my newer versions and quickly see what's changed and what hasn't. It
generates a compare script (that uses the KDE tool Kompare , which works on GNOME3,
as well) to compare the patches that aren't quite the same. If there are no differences other
than line numbers, it prints [SAME] . If there are only comment differences, it prints [same]
(in lower case). For example:
$ gitbranchcmp3 recovery24
Branch recovery24 has 47 patches
Branch recovery25 has 50 patches
(snip)
38 87eb6901607a 340d27a33895 [same] gfs2: drain the ail2 list after io errors
39 90fefb577a26 9b3c4e6efb10 [same] gfs2: clean up iopen glock mess in gfs2_create_inode
40 ba3ae06b8b0e d2e8c22be39b [same] gfs2: Do proper error checking for go_sync family of
glops
41 2ab662294329 9563e31f8bfd [SAME] gfs2: use page_offset in gfs2_page_mkwrite
42 0adc6d817b7a ebac7a38036c [SAME] gfs2: don't use buffer_heads in
gfs2_allocate_page_backing
43 55ef1f8d0be8 f703a3c27874 [SAME] gfs2: Improve mmap write vs. punch_hole consistency
44 de57c2f72570 a3e86d2ef30e [SAME] gfs2: Multi-block allocations in gfs2_page_mkwrite
45 7c5305fbd68a da3c604755b0 [SAME] gfs2: Fix end-of-file handling in gfs2_page_mkwrite
46 162524005151 4525c2f5b46f [SAME] Rafael Aquini's slab instrumentation
47 a06a5b7dea02 [ ] GFS2: Add go_get_holdtime to gl_ops
48 8ba93c796d5c [ ] gfs2: introduce new function remaining_hold_time and use it in dq
49 e8b5ff851bb9 [ ] gfs2: Allow rgrps to have a minimum hold time
Missing from recovery25:
Compare script generated at: /tmp/compare_mismatches.sh
6. gitlog.find
Finally, I have gitlog.find , a script to help me identify where the upstream versions of my
patches are and each patch's current status. It does this by matching the patch description. It
also generates a compare script (again, using Kompare) to compare the current patch to the
upstream counterpart:
$ gitlog.find
-----------------------[ recovery25 - 50 patches ]-----------------------
(snip)
11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
lo 5bcb9be74b2a Bob Peterson gfs2: drain the ail2 list after io errors
10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
fn 2c47c1be51fb Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
lo feb7ea639472 Bob Peterson gfs2: Do proper error checking for go_sync family of glops
8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
ms f3915f83e84c Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
ms 35af80aef99b Christoph Hellwig gfs2: don't use buffer_heads in
gfs2_allocate_page_backing
6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
fn 39c3a948ecf6 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
fn f53056c43063 Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
fn 184b4e60853d Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
Not found upstream
2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
Not found upstream
^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in
dq
Not found upstream
H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time
Not found upstream
Compare script generated: /tmp/compare_upstream.sh
The patches are shown on two lines, the first of which is your current patch, followed by
the corresponding upstream patch, and a 2-character abbreviation to indicate its upstream
status:
lo means the patch is in the local upstream Git repo only (i.e., not pushed upstream
yet).
ms means the patch is in Linus Torvalds' master branch.
fn means the patch is pushed to my "for-next" development branch, intended for the next
upstream merge window.
Some of my scripts make assumptions based on how I normally work with Git. For example, when
searching for upstream patches, they use the location of my clone of the well-known upstream Git
tree. So, you will need to adjust or improve them to suit your conditions. The gitlog.find script is designed to locate
GFS2 and DLM patches only, so unless
you're a GFS2 developer, you will want to customize it to the components that interest
you.
Source code
Here is the source for these scripts.
1. gitlog
#!/bin/bash
branch=$1
if test "x$branch" = x; then
    branch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi
#echo "old: " $oldsha1s
oldcount=${#oldsha1s[@]}
echo "Branch $oldbranch has $oldcount patches"
oldcount=$(echo $oldcount - 1 | bc)
#for o in `seq 0 ${#oldsha1s[@]}`; do
#    echo -n ${oldsha1s[$o]} " "
#    desc=`git show $i | head -5 | tail -1 | cut -b5-`
#done
#echo "new: " $newsha1s
newcount=${#newsha1s[@]}
echo "Branch $newbranch has $newcount patches"
newcount=$(echo $newcount - 1 | bc)
#for o in `seq 0 ${#newsha1s[@]}`; do
#    echo -n ${newsha1s[$o]} " "
#    desc=`git show $i | head -5 | tail -1 | cut -b5-`
#done
echo
for new in `seq 0 $newcount`; do
    newsha=${newsha1s[$new]}
    newdesc=`git show $newsha | head -5 | tail -1 | cut -b5-`
    oldsha=" "
    same="[ ]"
    for old in `seq 0 $oldcount`; do
        if test "${oldsha1s[$old]}" = "match"; then
            continue;
        fi
        olddesc=`git show ${oldsha1s[$old]} | head -5 | tail -1 | cut -b5-`
        if test "$olddesc" = "$newdesc"; then
            oldsha=${oldsha1s[$old]}
            #echo $oldsha
            git show $oldsha | tail -n +2 | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk1
            git show $newsha | tail -n +2 | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk2
            diff /tmp/gronk1 /tmp/gronk2 &> /dev/null
            if [ $? -eq 0 ]; then
                # No differences
                same="[SAME]"
                oldsha1s[$old]="match"
                break
            fi
            git show $oldsha | sed -n '/diff/,$p' | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk1
            git show $newsha | sed -n '/diff/,$p' | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk2
            diff /tmp/gronk1 /tmp/gronk2 &> /dev/null
            if [ $? -eq 0 ]; then
                # Differences in comments only
                same="[same]"
                oldsha1s[$old]="match"
                break
            fi
            oldsha1s[$old]="match"
            echo "compare_them $oldsha $newsha" >> $script
        fi
    done
    echo "$new $oldsha $newsha $same $newdesc"
done
echo
echo "Missing from $newbranch:"
the_missing=""
# Now run through the olds we haven't matched up
for old in `seq 0 $oldcount`; do
    if test ${oldsha1s[$old]} != "match"; then
        olddesc=`git show ${oldsha1s[$old]} | head -5 | tail -1 | cut -b5-`
        echo "${oldsha1s[$old]} $olddesc"
        the_missing=`echo "$the_missing ${oldsha1s[$old]}"`
    fi
done
How to add a Help facility to your Bash program
In the third article in this series, learn about using functions as you create a simple Help
facility for your Bash script.
20 Dec 2019 David Both (Correspondent)
In the first article
in this series, you created a very small, one-line Bash script and explored the reasons for
creating shell scripts and why they are the most efficient option for the system administrator,
rather than compiled programs. In the second article , you
began the task of creating a fairly simple template that you can use as a starting point for
other Bash programs, then explored ways to test it.
This third of the four articles in this series explains how to create and use a simple Help
function. While creating your Help facility, you will also learn about using functions and how
to handle command-line options such as -h .
Even fairly simple Bash programs should have some sort of Help facility, even if it is
fairly rudimentary. Many of the Bash shell programs I write are used so infrequently that I
forget the exact syntax of the command I need. Others are so complex that I need to review the
options and arguments even when I use them frequently.
Having a built-in Help function allows you to view those things without having to inspect
the code itself. A good and complete Help facility is also a part of program
documentation.
About functions
Shell functions are lists of Bash program statements that are stored in the shell's
environment and can be executed, like any other command, by typing their name at the command
line. Shell functions may also be known as procedures or subroutines, depending upon which
other programming language you are using.
Functions are called in scripts or from the command-line interface (CLI) by using their
names, just as you would for any other command. In a CLI program or a script, the commands in
the function execute when they are called, then the program flow sequence returns to the
calling entity, and the next series of program statements in that entity executes.
The syntax of a function is:
FunctionName(){program statements}
Explore this by creating a simple function at the CLI. (The function is stored in the shell
environment for the shell instance in which it is created.) You are going to create a function
called hw , which stands for "hello world." Enter the following code at the CLI and press Enter
. Then enter hw as you would any other shell command:
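The original article shows this step as a screenshot; a minimal reconstruction (the greeting text
is my own placeholder) looks like this:
[student@testvm1 ~]$ hw(){ echo "Hi there kiddo"; }
[student@testvm1 ~]$ hw
Hi there kiddo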
OK, so I am a little tired of the standard "Hello world" starter. Now, list all of the
currently defined functions. There are a lot of them, so I am showing just the new hw function.
When it is called from the command line or within a program, a function performs its programmed
task and then exits and returns control to the calling entity, the command line, or the next
Bash program statement in a script after the calling statement:
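Again, the original listing is a screenshot; a reconstruction using declare -f to print just the
one function, followed by a call to it, conveys the same thing:
[student@testvm1 ~]$ declare -f hw
hw ()
{
    echo "Hi there kiddo"
}
[student@testvm1 ~]$ hw
Hi there kiddo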
Remove that function because you do not need it anymore. You can do that with the unset
command:
[student@testvm1 ~]$ unset -f hw ; hw
bash: hw: command not found
[student@testvm1 ~]$
Creating the Help function
Open the hello program in an editor and add the Help function below to the hello program
code after the copyright statement but before the echo "Hello world!" statement. This Help
function will display a short description of the program, a syntax diagram, and short
descriptions of the available options. Add a call to the Help function to test it and some
comment lines that provide a visual demarcation between the functions and the main portion of
the
program:
################################################################################
# Help #
################################################################################
Help ()
{
# Display Help
echo "Add description of the script functions here."
echo
echo "Syntax: scriptTemplate [-g|h|v|V]"
echo "options:"
echo "g Print the GPL license notification."
echo "h Print this Help."
echo "v Verbose mode."
echo "V Print software version and exit."
echo
}
################################################################################
################################################################################
# Main program #
################################################################################
################################################################################
Help
echo "Hello world!"
The options described in this Help function are typical for the programs I write, although
none are in the code yet. Run the program to test it:
[student@testvm1 ~]$ ./hello
Add description of the script functions here.
Syntax: scriptTemplate [-g|h|v|V]
options:
g     Print the GPL license notification.
h     Print this Help.
v     Verbose mode.
V     Print software version and exit.
Hello world!
[student@testvm1 ~]$
Because you have not added any logic to display Help only when you need it, the program will
always display the Help. Since the function is working correctly, read on to add some logic to
display the Help only when the -h option is used when you invoke the program at the command
line.
Handling options
A Bash script's ability to handle command-line options such as -h gives some powerful
capabilities to direct the program and modify what it does. In the case of the -h option, you
want the program to print the Help text to the terminal session and then quit without running
the rest of the program. The ability to process options entered at the command line can be
added to the Bash script using the while command (see How to program with Bash:
Loops to learn more about while ) in conjunction with the getopts and case commands.
The getopts command reads any and all options specified at the command line and creates a
list of those options. In the code below, the while command loops through the list of options
by setting the variable $options for each. The case statement is used to evaluate each option
in turn and execute the statements in the corresponding stanza. The while statement will
continue to evaluate the list of options until they have all been processed or it encounters an
exit statement, which terminates the program.
Be sure to delete the Help function call just before the echo "Hello world!" statement so
that the main body of the program now looks like
this:
################################################################################
################################################################################
# Main program #
################################################################################
################################################################################
################################################################################
# Process the input options. Add options as needed. #
################################################################################
# Get the options
while getopts ":h" option; do
case $option in
h ) # display Help
Help
exit ;;
esac
done
echo "Hello world!"
Notice the double semicolon at the end of the exit statement in the case option for -h .
This is required for each option added to this case statement to delineate the end of each
option.
Testing
Testing is now a little more complex. You need to test your program with a number of
different options -- and no options -- to see how it responds. First, test with no options to
ensure that it prints "Hello world!" as it should:
[student@testvm1 ~]$ ./hello
Hello world!
That works, so now test the logic that displays the Help text:
[student@testvm1 ~]$ ./hello -h
Add description of the script functions here.
Syntax: scriptTemplate [-g|h|t|v|V]
options:
g     Print the GPL license notification.
h     Print this Help.
v     Verbose mode.
V     Print software version and exit.
That works as expected, so try some testing to see what happens when you enter some
unexpected options:
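The test commands themselves are not reproduced in this copy; invocations along these lines
produce the behavior described below (the -lkjsahdf string is just random input):
[student@testvm1 ~]$ ./hello -x
Hello world!
[student@testvm1 ~]$ ./hello -lkjsahdf
Add description of the script functions here.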
Syntax: scriptTemplate [-g|h|t|v|V]
options:
g     Print the GPL license notification.
h     Print this Help.
v     Verbose mode.
V     Print software version and exit.
[student@testvm1 ~]$
The program simply ignores any options it has no specific response for, without generating any
errors. But notice the last entry (with -lkjsahdf for options): because there is an h in that
string of options, the program recognizes it and prints the Help text. This testing has shown
that the program cannot detect incorrect input and terminate itself if any is detected.
You can add another case stanza to the case statement to match any option that doesn't have
an explicit match. This general case will match anything you have not provided a specific match
for. The case statement now looks like this, with the catch-all match of \? as the last case.
Any additional specific cases must precede this final one:
while getopts ":h" option; do
case $option in
h ) # display Help
Help
exit ;;
\? ) # incorrect option
echo "Error: Invalid option"
exit ;;
esac
done
Test the program again using the same options as before and see how it works
now.
Where you are
You have accomplished a good amount in this article by adding the capability to process
command-line options and a Help procedure. Your Bash script now looks like
this:
#!/usr/bin/bash
################################################################################
# scriptTemplate #
# #
# Use this template as the beginning of a new program. Place a short #
# description of the script here. #
# #
# Change History #
# 11/11/2019 David Both Original code. This is a template for creating #
# new Bash shell scripts. #
# Add new history entries as needed. #
# #
# #
################################################################################
################################################################################
################################################################################
# #
# Copyright (C) 2007, 2019 David Both #
# [email protected] #
# #
# This program is free software; you can redistribute it and/or modify #
# it under the terms of the GNU General Public License as published by #
# the Free Software Foundation; either version 2 of the License, or #
# (at your option) any later version. #
# #
# This program is distributed in the hope that it will be useful, #
# but WITHOUT ANY WARRANTY; without even the implied warranty of #
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the #
# GNU General Public License for more details. #
# #
# You should have received a copy of the GNU General Public License #
# along with this program; if not, write to the Free Software #
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA #
# #
################################################################################
################################################################################
################################################################################
################################################################################
# Help #
################################################################################
Help ()
{
# Display Help
echo "Add description of the script functions here."
echo
echo "Syntax: scriptTemplate [-g|h|t|v|V]"
echo "options:"
echo "g Print the GPL license notification."
echo "h Print this Help."
echo "v Verbose mode."
echo "V Print software version and exit."
echo
}
################################################################################
################################################################################
# Main program #
################################################################################
################################################################################
################################################################################
# Process the input options. Add options as needed. #
################################################################################
# Get the options
while getopts ":h" option; do
case $option in
h ) # display Help
Help
exit ;;
\? ) # incorrect option
echo "Error: Invalid option"
exit ;;
esac
done
echo "Hello world!"
Be sure to test this version of the program very thoroughly. Use random inputs and see what
happens. You should also try testing valid and invalid options without using the dash ( - ) in
front.
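For example (my own illustration, not from the original article), an option letter without the
leading dash is treated as a positional argument, so getopts never sees it and the program falls
through to its normal output:
[student@testvm1 ~]$ ./hello h
Hello world!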
Next time
In this article, you added a Help function as well as the ability to process command-line
options to display it selectively. The program is getting a little more complex, so testing is
becoming more important and requires more test paths in order to be complete.
The next article will look at initializing variables and doing a bit of sanity checking to
ensure that the program will run under the correct set of conditions.
Navigating the Bash shell with pushd and popd
Pushd and popd are the fastest navigational commands you've never heard of.
07 Aug 2019 Seth Kenlon (Red Hat)
The pushd and popd commands are built-in features of the Bash shell to help you "bookmark"
directories for quick navigation between locations on your hard drive. You might already feel
that the terminal is an impossibly fast way to navigate your computer; in just a few key
presses, you can go anywhere on your hard drive, attached storage, or network share. But that
speed can break down when you find yourself going back and forth between directories, or when
you get "lost" within your filesystem. Those are precisely the problems pushd and popd can help
you solve.
pushd
At its most basic, pushd is a lot like cd . It takes you from one directory to another.
Assume you have a directory called one , which contains a subdirectory called two , which
contains a subdirectory called three , and so on. If your current working directory is one ,
then you can move to two or three or anywhere with the cd command:
$ pwd
one
$ cd two/three
$ pwd
three
You can do the same with pushd:
$ pwd
one
$ pushd two/three
~/one/two/three ~/one
$ pwd
three
The end result of pushd is the same as cd , but there's an additional intermediate result:
pushd echos your destination directory and your point of origin. This is your directory
stack , and it is what makes pushd unique.
Stacks
A stack, in computer terminology, refers to a collection of elements. In the context of this
command, the elements are directories you have recently visited by using the pushd command. You
can think of it as a history or a breadcrumb trail.
You can move all over your filesystem with pushd ; each time, your previous and new
locations are added to the stack:
$ pushd four
~/one/two/three/four ~/one/two/three ~/one
$ pushd five
~/one/two/three/four/five ~/one/two/three/four ~/one/two/three ~/one
Navigating the stack
Once you've built up a stack, you can use it as a collection of bookmarks or fast-travel
waypoints. For instance, assume that during a session you're doing a lot of work within the
~/one/two/three/four/five directory structure of this example. You know you've been to one
recently, but you can't remember where it's located in your pushd stack. You can view your
stack with the +0 (that's a plus sign followed by a zero) argument, which tells pushd not to
change to any directory in your stack, but also prompts pushd to echo your current stack:
$ pushd +0
~/one/two/three/four ~/one/two/three ~/one ~/one/two/three/four/five
Alternatively, you can view the stack with the dirs command, and you can see the index
number for each directory by using the -v option:
$ dirs -v
0  ~/one/two/three/four
1  ~/one/two/three
2  ~/one
3  ~/one/two/three/four/five
The first entry in your stack is your current location. You can confirm that with pwd as
usual:
$ pwd
~/one/two/three/four
Starting at 0 (your current location and the first entry of your stack), the entry at index 2 in
your stack is ~/one , which is your desired destination. You can move to it using the +2 option:
$ pushd +2
~/one ~/one/two/three/four/five ~/one/two/three/four ~/one/two/three
$ pwd
~/one
This changes your working directory to ~/one and also shifts the stack so that your new
location is at the front.
You can also move backward in your stack. For instance, to quickly get to ~/one/two/three
given the example output, you can move back by one, keeping in mind that pushd starts with
0:
$ pushd -0
~/one/two/three ~/one ~/one/two/three/four/five ~/one/two/three/four
Adding to the stack
You can continue to navigate your stack in this way, and it will remain a static listing of
your recently visited directories. If you want to add a directory, just provide the directory's
path. If a directory is new to the stack, it's added to the list just as you'd expect:
$ pushd /tmp
/tmp ~/one/two/three ~/one ~/one/two/three/four/five ~/one/two/three/four
But if it already exists in the stack, it's added a second time:
$ pushd ~/one
~/one /tmp ~/one/two/three ~/one ~/one/two/three/four/five ~/one/two/three/four
While the stack is often used as a list of directories you want quick access to, it is
really a true history of where you've been. If you don't want a directory added redundantly to
the stack, you must use the +N and -N notation.
Removing directories from the stack
Your stack is, obviously, not immutable. You can add to it with pushd or remove items from
it with popd .
For instance, assume you have just used pushd to add ~/one to your stack, making ~/one your
current working directory. To remove the first (or "zeroeth," if you prefer) element:
$ pwd
~/one
$ popd +0
/tmp ~/one/two/three ~/one ~/one/two/three/four/five ~/one/two/three/four
$ pwd
~/one
Of course, you can remove any element, starting your count at 0:
$ pwd
~/one
$ popd +2
/tmp ~/one/two/three ~/one/two/three/four/five ~/one/two/three/four
$ pwd
~/one
You can also use popd from the back of your stack, again starting with 0. For example, to
remove the final directory from your stack:
$ popd -0
/tmp ~/one/two/three ~/one/two/three/four/five
When used like this, popd does not change your working directory. It only manipulates your
stack.
Navigating with popd
The default behavior of popd , given no arguments, is to remove the first (zeroeth) item
from your stack and make the next item your current working directory.
This is most useful as a quick-change command, when you are, for instance, working in two
different directories and just need to duck away for a moment to some other location. You don't
have to think about your directory stack if you don't need an elaborate history:
$ pwd
~/one
$ pushd ~/one/two/three/four/five
$ popd
$ pwd
~/one
You're also not required to use pushd and popd in rapid succession. If you use pushd to
visit a different location, then get distracted for three hours chasing down a bug or doing
research, you'll find your directory stack patiently waiting (unless you've ended your terminal
session):
$ pwd
~/one
$ pushd /tmp
$ cd {/etc,/var,/usr}; sleep 2001
[ ... ]
$ popd
$ pwd
~/one
Pushd and popd in the real world
The pushd and popd commands are surprisingly useful. Once you learn them, you'll find
excuses to put them to good use, and you'll get familiar with the concept of the directory
stack. Getting comfortable with pushd was what helped me understand git stash , which is
entirely unrelated to pushd but similar in conceptual intangibility.
Using pushd and popd in shell scripts can be tempting, but generally, it's probably best to
avoid them. They aren't portable outside of Bash and Zsh, and they can be obtuse when you're
re-reading a script ( pushd +3 is less clear than cd $HOME/$DIR/$TMP or similar).
Thank you for the write-up for pushd and popd. I gotta remember to use these when I'm
jumping around directories a lot. I got hung up on a pushd example because my development
work using arrays differentiates between the index and the count. In my experience, in a
zero-based array of A, B, C; C has an index of 2 and also is the third element. C would not
be considered the second element, because that would be confusing its index and its count.
Interesting point, Matt. The difference between count and index had not occurred to me,
but I'll try to internalise it. It's a great distinction, so thanks for bringing it up!
It can be, but start out simple: use pushd to change to one directory, and then use popd
to go back to the original. Sort of a single-use bookmark system.
Then, once you're comfortable with pushd and popd, branch out and delve into the
stack.
A tcsh shell I used at an old job didn't have pushd and popd, so I used to have functions
in my .cshrc to mimic just the back-and-forth use.
Thanks for that tip, Jake. I arguably should have included that in the article, but I
wanted to try to stay focused on just the two {push,pop}d commands. Didn't occur to me to
casually mention one use of dirs as you have here, so I've added it for posterity.
There's so much in the Bash man and info pages to talk about!
other_Stu on 11 Aug 2019
I use "pushd ." (dot for current directory) quite often. Like a working directory bookmark
when you are several subdirectories deep somewhere, and need to cd to couple of other places
to do some work or check something.
And you can use the cd command with your DIRSTACK as well, thanks to tilde expansion.
cd ~+3 will take you to the same directory as pushd +3 would.
An introduction to parameter expansion in Bash
Get started with this quick how-to guide on expansion modifiers that transform Bash variables
and other parameters into powerful tools beyond simple value stores.
13 Jun 2017 James Pannacciulli
In Bash, entities that store values are known as parameters. Their values can be strings or
arrays with regular syntax, or they can be integers or associative arrays when special
attributes are set with the declare built-in. There are three types of parameters:
positional parameters, special parameters, and variables.
For the sake of brevity, this article will focus on a few classes of expansion methods
available for string variables, though these methods apply equally to other types of
parameters.
Variable assignment and unadulterated expansion
When assigning a variable, its name must consist solely of alphanumeric and underscore
characters, and it may not begin with a numeral. There may be no spaces around the equal sign;
the name must immediately precede it and the value immediately follow:
$ variable_1="my content"
Storing a value in a variable is only useful if we recall that value later; in Bash,
substituting a parameter reference with its value is called expansion. To expand a parameter,
simply precede the name with the $ character, optionally enclosing the name in
braces:
$ echo $variable_1 ${variable_1}
my content my content
Crucially, as shown in the above example, expansion occurs before the command is called, so
the command never sees the variable name, only the text passed to it as an argument that
resulted from the expansion. Furthermore, parameter expansion occurs before word
splitting; if the result of expansion contains spaces, the expansion should be quoted to
preserve parameter integrity, if desired:
$ printf "%s\n" ${variable_1}
my
content
$ printf "%s\n" "${variable_1}"
my content
Parameter expansion modifiers
Parameter expansion goes well beyond simple interpolation, however. Inside the braces of a
parameter expansion, certain operators, along with their arguments, may be placed after the
name, before the closing brace. These operators may invoke conditional, subset, substring,
substitution, indirection, prefix listing, element counting, and case modification expansion
methods, modifying the result of the expansion. With the exception of the reassignment
operators ( = and := ), these operators only affect the expansion of the
parameter without modifying the parameter's value for subsequent expansions.
About conditional, substring, and substitution parameter expansion operators
Conditional parameter expansion
Conditional parameter expansion allows branching on whether the parameter is unset, empty,
or has content. Based on these conditions, the parameter can be expanded to its value, a
default value, or an alternate value; throw a customizable error; or reassign the parameter to
a default value. The following table shows the conditional parameter expansions -- each row
shows a parameter expansion using an operator to potentially modify the expansion, with the
columns showing the result of that expansion given the parameter's status as indicated in the
column headers. Operators with the ':' prefix treat parameters with empty values as if they
were unset.
parameter expansion    unset var    var=""       var="gnu"
${var-default}         default      --           gnu
${var:-default}        default      default      gnu
${var+alternate}       --           alternate    alternate
${var:+alternate}      --           --           alternate
${var?error}           error        --           gnu
${var:?error}          error        error        gnu
The = and := operators in the table function identically to - and
:- , respectively, except that the = variants rebind the variable to the result
of the expansion.
As an example, let's try opening a user's editor on a file specified by the OUT_FILE
variable. If either the EDITOR environment variable or our OUT_FILE variable is
not specified, we will have a problem. Using a conditional expansion, we can ensure that when
the EDITOR variable is expanded, we get the specified value or at least a sane
default:
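The terminal capture from the original article is not reproduced here; a minimal sketch of the
idea, using the variables named above (the vi fallback is just an assumption), would be:
file=${OUT_FILE:?No output file specified}   # abort with an error if OUT_FILE is unset or empty
${EDITOR:-vi} "$file"                        # fall back to vi if EDITOR is not set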
Substring parameter expansion
Parameters can be expanded to just part of their contents, either by offset or by removing
content matching a pattern. When specifying a substring offset, a length may optionally be
specified. If running Bash version 4.2 or greater, negative numbers may be used as offsets from
the end of the string. Note the parentheses used around the negative offset, which ensure that
Bash does not parse the expansion as having the conditional default expansion operator from
above:
$ location="CA 90095"
$ echo "Zip Code: ${location:3}"
Zip Code: 90095
$ echo "Zip Code: ${location:(-5)}"
Zip Code: 90095
$ echo "State: ${location:0:2}"
State: CA
Another way to take a substring is to remove characters from the string matching a pattern,
either from the left edge with the # and ## operators or from the right edge with
the % and %% operators. A useful mnemonic is that # appears left of a
comment and % appears right of a number. When the operator is doubled, it matches
greedily; the single version removes the smallest set of characters matching the pattern.
var="open source"
parameter expansion     argument                   result
${var:offset}           offset of 5                source
${var:offset:length}    offset of 5, length of 4   sour
${var#pattern}          pattern of *o?             en source
${var##pattern}         pattern of *o?             rce
${var%pattern}          pattern of ?e*             open sour
${var%%pattern}         pattern of ?e*             o
The pattern-matching used is the same as with filename globbing: * matches zero or
more of any character, ? matches exactly one of any character, [...] brackets
introduce a character class match against a single character, supporting negation ( ^ ),
as well as the posix character classes, e.g. [[:alnum:]] . By excising characters from our
string in this manner, we can take a substring without first knowing the offset of the data we
need:
$ echo $PATH
/usr/local/bin:/usr/bin:/bin
$ echo "Lowest priority in PATH: ${PATH##*:}"
Lowest priority in PATH: /bin
$ echo "Everything except lowest priority: ${PATH%:*}"
Everything except lowest priority: /usr/local/bin:/usr/bin
$ echo "Highest priority in PATH: ${PATH%%:*}"
Highest priority in PATH: /usr/local/bin
Substitution in parameter expansion
The same types of patterns are used for substitution in parameter expansion. Substitution is
introduced with the / or // operators, followed by two arguments separated by another
/ representing the pattern and the string to substitute. The pattern matching is always
greedy, so the doubled version of the operator, in this case, causes all matches of the pattern
to be replaced in the variable's expansion, while the singleton version replaces only the
leftmost.
var="free and open"
parameter expansion       pattern of [[:space:]], string of _
${var/pattern/string}     free_and open
${var//pattern/string}    free_and_open
The wealth of parameter expansion modifiers transforms Bash variables and other parameters
into powerful tools beyond simple value stores. At the very least, it is important to
understand how parameter expansion works when reading Bash scripts, but I suspect that not
unlike myself, many of you will enjoy the conciseness and expressiveness that these expansion
modifiers bring to your scripts as well as your interactive sessions.
You probably know that when you press the Up arrow key in Bash, you can see and reuse all
(well, many) of your previous commands. That is because those commands have been saved to a
file called .bash_history in your home directory. That history file comes with a bunch of
settings and commands that can be very useful.
First, you can view your entire recent command history by typing history , or
you can limit it to your last 30 commands by typing history 30 . But that's pretty
vanilla. You have more control over what Bash saves and how it saves it.
For example, if you add the following to your .bashrc, any commands that start with a space
will not be saved to the history list:
HISTCONTROL=ignorespace
This can be useful if you need to pass a password to a command in plaintext. (Yes, that is
horrible, but it still happens.)
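For instance (an illustration of mine, not from the original article), with ignorespace set, a
command entered with a leading space never reaches the history list:
$  export DB_PASSWORD='S3cr3t!'   # note the leading space: this line is not saved to history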
If you don't want a frequently executed command to show up in your history, use:
HISTCONTROL=ignorespace:erasedups
With this, every time you use a command, all its previous occurrences are removed from the
history file, and only the last invocation is saved to your history list.
A history setting I particularly like is the HISTTIMEFORMAT setting. This will
prepend all entries in your history file with a timestamp. For example, I use:
HISTTIMEFORMAT="%F %T "
When I type history 5 , I get nice, complete information, like this:
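The screenshot from the original post is not included in this copy; with HISTTIMEFORMAT set as
above, the output looks roughly like this (the commands and timestamps are made up):
$ history 5
  995  2020-06-14 09:40:58 cd ~/lab
  996  2020-06-14 09:41:20 vim inventory.yml
  997  2020-06-14 09:42:05 ssh -L 8080:localhost:80 homelab
  998  2020-06-14 09:43:40 curl -s http://localhost:8080/health
  999  2020-06-14 09:43:48 history 5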
That makes it a lot easier to browse my command history and find the one I used two days ago
to set up an SSH tunnel to my home lab (which I forget again, and again, and again
).
Best Bash practices
I'll wrap this up with my top 11 list of the best (or good, at least; I don't claim
omniscience) practices when writing Bash scripts.
Bash scripts can become complicated and comments are cheap. If you wonder whether to add
a comment, add a comment. If you return after the weekend and have to spend time figuring
out what you were trying to do last Friday, you forgot to add a comment.
Wrap all your variable names in curly braces, like ${myvariable} . Making
this a habit makes things like ${variable}_suffix possible and improves
consistency throughout your scripts.
Do not use backticks when evaluating an expression; use the $() syntax
instead. So use:
for file in $(ls); do
not
for file in `ls`; do
The former option is nestable, more easily readable, and keeps the general sysadmin
population happy. Do not use backticks.
Consistency is good. Pick one style of doing things and stick with it throughout your
script. Obviously, I would prefer if people picked the $() syntax over backticks
and wrapped their variables in curly braces. I would prefer it if people used two or four
spaces -- not tabs -- to indent, but even if you choose to do it wrong, do it wrong
consistently.
Use the proper shebang for a Bash script. As I'm writing Bash scripts with the intention
of only executing them with Bash, I most often use #!/usr/bin/bash as my
shebang. Do not use #!/bin/sh or #!/usr/bin/sh . Your script will
execute, but it'll run in compatibility mode -- potentially with lots of unintended side
effects. (Unless, of course, compatibility mode is what you want.)
When comparing strings, it's a good idea to quote your variables in if-statements,
because if your variable is empty, Bash will throw an error for lines like these: if [
${myvar} == "foo" ] ; then
echo "bar"
fi And will evaluate to false for a line like this: if [ " ${myvar} " == "foo" ] ; then
echo "bar"
fi Also, if you are unsure about the contents of a variable (e.g., when you are parsing user
input), quote your variables to prevent interpretation of some special characters and make
sure the variable is considered a single word, even if it contains whitespace.
This is a matter of taste, I guess, but I prefer using the double equals sign (
== ) even when comparing strings in Bash. It's a matter of consistency, and even
though -- for string comparisons only -- a single equals sign will work, my mind immediately
goes "single equals is an assignment operator!"
Use proper exit codes. Make sure that if your script fails to do something, you present
the user with a written failure message (preferably with a way to fix the problem) and send a
non-zero exit code: # we have failed
echo "Process has failed to complete, you need to manually restart the whatchamacallit"
exit 1 This makes it easier to programmatically call your script from yet another script and
verify its successful completion.
Use Bash's built-in mechanisms to provide sane defaults for your variables or throw
errors if variables you expect to be defined are not defined: # this sets the value of $myvar
to redhat, and prints 'redhat'
echo ${myvar:=redhat} # this throws an error reading 'The variable myvar is undefined, dear
reader' if $myvar is undefined
${myvar:?The variable myvar is undefined, dear reader}
Especially if you are writing a large script, and especially if you work on that large
script with others, consider using the local keyword when defining variables
inside functions. The local keyword will create a local variable, that is one
that's visible only within that function. This limits the possibility of clashing
variables.
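A minimal sketch (my own illustration) of what local buys you:
count_warnings() {
    local count                          # visible only inside this function
    count=$(grep -c WARNING "$1")
    echo "${count}"
}
count_warnings /var/log/messages         # the caller's scope has no $count variable afterwards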
Every sysadmin must do it sometimes: debug something on a console, either a real one in a
data center or a virtual one through a virtualization platform. If you have to debug a script
that way, you will thank yourself for remembering this: Do not make the lines in your scripts
too long!
On many systems, the default width of a console is still 80 characters. If you need to
debug a script on a console and that script has very long lines, you'll be a sad panda.
Besides, a script with shorter lines -- the default is still 80 characters -- is a lot
easier to read and understand in a normal editor, too!
I truly love Bash. I can spend hours writing about it or exchanging nice tricks with fellow
enthusiasts. Make sure you drop your favorites in the comments!
When you work with computers all day, it's fantastic to find repeatable commands and tag
them for easy use later on. They all sit there, tucked away in ~/.bashrc (or ~/.zshrc for
Zsh users
), waiting to help improve your day!
In this article, I share some of my favorite of these helper commands for things I forget a
lot, in hopes that they will save you, too, some heartache over time.
Say when it's over
When I'm using longer-running commands, I often multitask and then have to go back and check
if the action has completed. But not anymore, with this helpful invocation of say (this is on
MacOS; change it for your local equivalent; a sketch follows below).
This command marks the start and end time of a command, calculates the minutes it takes, and
speaks the command invoked, the time taken, and the exit code. I find this super helpful when a
simple console bell just won't do.
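The author's exact shell function is not preserved in this excerpt; a rough reconstruction of
what is described (the function name and wording are my assumptions) could look like this:
# usage: announce <command> [args...]
announce() {
    local start end rc
    start=$(date +%s)
    "$@"
    rc=$?
    end=$(date +%s)
    say "$1 took $(( (end - start) / 60 )) minutes and exited with code $rc"
    return $rc
}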
... ... ...
There are many Docker commands, but there are even more docker compose commands. I used to
forget the --rm flags, but not anymore with these useful aliases:
alias dc="docker-compose"
alias dcr="docker-compose run --rm"
alias dcb="docker-compose run --rm --build"
gcurl helper for Google Cloud
This one is relatively new to me, but it's heavily
documented . gcurl is an alias to ensure you get all the correct flags when using local
curl commands with authentication headers when working with Google Cloud APIs.
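The alias itself is not quoted in this excerpt; the form documented by Google (treat the exact
headers here as an assumption) is along these lines:
alias gcurl='curl -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json"'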
Git and ~/.gitignore
I work a lot in Git, so I have a special section dedicated to Git helpers.
One of my most useful helpers is one I use to clone GitHub repos. Instead of having to
run:
This morning on Hacker News I was reading this article on how the old Internet has died because
we trusted all our content to Facebook and Google. While hyperbole abounds in the headline, and
there are plenty of internet things out there that aren't owned by Google or Facebook (including
this AWS-free blog), it is true that much of the information and content is in the hands of a
giant ad-serving service and a social echo chamber (well, that is probably too harsh).
I heard this advice many years ago that you should own your own content. While there isn't
much value in my trivial or obscure blog that nobody reads, it matters to me and is the reason
I've run it on my own software, my own servers, for 10+ years. This blog, for example, runs on
open source WordPress, a Linux server hosted by a friend, and managed by me as I login and make
changes.
But of course, that is silly! Why not publish on Medium like everyone else? Or publish on
someone else's service? Isn't that the point of the internet? Maybe. But in another sense, to
me, the point is freedom. Freedom to express, do what I want, say what I will with no
restrictions. The ability to own what I say and freedom from others monetizing me directly.
There's no walled garden and anyone can access the content I write in my own little
funzone.
While that may seem like ridiculousness, to me it's part of my hobby, and something I enjoy.
In the next decade, whether this blog remains up or is shut down is not dependent upon the
fates of Google, Facebook, Amazon, or Apple. It's dependent upon me, whether I want it up or
not. If I change my views, I can delete it. It won't just sit on the Internet because someone
else's terms of service agreement changed. I am in control, I am in charge. That to me is
important and the reason I run this blog, don't use other people's services, and why I advocate
for owning your own content.
I/O reporting from the Linux command line
Learn the iostat tool, its common command-line flags and options, and how to use it to better
understand input/output performance in Linux.
If you have followed my posts here at Enable Sysadmin, you know that I previously worked as a storage support engineer. One of
my many tasks in that role was to help customers replicate backups from their production environments to dedicated backup storage
arrays. Many times, customers would contact me concerned about the speed of the data transfer from production to storage.
Now, if you have ever worked in support, you know that there can be many causes for a symptom. However, the throughput of a system
can have huge implications for massive data transfers. If all is well, we are talking hours, if not... I have seen a single replication
job take months.
We know that Linux is loaded full of helpful tools for all manner of issues. For input/output monitoring, we use the iostat
command. iostat is a part of the sysstat package and is not loaded on all distributions by default.
Installation and base run
I am using Red Hat Enterprise Linux 8 here and have included the install output below.
NOTE: the command runs automatically after installation.
[root@rhel ~]# iostat
bash: iostat: command not found...
Install package 'sysstat' to provide command 'iostat'? [N/y] y
* Waiting in queue...
The following packages have to be installed:
lm_sensors-libs-3.4.0-21.20180522git70f7e08.el8.x86_64 Lm_sensors core libraries
sysstat-11.7.3-2.el8.x86_64 Collection of performance monitoring tools for Linux
Proceed with changes? [N/y] y
* Waiting in queue...
* Waiting for authentication...
* Waiting in queue...
* Downloading packages...
* Requesting data...
* Testing changes...
* Installing packages...
Linux 4.18.0-193.1.2.el8_2.x86_64 (rhel.test) 06/17/2020 _x86_64_ (4 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
2.17 0.05 4.09 0.65 0.00 83.03
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 206.70 8014.01 1411.92 1224862 215798
sdc 0.69 20.39 0.00 3116 0
sdb 0.69 20.39 0.00 3116 0
dm-0 215.54 7917.78 1449.15 1210154 221488
dm-1 0.64 14.52 0.00 2220 0
If you run the base command without options, iostat displays CPU usage information. It also displays I/O stats for
each partition on the system. The output includes totals, as well as per second values for both read and write operations. Also,
note that the tps field is the total number of Transfers per second issued to a specific device.
The practical application is this: if you know what hardware is used, then you know what parameters it should be operating within.
Once you combine this knowledge with the output of iostat , you can make changes to your system accordingly.
Interval runs
It can be useful in troubleshooting or data gathering phases to have a report run at a given interval. To do this, run the command
with the interval (in seconds) at the end:
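For example (the interval and count values here are arbitrary), the first form reports every
five seconds until interrupted, and the second stops after three reports:
[root@rhel ~]# iostat -m 5
[root@rhel ~]# iostat -m 5 3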
-p allows you to specify a particular device to focus in on. You can combine this option with the -m
for a nice and tidy look at a particularly concerning device and its partitions.
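For example (the device name and interval are assumptions; the extended fields listed next come
from the -x flag, whose introduction is not shown in this excerpt):
[root@rhel ~]# iostat -xm -p sda 5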
avgqu-sz - average queue length of a request issued to the device
await - average time for I/O requests issued to the device to be served (milliseconds)
r_await - average time for read requests to be served (milliseconds)
w_await - average time for write requests to be served (milliseconds)
There are other values present, but these are the ones to look out for.
Shutting down
This article covers just about everything you need to get started with iostat . If you have other questions or need
further explanations of options, be sure to check out the man page or your preferred search engine. For other Linux tips and tricks,
keep an eye on Enable Sysadmin!
These commands can tell you what key bindings you have in your bash shell by default.
bind -P | grep 'can be'
stty -a | grep ' = ..;'
Background
I'd always wondered what keystrokes did what in bash – I'd picked up some well-known
ones (CTRL-r, CTRL-v, CTRL-d, etc.) from bugging people when I saw them being used, but always
wondered whether there was a list of these I could easily get and comprehend. I found some, but
always forgot where they were when I needed them, and couldn't remember many of them anyway.
Then, while debugging a problem with tab completion in 'here' documents, I stumbled across
bind.
bind and stty
'bind' is a bash builtin, which means it's not a program like awk or grep, but is picked up
and handled by the bash program itself.
It manages the various key bindings in the bash shell, covering everything from autocomplete
to transposing two characters on the command line. You can read all about it in the bash man
page (in the builtins section, near the end).
Bind is not responsible for all the key bindings in your shell – running stty will
show the ones that apply to the terminal:
stty -a | grep ' = ..;'
These take precedence and can be confusing if you've tried to bind the same thing in your
shell! Further confusion is caused by the fact that in stty output '^D' means 'CTRL and d pressed
together', whereas in bind output it would be '\C-d'.
edit: I am indebted to joepvd from hackernews for this beauty.
It can be considered (almost) equivalent to a more instructive command:
bind -l | sed 's/.*/bind -q &/' | /bin/bash 2>&1 | grep -v warning: | grep can
'bind -l' lists all the available keystroke functions. For example, 'complete' is the
auto-complete function normally triggered by hitting 'tab' twice. The output of this is passed
to a sed command which passes each function name to 'bind -q', which queries the bindings.
sed 's/.*/bind -q &/'
(the & inserts the matched function name after 'bind -q'). The output of this is then piped into
/bin/bash to be run.
/bin/bash 2>&1 | grep -v warning: | grep 'can be'
Note that this invocation of bash means that locally-set bindings will revert to the default
bash ones for the output.
The '2>&1' puts the error output (the warnings) to the same output channel, filtering
out warnings with a 'grep -v' and then filtering on output that describes how to trigger the
function.
In the output of bind -q, '\C-' means 'the ctrl key and', so '\C-c' is the normal CTRL-c.
Similarly, '\e' means 'escape', so '\e\e' means 'press escape twice', which is what triggers
autocomplete:
$ bind -q complete
complete can be invoked via "\C-i", "\e\e".
It is also bound to '\C-i', i.e. TAB (though on my machine I appear to need to press it twice –
not sure why).
Add to bashrc
I added this alias as 'binds' in my bashrc so I could easily get hold of this list in the
future.
alias binds="bind -P | grep 'can be'"
Now whenever I forget a binding, I type 'binds', and have a read :)
The Zinger
Browsing through the bash manual, I noticed that bind has an option for binding a key sequence
to a shell command:
-x keyseq:shell-command
So now all I need to remember is one shortcut to get my list (CTRL-x, then CTRL-o):
bind -x '"\C-x\C-o":bind -P | grep can'
Of course, you can bind to a single key if you want, and any command you want. You could
also use this for practical jokes on your colleagues
Now I'm going to sort through my history to see what I type most often :)
This post is based on material from Docker in Practice, available on Manning's Early Access Program.
I'm often asked in my technical troubleshooting job to solve problems that development teams can't solve. Usually these do not
involve knowledge of API calls or syntax, rather some kind of insight into what the right tool to use is, and why and how to use
it. Probably because they're not taught in college, developers are often unaware that these tools exist, which is a shame, as playing
with them can give a much deeper understanding of what's going on and ultimately lead to better code.
My favourite secret weapon in this path to understanding is strace.
strace (or its equivalents on other Unix systems, truss and dtruss) is a tool that tells you
which operating system (OS) calls your program is making.
An OS call (or just "system call") is your program asking the OS to provide some service for it. Since this covers a lot of the
things that cause problems not directly to do with the domain of your application development (I/O, finding files, permissions etc)
its use has a very high hit rate in resolving problems out of developers' normal problem space.
Usage Patterns
strace is useful in all sorts of contexts. Here's a couple of examples garnered from my experience.
My Netcat Server Won't Start!
Imagine you're trying to start an executable, but it's failing silently (no log file, no output at all). You don't have the source,
and even if you did, the source code is neither readily available, nor ready to compile, nor readily comprehensible.
Simply running through strace will likely give you clues as to what's gone on.
$ nc -l localhost 80
nc: Permission denied
Let's say someone's trying to run this and doesn't understand why it's not working (let's assume manuals are unavailable).
Simply put strace at the front of your command. Note that the following output has been heavily edited for space
reasons (deep breath):
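The trace itself runs to hundreds of lines and is not reproduced in this copy; the invocation is
simply the original command with strace prepended:
$ strace nc -l localhost 80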
To most people that see this flying up their terminal this initially looks like gobbledygook, but it's really quite easy to parse
when a few things are explained.
For each line:
the first entry on the left is the system call being performed
the bit in the parentheses are the arguments to the system call
the right side of the equals sign is the return value of the system call
open("/etc/gai.conf", O_RDONLY) = 3
Therefore for this particular line, the system call is open , the arguments are the string /etc/gai.conf
and the constant O_RDONLY , and the return value was 3 .
How to make sense of this?
Some of these system calls can be guessed or enough can be inferred from context. Most readers will figure out that the above
line is the attempt to open a file with read-only permission.
In the case of the above failure, we can see that before the program calls exit_group, there are
a couple of calls to bind that return "Permission denied".
We might therefore want to understand what "bind" is and why it might be failing.
You need to get a copy of the system call's documentation. On ubuntu and related distributions of linux, the documentation is
in the manpages-dev package, and can be invoked by eg man 2 bind (I just used strace to
determine which file man 2 bind opened and then did a dpkg -S to determine from which package it came!).
You can also look up online if you have access, but if you can auto-install via a package manager you're more likely to get docs
that match your installation.
Right there in my man 2 bind page it says:
ERRORS
EACCES The address is protected, and the user is not the superuser.
So there is the answer – we're trying to bind to a port that can only be bound to if you are the super-user.
My Library Is Not Loading!
Imagine a situation where developer A's perl script is working fine, but developer B's identical
one is not (again, the output has been edited).
In this case, we run the script under strace on the machine where it works, to see how the
library is being found:
We observe that the file is found in what looks like an unusual place.
open("/space/myperllib/blahlib.pm", O_RDONLY) = 4
Inspecting the environment, we see that:
$ env | grep myperl
PERL5LIB=/space/myperllib
So the solution is to set the same env variable before running:
export PERL5LIB=/space/myperllib
Get to know the internals bit by bit
If you do this a lot, or idly run strace on various commands and peruse the output, you can learn all sorts of things
about the internals of your OS. If you're like me, this is a great way to learn how things work. For example, just now I've had a
look at the file /etc/gai.conf , which I'd never come across before writing this.
Once your interest has been piqued, I recommend getting a copy of "Advanced Programming in the Unix Environment" by Stevens &
Rago, and reading it cover to cover. Not all of it will go in, but as you use strace more and more, and (hopefully)
browse C code more and more, your understanding will grow.
Gotchas
If you're running a program that calls other programs, it's important to run with the -f flag, which "follows" child processes
and straces them. -ff creates a separate file with the pid suffixed to the name.
If you're on Solaris, this program doesn't exist – you need to use truss instead.
Many production environments will not have this program installed for security reasons. strace doesn't have many library dependencies
(on my machine it has the same dependencies as 'echo'), so if you have permission, (or are feeling sneaky) you can just copy the
executable up.
Other useful tidbits
You can attach to running processes (can be handy if your program appears to hang or the issue is not readily reproducible) with
-p .
If you're looking at performance issues, then the time flags ( -t , -tt , -ttt , and
-T ) can help significantly.
A failed access or open system call is not usually an error in the context of launching a program. Generally it is merely checking
if a config file exists.
exit takes only integer args in the range 0 - 255 (see first footnote).

Exit Code   Meaning                          Example                   Comments
128+n       Fatal error signal "n"           kill -9 $PPID of script   $? returns 137 (128 + 9)
130         Script terminated by Control-C   Ctl-C                     Control-C is fatal error signal 2 (130 = 128 + 2, see above)
255*        Exit status out of range         exit -1                   exit takes only integer args in the range 0 - 255
According to the above table, exit codes 1 - 2, 126 - 165, and 255 [1] have special meanings,
and should therefore be avoided for user-specified exit parameters. Ending a script with
exit 127 would certainly cause confusion when troubleshooting (is the error code a
"command not found" or a user-defined one?). However, many scripts use an exit 1 as a
general bailout-upon-error. Since exit code 1 signifies so many possible errors, it is not
particularly useful in debugging.
There has been an attempt to systematize exit status numbers (see
/usr/include/sysexits.h ), but this is intended for C and C++ programmers. A similar
standard for scripting might be appropriate. The author of this document proposes restricting
user-defined exit codes to the range 64 - 113 (in addition to 0 , for success), to conform with
the C/C++ standard. This would allot 50 valid codes, and make troubleshooting scripts more
straightforward. [2] All user-defined exit
codes in the accompanying examples to this document conform to this standard, except where
overriding circumstances exist, as in Example 9-2 .
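A minimal sketch of that convention (the code value and message below are illustrative, not from the accompanying examples):
#!/bin/bash
E_WRONG_ARGS=85     # user-defined error code, kept in the proposed 64 - 113 range
if [ $# -ne 1 ]; then
  echo "Usage: $0 filename" >&2
  exit $E_WRONG_ARGS
fi
exit 0              # 0 still means success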
Issuing a $? from the
command-line after a shell script exits gives results consistent with the table above only
from the Bash or sh prompt. Running the C-shell or tcsh may give
different values in some cases.
Out of range exit values can result in unexpected
exit codes. An exit value greater than 255 returns an exit code modulo 256 . For example, exit
3809 gives an exit code of 225 (3809 % 256 = 225).
An update of /usr/include/sysexits.h
allocates previously unused exit codes from 64 - 78 . It may be anticipated that the range
of unallotted exit codes will be further restricted in the future. The author of this
document will not do fixups on the scripting examples to conform to the changing
standard. This should not cause any problems, since there is no overlap or conflict in
usage of exit codes between compiled C/C++ binaries and shell scripts.
From bash manual: The exit status of an executed command is the value returned by the waitpid system
call or equivalent function. Exit statuses fall between 0 and 255, though, as explained below, the shell may use values above 125
specially. Exit statuses from shell builtins and compound commands are also limited to this range. Under certain circumstances,
the shell will use special values to indicate specific failure modes.
For the shell’s purposes, a command which exits with a zero exit status has succeeded. A non-zero exit status indicates failure.
This seemingly counter-intuitive scheme is used so there is one well-defined way to indicate success and a variety of ways to
indicate various failure modes. When a command terminates on a fatal signal whose number is N,
Bash uses the value 128+N as the exit status.
If a command is not found, the child process created to execute it returns a status of 127. If a command is found but is not
executable, the return status is 126.
If a command fails because of an error during expansion or redirection, the exit status is greater than zero.
The exit status is used by the Bash conditional commands (see Conditional
Constructs) and some of the list constructs (see Lists).
All of the Bash builtins return an exit status of zero if they succeed and a non-zero status on failure, so they may be used by
the conditional and list constructs. All builtins return an exit status of 2 to indicate incorrect usage, generally invalid
options or missing arguments.
Not everyone knows that every time you run a shell command in bash, an 'exit code' is
returned to bash.
Generally, if a command 'succeeds' you get an exit code of 0. If it doesn't
succeed, you get a non-zero code.
1 is a 'general error', and other codes can give you more information (e.g. which
signal killed the process). 255 is the upper limit and generally indicates an out-of-range or internal error.
grep joeuser /etc/passwd # in case of success returns 0, otherwise 1
or
grep not_there /dev/null
echo $?
$? is a special bash variable that's set to the exit code of each command after
it runs.
Grep uses exit codes to indicate whether it matched or not. I have to look up every time
which way round it goes: does finding a match or not return 0 ?
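For the record, finding a match returns 0, no match returns 1, and 2 signals an error (such as an unreadable file). Easy to check:
$ grep -q root /etc/passwd ; echo $?        # 0 - a match was found
$ grep -q not_there /etc/passwd ; echo $?   # 1 - no match
$ grep -q root /no/such/file ; echo $?      # 2 - an error occurred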
Bash functions, unlike functions in most programming languages, do not allow you to return a
value to the caller. When a bash function ends, its return value is its status: zero for
success, non-zero for failure. To return values, you can set a global variable with the result,
or use command substitution, or you can pass in the name of a variable to use as the result
variable. The examples below describe these different mechanisms.
Although bash has a return statement, the only thing you can specify with it is the
function's status, which is a numeric value like the value specified in an exit
statement. The status value is stored in the $? variable. If a function does not
contain a return statement, its status is set based on the status of the last
statement executed in the function. To actually return arbitrary values to the caller you must
use other mechanisms.
The simplest way to return a value from a bash function is to just set a global variable to
the result. Since all variables in bash are global by default this is easy:
function myfunc()
{
myresult='some value'
}
myfunc
echo $myresult
The code above sets the global variable myresult to the function result. Reasonably
simple, but as we all know, using global variables, particularly in large programs, can lead to
difficult to find bugs.
A better approach is to use local variables in your functions. The problem then becomes how
do you get the result to the caller. One mechanism is to use command substitution:
function myfunc()
{
local myresult='some value'
echo "$myresult"
}
result=$(myfunc) # or result=`myfunc`
echo $result
Here the result is output to the stdout and the caller uses command substitution to capture
the value in a variable. The variable can then be used as needed.
The other way to return a value is to write your function so that it accepts a variable name
as part of its command line and then set that variable to the result of the function:
function myfunc()
{
local __resultvar=$1
local myresult='some value'
eval $__resultvar="'$myresult'"
}
myfunc result
echo $result
Since we have the name of the variable to set stored in a variable, we can't set the
variable directly, we have to use eval to actually do the setting. The eval
statement basically tells bash to interpret the line twice, the first interpretation above
results in the string result='some value' which is then interpreted once more and ends
up setting the caller's variable.
When you store the name of the variable passed on the command line, make sure you store it
in a local variable with a name that won't be (unlikely to be) used by the caller (which is why
I used __resultvar rather than just resultvar ). If you don't, and the caller
happens to choose the same name for their result variable as you use for storing the name, the
result variable will not get set. For example, the following does not work:
function myfunc()
{
local result=$1
local myresult='some value'
eval $result="'$myresult'"
}
myfunc result
echo $result
The reason it doesn't work is because when eval does the second interpretation and
evaluates result='some value' , result is now a local variable in the
function, and so it gets set rather than setting the caller's result variable.
For more flexibility, you may want to write your functions so that they combine both result
variables and command substitution:
function myfunc()
{
local __resultvar=$1
local myresult='some value'
if [[ "$__resultvar" ]]; then
eval $__resultvar="'$myresult'"
else
echo "$myresult"
fi
}
myfunc result
echo $result
result2=$(myfunc)
echo $result2
Here, if no variable name is passed to the function, the value is output to the standard
output.
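One more option the article does not cover: bash 4.3 and later support namerefs, which avoid the eval entirely. A minimal sketch, assuming a recent enough bash:
function myfunc()
{
    local -n __resultref=$1    # nameref: __resultref becomes an alias for the caller's variable
    __resultref='some value'
}
myfunc result
echo $result                   # prints 'some value'
The same naming caveat applies: if the caller happens to pass a variable named __resultref, bash will complain about a circular reference.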
Mitch Frazier is an embedded systems programmer at Emerson Electric Co. Mitch has been a contributor to and a friend
of Linux Journal since the early 2000s.
The only real issue I see with returning via echo is that forking the process means no
longer allowing it access to set 'global' variables. They are still global in the sense that
you can retrieve them and set them within the new forked process, but as soon as that process
is done, you will not see any of those changes.
e.g.
#!/bin/bash
myGlobal="very global"
call1() {
myGlobal="not so global"
echo "${myGlobal}"
}
tmp=$(call1) # keep in mind '$()' starts a new process
echo "${tmp}" # prints "not so global"
echo "${myGlobal}" # prints "very global"
I would caution against returning integers with "return $int". My code was working fine
until it came across a -2 (negative two) and treated it as if it were 254, which tells me
that bash functions return 8-bit unsigned ints that are not protected from overflow.
A function behaves as any other Bash command, and indeed POSIX processes. That is, they
can write to stdout, read from stdin and have a return code. The return code is, as you have
already noticed, a value between 0 and 255. By convention 0 means success while any other
return code means failure.
This is also why Bash "if" statements treat 0 as success and non-zero as failure (most
other programming languages do the opposite).
Readline is one of those technologies that is so commonly used many users don't realise it's there.
I went looking for a good primer on it so I could understand it better, but failed to find one. This is an attempt to write a
primer that may help users get to grips with it, based on what I've managed to glean as I've tried to research and experiment with
it over the years.
Bash Without Readline
First you're going to see what bash looks like without readline.
In your 'normal' bash shell, hit the TAB key twice. You should see something like this:
Display all 2335 possibilities? (y or n)
That's because bash normally has an 'autocomplete' function that allows you to see what commands are available to you if you tap
tab twice.
Hit n to get out of that autocomplete.
Another useful function that's commonly used is that if you hit the up arrow key a few times, then the previously-run commands
should be brought back to the command line.
Now type:
$ bash --noediting
The --noediting flag starts up bash without the readline library enabled.
If you hit TAB twice now you will see something different: the shell no longer 'sees' your tab and just sends a tab
direct to the screen, moving your cursor along. Autocomplete has gone.
Autocomplete is just one of the things that the readline library gives you in the terminal. You might want to try hitting the
up or down arrows as you did above to see that that no longer works as well.
Hit return to get a fresh command line, and exit your non-readline-enabled bash shell:
$ exit
Other Shortcuts
There are a great many shortcuts like autocomplete available to you if readline is enabled. I'll quickly outline four of the most
commonly-used of these before explaining how you can find out more.
Type this command:
$ echo 'some command'
There should not be many surprises there. Now if you hit the 'up' arrow, you will see you can get the last command back on your
line. If you like, you can re-run the command, but there are other things you can do with readline before you hit return.
If you hold down the ctrl key and then hit a at the same time your cursor will return to the start of
the line. Another way of representing this 'multi-key' way of inputting is to write it like this: \C-a . This is one
conventional way to represent this kind of input. The \C represents the control key, and the -a represents
that the a key is depressed at the same time.
Now if you hit \C-e ( ctrl and e ) then your cursor has moved to the end of the line. I
use these two dozens of times a day.
Another frequently useful one is \C-l , which clears the screen, but leaves your command line intact.
The last one I'll show you allows you to search your history to find matching commands while you type. Hit \C-r ,
and then type ec . You should see the echo command you just ran like this:
(reverse-i-search)`ec': echo echo
Then do it again, but keep hitting \C-r over and over. You should see all the commands that have `ec` in them that
you've input before (if you've only got one echo command in your history then you will only see one). As you see them
you are placed at that point in your history and you can move up and down from there or just hit return to re-run if you want.
There are many more shortcuts that readline gives you. Next I'll show you how to view these.
Using `bind` to Show Readline Shortcuts
If you type:
$ bind -p
You will see a list of bindings that readline is capable of. There's a lot of them!
Have a read through if you're interested, but don't worry about understanding them all yet.
If you type:
$ bind -p | grep C-a
you'll pick out the 'beginning-of-line' binding you used before, and see the \C-a notation I showed you before.
As an exercise at this point, you might want to look for the \C-e and \C-r bindings we used previously.
If you want to look through the entirety of the bind -p output, then you will want to know that \M refers
to the Meta key (which you might also know as the Alt key), and \e refers to the Esc
key on your keyboard. The 'escape' key bindings are different in that you don't hit it and another key at the same time, rather you
hit it, and then hit another key afterwards. So, for example, typing the Esc key, and then the ? key also
tries to auto-complete the command you are typing. This is documented as:
"\e?": possible-completions
in the bind -p output.
Readline and Terminal Options
If you've looked over the possibilities that readline offers you, you might have seen the \C-r binding we looked
at earlier:
"\C-r": reverse-search-history
You might also have seen that there is another binding that allows you to search forward through your history too:
"\C-s": forward-search-history
What often happens to me is that I hit \C-r over and over again, and then go too fast through the history and fly
past the command I was looking for. In these cases I might try to hit \C-s to search forward and get to the one I missed.
Watch out though! Hitting \C-s to search forward through the history might well not work for you.
Why is this, if the binding is there and readline is switched on?
It's because something picked up the \C-s before it got to the readline library: the terminal settings.
The terminal program you are running in may have standard settings that do other things on hitting some of these shortcuts before
readline gets to see it.
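The stty listing referred to below did not make it into this excerpt; a BSD/macOS-style stty -e listing (illustrative; the exact values vary by terminal) looks something like this:
$ stty -e
speed 9600 baud; 47 rows; 202 columns;
[...]
discard dsusp   eof     eol     eol2    erase   intr    kill    lnext
^O      ^Y      ^D      <undef> <undef> ^?      ^C      ^U      ^V
min     quit    reprint start   status  stop    susp    time    werase
1       ^\      ^R      ^Q      ^T      ^S      ^Z      0       ^W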
You can see on the last four lines ( discard dsusp [...] ) there is a table of key bindings that your terminal will
pick up before readline sees them. The ^ character (known as the 'caret') here represents the ctrl key
that we previously represented with a \C .
If you think this is confusing I won't disagree. Unfortunately in the history of Unix and Linux documenters did not stick to one
way of describing these key combinations.
If you encounter a problem where the terminal options seem to catch a shortcut key binding before it gets to readline, then you
can use the stty program to unset that binding. In this case, we want to unset the 'stop' binding.
If you are in the same situation, type:
$ stty stop undef
Now, if you re-run stty -e , the last two lines might look like this:
[...]
min quit reprint start status stop susp time werase
1 ^\ ^R ^Q ^T <undef> ^Z 0 ^W
where the stop entry now has <undef> underneath it.
Strangely, for me C-r is also bound to 'reprint' above ( ^R ).
But (on my terminals at least) that gets to readline without issue as I search up the history. Why this is the case I haven't
been able to figure out. I suspect that reprint is ignored by modern terminals that don't need to 'reprint' the current line.
\C-d sends an 'end of file' character. It's often used to indicate to a program that input is over. If you type it
on a bash shell, the bash shell you are in will close.
Finally, \C-w deletes the word before the cursor.
These are the most commonly-used shortcuts that are picked up by the terminal before they get to the readline library.
You might want to check out the 'rlwrap' program. It allows you to have readline behavior on programs that don't natively support
readline, but which have a 'type in a command' type interface. For instance, we use Oracle here (alas :-) ) and the 'sqlplus'
program, that lets you type SQL commands to an Oracle instance does not have anything like readline built into it, so you can't
go back to edit previous commands. But running 'rlwrap sqlplus' gives me readline behavior in sqlplus! It's fantastic to have.
I was told to use this in a class, and I didn't understand what I did. One rabbit hole later, I was shocked and amazed at how
advanced the readline library is. One thing I'd like to add is that you can write a '~/.inputrc' file and have those readline
commands sourced at startup!
I do not know exactly when or how the inputrc is read.
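For what it's worth, readline reads ~/.inputrc when an interactive shell starts, and you can re-read it in a running bash with bind -f ~/.inputrc. A small sketch (the bindings chosen here are just examples):
# ~/.inputrc
set completion-ignore-case on       # case-insensitive tab completion
"\e[A": history-search-backward     # Up arrow searches history using what you've already typed
"\e[B": history-search-forward      # Down arrow searches forward the same way
"\C-x\C-r": re-read-init-file       # Ctrl-x Ctrl-r reloads this file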
This blog post is the second of two
covering some practical tips and tricks to get the most out of the Bash shell. In
part
one
, I covered history, last argument, working with files and directories, reading files, and Bash functions.
In this segment, I cover shell variables, find, file descriptors, and remote operations.
Use shell variables
The Bash variables are set by the shell when invoked. Why would I use hostname when I can use $HOSTNAME, or why would I use whoami when I can use $USER? Bash variables are very fast and do not require external applications.
These are a few frequently-used variables:
$PATH
$HOME
$USER
$HOSTNAME
$PS1
..
$PS4
Use the echo command to expand variables. For example, the $PATH shell variable can be expanded by running:
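The command itself was dropped from this excerpt; it is presumably just (output will vary):
$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin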
The find command is probably one of the most used tools within the Linux operating system. It is extremely useful in interactive shells. It is also used in scripts. With find I can list files older or newer than a specific date, delete them based on that date, change permissions of files or directories, and so on.
While the above commands will delete files older than 30 days, as written, they fork the rm command each time they find a file. This search can be written more efficiently by using xargs:
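The find commands referenced above are not reproduced here; a typical pair, assuming the goal is deleting *.log files older than 30 days under an illustrative /var/log/app directory:
$ find /var/log/app -name '*.log' -type f -mtime +30 -exec rm {} \;   # forks rm once per file found
$ find /var/log/app -name '*.log' -type f -mtime +30 | xargs rm       # batches file names into far fewer rm calls
If the paths may contain spaces, prefer find ... -print0 | xargs -0 rm, or simply find's own -delete action.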
In the Bash shell, file descriptors (FDs)
are important in managing the input and output of commands. Many people have issues understanding file descriptors
correctly. Each process has three default file descriptors, namely:
Code   Meaning           Location      Description
0      Standard input    /dev/stdin    Keyboard, file, or some stream
1      Standard output   /dev/stdout   Monitor, terminal, display
2      Standard error    /dev/stderr   Error messages (commands that fail usually write to FD 2); displayed on the terminal
Now that you know what the default FDs do, let's see them in action. I start by creating a directory named foo, which contains file1.
$> ls foo/ bar/
ls: cannot access 'bar/': No such file or directory
foo/:
file1
The output No such file or directory goes to Standard Error (stderr) and is also displayed on the screen. I will run the same command, but this time use 2> to discard stderr:
$> ls foo/ bar/ 2>/dev/null
foo/:
file1
It is possible to send the output of foo to Standard Output (stdout) and to a file simultaneously, and ignore stderr. For example:
$> { ls foo bar | tee -a ls_out_file ;} 2>/dev/null
foo:
file1
Then:
$> cat ls_out_file
foo:
file1
The following command sends stdout to a file and stderr to /dev/null so that the error won't display on the screen:
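The command itself is missing from this excerpt; a minimal example in the same spirit (the file name is illustrative):
$> ls foo/ bar/ > ls_out_file 2>/dev/null
$> cat ls_out_file
foo/:
file1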
echo "Hello World" Go to file because FD 1 now points to the file
exec 1>&3 Copy FD 3 back to 1 (swap)
Three>&- Close file descriptor three (we don't need it
anymore)
Often it is handy to group commands and then send their Standard Error to a single file. For example:
$> { ls non_existing_dir; non_existing_command; echo "Hello world"; } 2> to_stderr
Hello world
As you can see, only "Hello world" is
printed on the screen, but the output of the failed commands is written to the to_stderr file.
Execute remote operations
I use Telnet, netcat, Nmap, and other
tools to test whether a remote service is up and whether I can connect to it. These tools are handy, but they
aren't installed by default on all systems.
Fortunately, there is a simple way to test
a connection without using external tools. To see if a remote server is running a web, database, SSH, or any other
service, run:
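The exact command is not shown in this excerpt; one way to do it with bash's built-in /dev/tcp files (host and port here are illustrative) is:
$ timeout 1 bash -c '</dev/tcp/serverA/22' && echo "Port open" || echo "Failed to connect"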
If the connection fails, the
Failed
to connect
message is displayed on your screen.
Assume
serverA
is
behind a firewall/NAT. I want to see if the firewall is configured to allow a database connection to
serverA
,
but I haven't installed a database server yet. To emulate a database port (or any other port), I can use the
following:
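The original command is not reproduced here; if nc happens to be available on serverA, the simplest way to emulate a listening database port is (use nc -l -p 3306 with traditional netcat):
[serverA]$ nc -l 3306
and then test the connection from the far side of the firewall as shown above.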
There are many other complex actions I can
perform on the remote host.
Wrap up
There is certainly more to Bash than I was
able to cover in this two-part blog post. I am sharing what I know and what I deal with daily. The idea is to
familiarize you with a few techniques that could make your work less error-prone and more fun.
Valentin Bajrami
Valentin is a system engineer with more than six years
of experience in networking, storage, high-performing clusters, and automation. He is involved in different open source
projects like bash, Fedora, Ceph, FreeBSD and is a member of Red Hat Accelerators.
There's a great quote from Andy Grove, founder of Intel about training:
Training is the manager's job. Training is the highest leverage activity a
manager can do to increase the output of an organization. If a manager spends 12 hours
preparing training for 10 team members that increases their output by 1% on average, the
result is 200 hours of increased output from the 10 employees (each works about 2000 hours
a year). Don't leave training to outsiders, do it yourself.
And training isn't just about being in a room and explaining things to people – it's
about getting in the field and showing people how to respond to problems, how to think about
things, and where they need to go next. The point is: take ownership of it.
I personally trained people in things like Git and Docker and basic programming whenever I
got the chance to. This can demystify these skills and empower your staff to go further. It
also sends a message about what's important – if the boss spends time on triage, training
and hiring, then they must be important.
I already talked about this in the
previous post , but every subsequent attempt I made to get a practice of writing runbooks
going was hard going. No-one ever argues with the logic of efficiency and saved time, but when
it comes to putting the barn up, pretty much everyone is too busy with something else to
help.
Looking at the history of these kinds of efforts,
it seems that people need to be forced – against their own natures – into following
these best practices that invest current effort for future operational benefit.
Boeing and checklists ("planes are falling from the sky – no matter how good the
pilots!")
Construction and standard project plans ("falling buildings are unacceptable, we need a
set of build patterns to follow and standards to enforce")
Medicine and 'pre-flight checklists' ("we're getting sued every time a surgeon makes a
mistake, how can we reduce these?")
In the case of my previous post, it was frustration for me at being on-call that led me to
spend months writing up runbooks. The main motivation that kept me going was that it would be
(as a minimal positive outcome) for my own benefit . This intrinsic motivation got the
ball rolling, and the effort was then sustained and developed by the other three more
process-oriented factors.
There's a commonly-seen pattern here:
you need some kind of spontaneous intrinsic motivation to get something going and
snowball, and then
a bureaucratic machine behind it to sustain it
If you crack how to do that reliably, then you're going to be pretty good at building
businesses.
It Doesn't Always Help
That wasn't the only experience I had trying to spread what I thought was good practice. In
other contexts, I learned, the application of these methods was unhelpful.
In my next job, I worked on a new and centralised fast-changing system in a large org, and
tried to write helpful docs to avoid repeating solving the same issues over and over. Aside
from the authority and 'critical mass' problems outlined above, I hit a further one: the system
was changing too fast for the learnings to be that useful. Bugs were being fixed quickly
(putting my docs out of date similarly quickly) and new functionality was being added, leading
to substantial wasted effort and reduced benefit.
Discussing this with a friend, I was pointed at a framework that already existed called
Cynefin that had
already thought about classifying these differences of context, and what was an appropriate
response to them. Through that lens, my mistake had been to try and impose what might be best
practice in a 'Complicated'/'Clear' context to a context that was 'Chaotic'/'Complex'.
'Chaotic' situations are too novel or under-explored to be susceptible to standard processes.
Fast action and equally fast evaluation of system response is required to build up practical
experience and prepare the way for later stabilisation.
'Why Don't You Just Automate
It?'
I get this a lot. It's an argument that gets my goat, for several reasons.
Runbooks
are a useful first step to an automated solution
If a runbook is mature and covers its ground well, it serves as an almost perfect design
document for any subsequent automation solution. So it's in itself a useful precursor to
automation for any non-trivial problem.
Automation is difficult and expensive
It is never free. It requires maintenance. There are always corner cases that you may not
have considered. It's much easier to write: 'go upstairs' than build a robot that climbs stairs .
Automation
tends to be context-specific
If you have a wide-ranging set of contexts for your problem space, then a runbook, paired with a human mind, provides
the flexibility to be applied in any of these contexts. For example:
your shell script solution will need to reliably cater for all these contexts to be useful; not
every org can use your Ansible recipe; not every network can access the
internet.
All my thoughts on this subject so far have been predicated on writing proprietary runbooks
that are consumed and maintained within an organisation.
What I never considered was gaining the critical mass needed by open sourcing runbooks, and
asking others to donate theirs so we can all benefit from each others' experiences.
So we at
Container Solutions have decided to open source the
runbooks we have built up that are generally applicable to the community. They are growing
all the time, and we will continue to add to them.
Call for Runbooks
We can't do this alone, so are asking for your help!
If you have any runbooks that you can donate to the cause lying around in your wikis,
please send them in
If you want to write a new runbook, let us know
If you want to request a runbook on a particular subject, suggest it
> We ended up embedding these dashboards within Confluence runbooks/playbooks
followed by diagnosing/triaging, resolving, and escalation information. We also
ended up associating these runbooks/playbooks with the alerts and had the links
outputted into the operational chat along with the alert in question so people
could easily follow it back.
When I used to work for Amazon, as a developer, I was required to write a
playbook for every microservice I developed. The playbook had to be so detailed
that, in theory, any site reliability engineer, who has no knowledge of the service
should be able to read the playbook and perform the following activities:
- Understand what the service does.
- Learn all the curl commands to run to test each service component in isolation
and see which ones are not behaving as expected.
- Learn how to connect to the actual physical/virtual/cloud systems that keep
the service running.
- Learn which log files to check for evidence of problems.
- Learn which configuration files to edit.
- Learn how to restart the service.
- Learn how to rollback the service to an earlier known good version.
- Learn resolution to common issues seen earlier.
- Perform a checklist of activities to be performed to ensure all components are
in good health.
- Find out which development team of ours to page if the issue remains
unresolved.
It took a lot of documentation and excellent organization of such documentation
to keep the services up and running.
twic on Feb 26,
2018 [–]
A far-out old employer of mine decided that their standard format for alerts, sent
by applications to the central monitoring system, would include a field for a URL
pointing to some relevant documentation.
I think this was mostly pushed through by sysadmins annoyed at getting alerts
from new applications that didn't mean anything to them.
peterwwillis on Feb 26, 2018
[–]
When you get an alert, you have to first understand the alert, and then you have to
figure out what to do about it. The majority of alerts, when people don't craft
them according to a standard/policy, look like this:
Subject: Disk usage high
Priority: High
Message:
There is a problem in cluster ABC.
Disk utilization above 90%.
Host 1.2.3.4.
It's a pain in the ass to go figure out what is actually affected, why it's happening, and
track down some kind of runbook that describes how to fix this specific case (because it may vary
from customer to customer, not to mention project to project). This is usually the state of alerts
until a single person (who isn't a manager; managers hate cleaning up inefficiencies) gets so sick
and tired of it that they take the weekend to overhaul one alert at a time to provide better
insight as to what is going on and how to fix it. Any attempt to improve docs for those alerts are
never updated by anyone but this lone individual.
Providing a link to a runbook makes resolving issues a lot faster. It's even
better if the link is to a Wiki page, so you can edit it if the runbook isn't up to
date.
Basically you are saying you were required to be really diligent about the
playbooks and put effort in to get them right.
Did people really put that effort in? Was it worth it? If so, what elements of
the culture/organisation/process made people do the right thing when it is so much
easier for busy people to get sloppy?
Regarding the question about culture, yes, busy people often get sloppy. But
when a P1 alert comes because a site reliability engineer could not resolve the
issue by following the playbook, it looks bad on the team and a lot of questions
are asked by all affected stakeholders (when a service goes down in Amazon it may
affect multiple other teams) about why the playbook was deficient. Nobody wants to
be in a situation like this. In fact, no developer wants to be woken up at 2 a.m.
because a service went down and the issue could not be fixed by the on-call SRE. So
it is in their interest to write good and detailed playbooks.
zwischenzug on Feb 26, 2018
[–]
That sounds like a great process there. It staggers me how much people a)
underestimate the investment required to maintain that kind of documentation, and b)
underestimate how much value it brings. It's like brushing your teeth.
peterwwillis on Feb 26, 2018
[–]
> It's far more important to have a ticketing system that functions reliably and
supports your processes than the other way round.
The most efficient ticketing systems I have ever seen were heavily customized
in-house. When they moved to a completely different product, productivity in
addressing tickets plummeted. They stopped generating tickets to deal with it.
> After process, documentation is the most important thing, and the two are
intimately related.
If you have two people who are constantly on call to address issues because
nobody else knows how to deal with it, you are a victim of a lack of documentation.
Even a monkey can repair a space shuttle if they have a good manual.
I partly rely on incident reports and issues as part of my documentation.
Sometimes you will get an issue like "disk filling up", and maybe someone will
troubleshoot it and resolve it with a summary comment of "cleaned up free space in
X process". Instead of making that the end of it, create a new issue which
describes the problem and steps to resolve in detail. Update the issue over time as
necessary. Add a tag to the issue called 'runbook'. Then mark related issues as
duplicates of this one issue. It's kind of horrible, but it seamlessly integrates
runbooks with your issue tracking.
I would like to point out that the dependency chain for repairing the space
shuttle (or worse: microservices) can turn the need for understanding (or
authoring) one document into understanding 12+ documents, or run the risk of making
a document into a "wall of text," copy-paste hell, and/or out-of-date.
Capturing the contextual knowledge required to make an administration task
straight-forward can easily turn the forest into the trees.
I would almost rather automate the troubleshooting steps than have to write
sufficiently specific English to express what one should do in given situations,
with the caveat that such automation takes longer to write than said
documentation.
zwischenzug on Feb 26, 2018
[–]
Yeah, that's exactly what we found - we created a JIRA project called 'DOCS', which
made search trivial:
'docs disk filling up'
tabtab on
Feb 26, 2018 [–]
It's pretty much organizing 101: study situation, plan, track well, document well but
in a practical sense (write docs that people will actually read), get feedback from
everybody, learn from your mistakes, admit your mistakes, and make the system and
process better going forward.
stareatgoats on Feb 26, 2018
[–]
I may be out of touch with current affairs, but I don't think I've encountered a
single workplace where documentation has worked. Sometimes because people
were only hired to put out fires, sometimes because there was no sufficiently
customized ticketing system, sometimes because they simply didn't know how to
abstract their tasks into well written documents.
And in many cases because people thought they might be out of a job if they put
their solutions in print. I'm guessing managers still need to counter those
tendencies actively if they want documentation to happen. Plenty of good pointers
in this article, I found.
This is the kicker - and the rarity. I don't think it's all trust, though. When
your boss already knows going into Q1 that he's going to be fired in Q2 if he
doesn't get 10 specific (and myopically short-term) agenda items addressed, it
doesn't matter how much he trusts you, you're going to be focusing on only the
things that have the appearance of ROI after a few hours of work, no matter how
inefficient they are in the long term.
The following will redirect program error messages to a file called error.log: $ program-name 2> error.log
$ command1 2> error.log
For example, use the grep command for
recursive search in the $HOME directory and redirect all errors (stderr) to a file name
grep-errors.txt as follows: $ grep -R 'MASTER' $HOME 2> /tmp/grep-errors.txt
$ cat /tmp/grep-errors.txt
Sample outputs:
grep: /home/vivek/.config/google-chrome/SingletonSocket: No such device or address
grep: /home/vivek/.config/google-chrome/SingletonCookie: No such file or directory
grep: /home/vivek/.config/google-chrome/SingletonLock: No such file or directory
grep: /home/vivek/.byobu/.ssh-agent: No such device or address
Redirecting the standard error (stderr) and stdout to file
Use the following syntax: $ command-name &>file
We can also use the following syntax: $ command > file-name 2>&1
We can write both stderr and stdout to two different files too. Let us try out our previous
grep command example: $ grep -R 'MASTER' $HOME 2> /tmp/grep-errors.txt 1> /tmp/grep-outputs.txt
$ cat /tmp/grep-outputs.txt
Redirecting stderr to stdout to a file or another
command
Here is another useful example where both stderr and stdout sent to the more command instead
of a file: # find /usr/home -name .profile 2>&1 | more
Redirect stderr to
stdout
Use the command as follows: $ command-name 2>&1
$ command-name > file.txt 2>&1
## bash only ##
$ command2 &> filename
$ sudo find / -type f -iname ".env" &> /tmp/search.txt
Redirection is processed from left to right; hence, order matters. For example: command-name 2>&1 > file.txt ## wrong ##
command-name > file.txt 2>&1 ## correct ##
How to redirect stderr to
stdout in Bash script
A sample shell script used to update VM when created in the AWS/Linode server:
#!/usr/bin/env bash
# Author - nixCraft under GPL v2.x+
# Debian/Ubuntu Linux script for EC2 automation on first boot
# ------------------------------------------------------------
# My log file - Save stdout to $LOGFILE
LOGFILE="/root/logs.txt"
# My error file - Save stderr to $ERRFILE
ERRFILE="/root/errors.txt"
# Start it
printf "Starting update process ... \n" 1>"${LOGFILE}"
# All errors should go to error file
apt-get -y update 2>"${ERRFILE}"
apt-get -y upgrade 2>>"${ERRFILE}"
printf "Rebooting cloudserver ... \n" 1>>"${LOGFILE}"
shutdown -r now 2>>"${ERRFILE}"
Our last example uses the exec command and FDs along with trap and custom bash
functions:
#!/bin/bash
# Send both stdout/stderr to a /root/aws-ec2-debian.log file
# Works with Ubuntu Linux too.
# Use exec for FD and trap it using the trap
# See bash man page for more info
# Author: nixCraft under GPL v2.x+
# ---------------------------------------------
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>/root/aws-ec2-debian.log 2>&1
# log message
log(){
local m="$@"
echo ""
echo "*** ${m} ***"
echo ""
}
log "$(date) @ $(hostname)"
## Install stuff ##
log "Updating up all packages"
export DEBIAN_FRONTEND=noninteractive
apt-get -y clean
apt-get -y update
apt-get -y upgrade
apt-get -y --purge autoremove
## Update sshd config ##
log "Configuring sshd_config"
sed -i'.BAK' -e 's/PermitRootLogin yes/PermitRootLogin no/g' -e 's/#PasswordAuthentication yes/PasswordAuthentication no/g' /etc/ssh/sshd_config
## Hide process from other users ##
log "Update /proc/fstab to hide process from each other"
echo 'proc /proc proc defaults,nosuid,nodev,noexec,relatime,hidepid=2 0 0' >> /etc/fstab
## Install LXD and stuff ##
log "Installing LXD/wireguard/vnstat and other packages on this box"
apt-get -y install lxd wireguard vnstat expect mariadb-server
log "Configuring mysql with mysql_secure_installation"
SECURE_MYSQL_EXEC=$(expect -c "
set timeout 10
spawn mysql_secure_installation
expect \"Enter current password for root (enter for none):\"
send \"$MYSQL\r\"
expect \"Change the root password?\"
send \"n\r\"
expect \"Remove anonymous users?\"
send \"y\r\"
expect \"Disallow root login remotely?\"
send \"y\r\"
expect \"Remove test database and access to it?\"
send \"y\r\"
expect \"Reload privilege tables now?\"
send \"y\r\"
expect eof
")
# log to file #
echo " $SECURE_MYSQL_EXEC "
# We no longer need expect
apt-get -y remove expect
# Reboot the EC2 VM
log "END: Rebooting requested @ $(date) by $(hostname)"
reboot
WANT BOTH STDERR AND STDOUT TO THE TERMINAL AND A LOG FILE TOO?
Try the tee command as follows: command1 2>&1 | tee filename
Here is how to use it inside a shell script too:
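The script example itself is missing from this excerpt; a common pattern (the log file name is illustrative) is to route the script's own stdout and stderr through tee using exec and process substitution:
#!/bin/bash
# Send everything this script prints to both the terminal and a log file
LOGFILE=/var/log/myscript.log
exec > >(tee -a "$LOGFILE") 2>&1
echo "this line goes to the terminal and to $LOGFILE"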
In this quick tutorial, you learned about three file descriptors, stdin, stdout, and stderr.
We can use these Bash descriptors to redirect stdout/stderr to a file or vice versa. See the bash man page for more details:
Operator                           Description                                                     Example
command > filename                 Redirect stdout to file "filename."                             date > output.txt
command >> filename                Redirect and append stdout to file "filename."                  ls -l >> dirs.txt
command 2> filename                Redirect stderr to file "filename."                             du -ch /snaps/ 2> space.txt
command 2>> filename               Redirect and append stderr to file "filename."                  awk '{ print $4}' input.txt 2>> data.txt
command &> filename
command > filename 2>&1            Redirect both stdout and stderr to file "filename."             grep -R foo /etc/ &> out.txt
command &>> filename
command >> filename 2>&1           Redirect and append both stdout and stderr to "filename."       whois domain &>> log.txt
Vivek Gite is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting.
because tee logs everything and prints to stdout, so you still get to see everything!
You can even combine it with sudo to drop to a logging user account, add a date to the subject, and
store it in a default log directory :)
Whether we want it or not, bash is the
shell you face in Linux, and unfortunately, it is often misunderstood and misused. Issues
related to creating your bash environment are not well addressed in existing books. This book
fills the gap.
Few authors understand that bash is a complex, non-orthogonal language operating in a
complex Linux environment. To make things worse, bash is an evolution of Unix shell and is a
rather old language with warts and all. Using it properly as a programming language requires a
serious study, not just an introduction to the basic concepts. Even issues related to
customization of dotfiles are far from trivial, and you need to know quite a bit to do it
properly.
At the same time, proper customization of bash environment does increase your productivity
(or at least lessens the frustration of using Linux on the command line ;-)
The author covered the most important concepts related to this task, such as bash history,
functions, variables, environment inheritance, etc. It is really sad to watch how the majority
of Linux users do not use these opportunities and forever remain at "level zero", using
default dotfiles with bare minimum customization.
This book contains some valuable tips even for a seasoned sysadmin (for example, the use of
|& in pipes), and as such, is worth at least double the suggested price. It allows you to
intelligently customize your bash environment after reading just 160 pages and doing the
suggested exercises.
We saw set options before, but shopts look very similar. Just inputting shopt shows a bunch of options:
$ shopt
cdable_vars off
cdspell on
checkhash off
checkwinsize on
cmdhist on
compat31 off
dotglob off
I found a set of answers here. Essentially, it looks like it's a consequence of bash (and other shells) being built on sh, with shopt added as
another way to set extra shell options. But I'm still unsure – if you know the answer, let me know.
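For completeness, here is how the options are toggled and inspected (dotglob and cdspell are standard bash shopts):
$ shopt -s dotglob    # set (enable) an option
$ shopt -u dotglob    # unset (disable) it again
$ shopt -p cdspell    # print the current setting in a re-usable form
shopt -s cdspell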
4) Here Docs and Here Strings
'Here docs' are files created inline in the shell.
The 'trick' is simple. Define a closing word, and the lines between that word and when it appears alone on a line become a
file.
Type this:
$ cat > afile << SOMEENDSTRING
> here is a doc
> it has three lines
> SOMEENDSTRING alone on a line will save the doc
> SOMEENDSTRING
$ cat afile
here is a doc
it has three lines
SOMEENDSTRING alone on a line will save the doc
Notice that:
the string could be included in the file if it was not 'alone' on the line
the string SOMEENDSTRING is more normally END , but that is just convention
Lesser known is the 'here string':
$ cat > asd <<< 'This file has one line'
5) String Variable Manipulation
You may have written code like this before, where you use tools like sed to manipulate strings:
$ VAR='HEADERMy voice is my passwordFOOTER'
$ PASS="$(echo $VAR | sed 's/^HEADER(.*)FOOTER/1/')"
$ echo $PASS
But you may not be aware that this is possible natively in bash .
This means that you can dispense with lots of sed and awk shenanigans.
One way to rewrite the above is:
$ VAR='HEADERMy voice is my passwordFOOTER'
$ PASS="${VAR#HEADER}"
$ PASS="${PASS%FOOTER}"
$ echo $PASS
The # means 'match and remove the following pattern from the start of the string'.
The % means 'match and remove the following pattern from the end of the string'.
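The default.sh script referred to below is not included in this excerpt; a minimal sketch of what it presumably looks like, using the ${VAR:-default} form:
#!/bin/bash
# default.sh - print the three positional arguments, giving each a fallback value
FIRST=${1:-firstdefault}
SECOND=${2:-seconddefault}
THIRD=${3:-thirddefault}
echo "$FIRST $SECOND $THIRD"
Running ./default.sh first second would then print: first second thirddefault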
Now run chmod +x default.sh and run the script with ./default.sh first second.
Observe how the third argument's default has been assigned, but not the first two.
You can also assign a default directly with ${VAR:=defaultval} (equals sign, not dash), but note that this won't work with
positional variables in scripts or functions. Try changing the above script to see how it fails.
7) Traps
The trap built-in can be used to 'catch' when a
signal is sent to your script.
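The original example is not reproduced above; a minimal reconstruction (the handler text is illustrative):
$ trap 'echo "You hit Ctrl-C!" ; echo "Tidying up..."' INT
$ sleep 60
^C
You hit Ctrl-C!
Tidying up...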
Note that there are two 'lines' above, even though you used ; to separate the commands.
TMOUT
You can timeout reads, which can be really handy in some scripts
#!/bin/bash
TMOUT=5
echo You have 5 seconds to respond...
read
echo ${REPLY:-noreply}
... ... ...
10) Associative Arrays
Talking of moving to other languages, a rule of thumb I use is that if I need arrays then I drop bash to go to python (I even
created a Docker container for a tool to help with this here
).
What I didn't know until I read up on it was that you can have associative arrays in bash.
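A minimal illustration (the keys and values here are made up):
$ declare -A sizes              # declare an associative array
$ sizes[small]=10
$ sizes[large]=100
$ echo "${sizes[large]}"        # prints 100
$ echo "${!sizes[@]}"           # lists the keys (order may vary)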
"... I use "!*" for "all arguments". It doesn't have the flexibility of your approach but it's faster for my most common need. ..."
"... Provided that your shell is readline-enabled, I find it much easier to use the arrow keys and modifiers to navigate through history than type !:1 (or having to remeber what it means). ..."
7 Bash history shortcuts you will actually use
Save time on the command line with these essential Bash shortcuts.
02 Oct 2019, Ian
Most guides to Bash history shortcuts exhaustively list every single one available. The problem with that is I would use a shortcut
once, then glaze over as I tried out all the possibilities. Then I'd move onto my working day and completely forget them, retaining
only the well-known !! trick I learned when I first
started using Bash.
This article outlines the shortcuts I actually use every day. It is based on some of the contents of my book,
Learn Bash the hard way ; (you can read a
preview of it to learn more).
When people see me use these shortcuts, they often ask me, "What did you do there!?" There's minimal effort or intelligence required,
but to really learn them, I recommend using one each day for a week, then moving to the next one. It's worth taking your time to
get them under your fingers, as the time you save will be significant in the long run.
1. The "last argument" one: !$
If you only take one shortcut from this article, make it this one. It substitutes in the last argument of the last command
into your line.
Consider this scenario:
$ mv /path/to/wrongfile /some/other/place
mv: cannot stat '/path/to/wrongfile': No such file or directory
Ach, I put the wrongfile filename in my command. I should have put rightfile instead.
You might decide to retype the last command and replace wrongfile with rightfile completely. Instead, you can type:
$ mv /path/to/rightfile !$
mv /path/to/rightfile /some/other/place
and the command will work.
There are other ways to achieve the same thing in Bash with shortcuts, but this trick of reusing the last argument of the last
command is one I use the most.
2. The " n th argument" one: !:2
Ever done anything like this?
$ tar -cvf afolder afolder.tar
tar: failed to open
Like many others, I get the arguments to tar (and ln ) wrong more often than I would like to admit.
The last command's items are zero-indexed and can be substituted in with the number after the !: .
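The article's correction is not shown in this excerpt; using those zero-indexed word designators, one way to re-run the command with the last two arguments swapped would be:
$ !:0 !:1 !:3 !:2
tar -cvf afolder.tar afolder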
Obviously, you can also use this to reuse specific arguments from the last command rather than all of them.
3. The "all the arguments": !*
Imagine I run a command like:
$ grep '(ping|pong)' afile
The arguments are correct; however, I want to match ping or pong in a file, but I used grep rather than egrep .
I start typing egrep, but I don't want to retype the other arguments. So I can use the !:1-$ shortcut to ask for all the arguments
to the previous command from the second one (remember they're zero-indexed) to the last one (represented by the $ sign).
$ egrep !:1-$
egrep '(ping|pong)' afile
ping
You don't need to pick 1-$ ; you can pick a subset like 1-2 or 3-9 (if you had that many arguments in the previous command).
4. The "last but n " : !-2:$
The shortcuts above are great when I know immediately how to correct my last command, but often I run commands after the
original one, which means that the last command is no longer the one I want to reference.
For example, using the mv example from before, if I follow up my mistake with an ls check of the folder's contents:
$ mv /path/to/wrongfile /some/other/place
mv: cannot stat '/path/to/wrongfile': No such file or directory
$ ls /path/to/
rightfile
I can no longer use the !$ shortcut.
In these cases, I can insert a - n : (where n is the number of commands to go back in the history) after the ! to
grab the last argument from an older command:
$ mv /path/to/rightfile !-2:$
mv /path/to/rightfile /some/other/place
Again, once you learn it, you may be surprised at how often you need it.
5. The "get me the folder" one: !$:h
This one looks less promising on the face of it, but I use it dozens of times daily.
Imagine I run a command like this:
$ tar -cvf system.tar /etc/system
tar: /etc/system: Cannot stat: No such file or directory
tar: Error exit delayed from previous errors.
The first thing I might want to do is go to the /etc folder to see what's in there and work out what I've done wrong.
I can do this at a stroke with:
$ cd !$:h
cd /etc
This one says: "Get the last argument to the last command ( /etc/system ) and take off its last filename component, leaving only
the /etc ."
6. The "the current line" one: !#:1
For years, I occasionally wondered if I could reference an argument on the current line before finally looking it up and learning
it. I wish I'd done so a long time ago. I most commonly use it to make backup files:
$ cp /path/to/some/file !#:1.bak
cp /path/to/some/file /path/to/some/file.bak
but once under the fingers, it can be a very quick alternative to retyping the full path.
7. The "search and replace" one: !!:gs
This one searches across the referenced command and replaces what's in the first two / characters with what's in the second two.
Say I want to tell the world that my s key does not work and outputs f instead:
$ echo my f key doef not work
my f key doef not work
Then I realize that I was just hitting the f key by accident. To replace all the f s with s es, I can type:
$ !!:gs/f/s/
echo my s key does not work
my s key does not work
It doesn't work only on single characters; I can replace words or sentences, too:
$ !!:gs/does/did/
echo my s key did not work
my s key did not work
Test them out
Just to show you how these shortcuts can be combined, can you work out what these toenail clippings will output?
Bash can be an elegant source of shortcuts for the day-to-day command-line user. While there are thousands of tips and tricks
to learn, these are my favorites that I frequently put to use.
This article was originally posted on Ian's blog,
Zwischenzugs.com
, and is reused with permission.
Orr, August 25, 2019 at 10:39 pm
BTW you inspired me to try and understand how to repeat the nth command entered on command line. For example I type 'ls'
and then accidentally type 'clear'. !! will retype clear again but I wanted to retype ls instead using a shortcut.
Bash doesn't accept ':' so !:2 didn't work. !-2 did however, thank you!
Dima August 26, 2019 at 7:40 am
Nice article! Just one more cool and often-used trick: !vi re-runs the last vi command with its arguments.
cbarrick on 03 Oct 2019
Your "current line" example is too contrived. Your example is copying to a backup like this:
$ cp /path/to/some/file !#:1.bak
But a better way to write that is with filename generation:
$ cp /path/to/some/file{,.bak}
That's not a history expansion though... I'm not sure I can come up with a good reason to use `!#:1`.
Darryl Martin August 26, 2019 at 4:41 pm
I seldom get anything out of these "bash commands you didn't know" articles, but you've got some great tips here. I'm writing
several down and sticking them on my terminal for reference.
A couple additions I'm sure you know.
I use "!*" for "all arguments". It doesn't have the flexibility of your approach but it's faster for my most common need.
I recently started using Alt-. as a substitute for "!$" to get the last argument. It expands the argument on the line, allowing
me to modify it if necessary.
The problem with bash's history shorcuts for me is... that I never had the need to learn them.
Provided that your shell is readline-enabled, I find it much easier to use the arrow keys and modifiers to navigate through
history than type !:1 (or having to remember what it means).
Examples:
Ctrl+R for a Reverse search
Ctrl+A to move to the beginning of the line (Home key also)
Ctrl+E to move to the End of the line (End key also)
Ctrl+K to Kill (delete) text from the cursor to the end of the line
Ctrl+U to kill text from the cursor to the beginning of the line
Alt+F to move Forward one word (Ctrl+Right arrow also)
Alt+B to move Backward one word (Ctrl+Left arrow also)
etc.
You may already be familiar with 2>&1 , which redirects standard error
to standard output, but until I stumbled on it in the manual, I had no idea that you can pipe
both standard output and standard error into the next stage of the pipeline like this:
if doesnotexist |& grep 'command not found' >/dev/null
then
echo oops
fi
3) $''
This construct allows you to specify specific bytes in scripts without fear of triggering
some kind of encoding problem. Here's a command that will grep through files
looking for UK currency ('£') signs in hexadecimal recursively:
grep -r $'\xc2\xa3' *
You can also use octal:
grep -r $'\302\243' *
4) HISTIGNORE
If you are concerned about security, and ever type in commands that might have sensitive
data in them, then this one may be of use.
This environment variable keeps commands that match the specified patterns out of your history file
when you type them in. The patterns are separated by colons:
HISTIGNORE="ls *:man *:history:clear:AWS_KEY*"
You have to specify the whole line, so a glob character may be needed if you want
to exclude commands and their arguments or flags.
5) fc
If readline key bindings
aren't under your fingers, then this one may come in handy.
It calls up the last command you ran, and places it into your preferred editor (specified by
the EDITOR variable). Once edited, it re-runs the command.
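A quick sketch of typical usage (assuming vim as your editor of choice):
$ export EDITOR=vim
$ fc        # opens the last command in vim; save and quit to re-run it
$ fc -l     # just lists recent history entries instead of editing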
6) ((i++))
If you can't be bothered with faffing around with variables in bash with the
$[] construct, you can use the C-style compound command.
So, instead of:
A=1
A=$[$A+1]
echo $A
you can do:
A=1
((A++))
echo $A
which, especially with more complex calculations, might be easier on the eye.
7) caller
Another builtin bash command, caller gives context about the current call in your script: it reports
the line number, subroutine name, and source file of its caller.
SHLVL is a related shell variable which gives the level of depth of the calling
stack.
This can be used to create stack traces for more complex bash scripts.
Here's a die function, adapted from the bash hackers' wiki that gives a stack
trace up through the calling frames:
#!/bin/bash
die() {
local frame=0
((FRAMELEVEL=SHLVL - frame))
echo -n "${FRAMELEVEL}: "
while caller $frame; do
((frame++));
((FRAMELEVEL=SHLVL - frame))
if [[ ${FRAMELEVEL} -gt -1 ]]
then
echo -n "${FRAMELEVEL}: "
fi
done
echo "$*"
exit 1
}
which outputs:
3: 17 f1 ./caller.sh
2: 18 f2 ./caller.sh
1: 19 f3 ./caller.sh
0: 20 main ./caller.sh
*** an error occurred ***
8) /dev/tcp/host/port
This one can be particularly handy if you find yourself on a container running within a
Kubernetes cluster service
mesh without any network tools (a frustratingly common experience).
Bash provides you with some virtual files which, when referenced, can create socket
connections to other servers.
This snippet, for example, makes a web request to a site and returns the output.
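Reconstructed from the description that follows, the snippet looks roughly like this:
exec 9<>/dev/tcp/brvtsdflnxhkzcmw.neverssl.com/80
printf 'GET / HTTP/1.1\r\nHost: brvtsdflnxhkzcmw.neverssl.com\r\nConnection: close\r\n\r\n' >&9
cat <&9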
The first line opens up file descriptor 9 to the host brvtsdflnxhkzcmw.neverssl.com on port
80 for reading and writing. Line two sends the raw HTTP request to that socket
connection's file descriptor. The final line retrieves the response.
Obviously, this doesn't handle SSL for you, so its use is limited now that pretty much
everyone is running on https, but when running from application containers within a service
mesh it can still prove invaluable, as requests there are initiated using HTTP.
9)
Co-processes
Since version 4, bash has offered the capability to run named
coprocesses.
It seems to be particularly well-suited to managing the inputs and outputs to other
processes in a fine-grained way. Here's an annotated and trivial example:
coproc testproc (
i=1
while true
do
echo "iteration:${i}"
((i++))
read -r aline
echo "${aline}"
done
)
This sets up the coprocess as a subshell with the name testproc .
Within the subshell, there's a never-ending while loop that counts its own iterations with
the i variable. It outputs two lines: the iteration number, and a line read in
from standard input.
After creating the coprocess, bash sets up an array with that name with the file descriptor
numbers for the standard input and standard output. So this:
echo "${testproc[@]}"
in my terminal outputs:
63 60
Bash also sets up a variable with the process identifier for the coprocess, which you can
see by echoing it:
echo "${testproc_PID}"
You can now input data to the standard input of this coprocess at will like this:
echo input1 >&"${testproc[1]}"
In this case, the command resolves to: echo input1 >&60 , and the
>&[INTEGER] construct ensures the redirection goes to the coprocess's
standard input.
Now you can read the output of the coprocess's two lines in a similar way, like this:
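A minimal sketch, reading twice from the coprocess's output descriptor:
read -r line1 <&"${testproc[0]}"
read -r line2 <&"${testproc[0]}"
echo "${line1}"   # iteration:1
echo "${line2}"   # input1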
You might use this to create an expect -like script if you were so inclined, but it
could be generally useful if you want to manage inputs and outputs. Named pipes are another
way to achieve a similar result.
Here's a complete listing for those who want to cut and paste:
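The original listing isn't reproduced here; assembled from the pieces above, a roughly equivalent version would be:
coproc testproc (
  i=1
  while true
  do
    echo "iteration:${i}"
    ((i++))
    read -r aline
    echo "${aline}"
  done
)

echo "${testproc[@]}"
echo "${testproc_PID}"

echo input1 >&"${testproc[1]}"
read -r line1 <&"${testproc[0]}"
read -r line2 <&"${testproc[0]}"
echo "${line1}"
echo "${line2}"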
Most shells offer the ability to create, manipulate, and query indexed arrays. In plain
English, an indexed array is a list of things prefixed with a number. This list of things,
along with their assigned number, is conveniently wrapped up in a single variable, which makes
it easy to "carry" it around in your code.
Bash, however, includes the ability to create associative arrays and treats these arrays the
same as any other array. An associative array lets you create lists of key and value pairs,
instead of just numbered values.
The nice thing about associative arrays is that keys can be arbitrary:
$ declare -A userdata
$ userdata[name]=seth
$ userdata[pass]=8eab07eb620533b083f241ec4e6b9724
$ userdata[login]=`date --utc +%s`
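Because the keys are arbitrary names rather than numbers, you can query the array by key or list the keys themselves; a quick sketch using the userdata array above (key order is not guaranteed):
$ echo "${userdata[name]}"
seth
$ echo "${!userdata[@]}"
login name pass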
Practitioner's Reflections on The DevOps Handbook: The Holy Wars of DevOps
Yet another argument explodes online around the 'true nature of DevOps', around 'what DevOps
really means' or around 'what DevOps is not'. At each conference I attend we talk about DevOps
culture , DevOps mindset and DevOps ways . All confirming one single truth – DevOps is a
myth .
Now don't get me wrong – in no way is this a negation of its validity or importance.
As Y.N. Harari shows so eloquently in his book 'Sapiens' –
myths were the forming power in the development of humankind. It is in fact our ability to
collectively believe in these non-objective, imagined realities that allows us to collaborate
at large scale, to coordinate our actions, to build pyramids, temples, cities and
roads.
There's a Handbook!
I am writing this while finishing the exceptionally well written "DevOps Handbook" . If you really
want to know what stands behind the all-too-often misinterpreted buzzword – you better
read this cover-to-cover. It presents an almost-no-bullshit deep dive into why, how and what in
DevOps. And it comes from the folks who invented the term and have been busy developing its
main concepts over the last 7 years.
Now notice – I'm only saying you should read the "DevOps Handbook" if you want to
understand what DevOps is about. After finishing it I'm pretty sure you won't have any interest
in participating in petty arguments along the lines of 'is DevOps about automation or not?'.
But I'm not saying you should read the handbook if you want to know how to improve and speed up
your software manufacturing and delivery processes. And neither if you want to optimize your IT
organization for innovation and continuous improvement.
Because the main realization that you, as a smart reader, will arrive at – is just
that there is no such thing as DevOps. DevOps is a myth .
So What's The Story?
It all basically comes down to this: some IT companies achieve better results than others .
Better revenues, higher customer and employee satisfaction, faster value delivery, higher
quality. There's no one-size-fits-all formula, there is no magic bullet – but we can
learn from these high performers and try to apply certain tools and practices in order to
improve the way we work and achieve similar or better results. These tools and processes come
from a myriad of management theories and practices. Moreover – they are constantly
evolving, so we need to always be learning. But at least we have the promise of better life.
That is if we get it all right: the people, the architecture, the processes, the mindset, the
org structure, etc.
So it's not about certain tools, because the tools will change. And it's not about certain
practices – because we're creative and frameworks come and go. I don't see too many folks
using Kanban boards 10 years from now. (In the same way, only the laggards use Gantt charts
today.) And then the speakers at the next fancy conference will tell you it's mainly about
culture. And you know what culture is? It's just a story, or rather a collection of stories
that a group of people share. Stories that tell us something about the world and about
ourselves. Stories that have only a very relative connection to the material world. Stories
that can easily be proven as myths by another group of folks who believe them to be
wrong.
But Isn't It True?
Anybody who's studied management
theories knows how the approaches have changed since the beginning of the last century.
From Taylor's scientific management and down to McGregor's X&Y theory they've all had their
followers. Managers who've applied them and swore they got great results thanks to them. And yet
most of these theories have been proven wrong by their successors.
In the same way we see this happening with DevOps and Agile. Agile was all the buzz since
its inception in 2001. Teams were moving to Scrum, then Kanban, now SAFe and LeSS. But Agile
didn't deliver on its promise of better life. Or rather – it became so commonplace that
it lost its edge. Without the hype, we now realize it has its downsides. And we now hope that
maybe this new DevOps thing will make us happy.
You may say that the world is changing fast – that's why we now need new approaches!
And I agree – the technology, the globalization, the flow of information – they all
change the stories we live in. But this also means that whatever is working for someone else
today won't probably work for you tomorrow – because the world will change yet again.
Which means that the DevOps Handbook – while a great overview and historical document
and a source of inspiration – should not be taken as a guide to action. It's just another
step towards establishing the DevOps myth.
And that takes us back to where we started – myths and stories aren't bad in
themselves. They help us collaborate by providing a common semantic system and shared goals.
But they only work while we believe in them and until a new myth comes around – one
powerful enough to grab our attention.
Your Own DevOps Story
So if we agree that DevOps is just another myth, what are we left with? What do we at
Otomato and other DevOps consultants and
vendors have to sell? Well, it's the same thing we've been building even before the DevOps
buzz: effective software delivery and IT management. Based on tools and processes, automation
and effective communication. Relying on common sense and on being experts in whatever myth is
currently believed to be true.
As I keep saying – culture is a story you tell. And we make sure to be experts in both
the storytelling and the actual tooling and architecture. If you're currently looking at
creating a DevOps transformation or simply want to optimize your software delivery – give
us a call. We'll help to build your authentic DevOps story, to train your staff and to
architect your pipeline based on practice, skills and your organization's actual needs. Not
based on myths that other people tell.
Source is like a Python import or a Java include. Learn it to expand your Bash prowess. (Seth Kenlon, Red Hat, Opensource.com)
When you log into a Linux shell, you inherit a specific working environment. An
environment , in the context of a shell, means that there are certain variables already
set for you, which ensures your commands work as intended. For instance, the PATH environment variable defines
where your shell looks for commands. Without it, nearly everything you try to do in Bash would
fail with a command not found error. Your environment, while mostly invisible to you as you go
about your everyday tasks, is vitally important.
There are many ways to affect your shell environment. You can make modifications in
configuration files, such as ~/.bashrc and ~/.profile , you can run
services at startup, and you can create your own custom commands or script your own Bash functions
.
Add to your environment with source
Bash (along with some other shells) has a built-in command called source . And
here's where it can get confusing: source performs the same function as the
command . (yes, that's but a single dot), and it's not the same
source as the Tcl command (which may come up on your screen if you
type man source ). The built-in source command isn't in your
PATH at all, in fact. It's a command that comes included as a part of Bash, and to
get further information about it, you can type help source .
The . command is POSIX
-compliant. The source command is not defined by POSIX but is interchangeable with
the . command.
According to Bash help , the source command executes a file in
your current shell. The clause "in your current shell" is significant, because it means it
doesn't launch a sub-shell; therefore, whatever you execute with source happens
within and affects your current environment.
Before exploring how source can affect your environment, try
source on a test file to ensure that it executes code as expected. First, create a
simple Bash script and save it as a file called hello.sh :
#!/usr/bin/env bash
echo "hello world"
Using source , you can run this script even without setting the executable
bit:
$ source hello.sh
hello world
You can also use the built-in . command for the same results:
$ . hello.sh
hello world
The source and . commands successfully execute the contents of the
test file.
Set variables and import functions
You can use source to "import" a file into your shell environment, just as you
might use the include keyword in C or C++ to reference a library or the
import keyword in Python to bring in a module. This is one of the most common uses
for source , and it's a common default inclusion in .bashrc files to
source a file called .bash_aliases so that any custom aliases you
define get imported into your environment when you log in.
Here's an example of importing a Bash function. First, create a function in a file called
myfunctions . This prints your public IP address and your local IP
address:
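The function file itself isn't reproduced here; a minimal sketch of such a myfunctions file, assuming curl, the icanhazip.com service, and a hostname command that supports -I, might look like this:
function myip() {
  # public IP, as reported by an external service (assumption: icanhazip.com)
  echo "Public IP: $(curl --silent http://icanhazip.com)"
  # first local IP reported by the host
  echo "Local IP: $(hostname -I | awk '{print $1}')"
}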
When you use source in Bash, it searches your current directory for the file
you reference. This doesn't happen in all shells, so check your documentation if you're not
using Bash.
If Bash can't find the file to execute, it searches your PATH instead. Again,
this isn't the default for all shells, so check your documentation if you're not using
Bash.
These are both nice convenience features in Bash. This behavior is surprisingly powerful
because it allows you to store common functions in a centralized location on your drive and
then treat your environment like an integrated development environment (IDE). You don't have to
worry about where your functions are stored, because you know they're in your local equivalent
of /usr/include , so no matter where you are when you source them, Bash finds
them.
For instance, you could create a directory called ~/.local/include as a storage
area for common functions and then put this block of code into your .bashrc
file:
for i in $HOME/.local/include/*; do
    source $i
done
This "imports" any file containing custom functions in ~/.local/include into
your shell environment.
Bash is the only shell that searches both the current directory and your PATH
when you use either the source or the . command.
Using source
for open source
Using source or . to execute files can be a convenient way to
affect your environment while keeping your alterations modular. The next time you're thinking
of copying and pasting big blocks of code into your .bashrc file, consider placing
related functions or groups of aliases into dedicated files, and then use source
to ingest them.
"... The -I option shows the header information and the -s option silences the response body. Checking the endpoint of your database from your local desktop: ..."
curl transfers a URL. Use this command to test an application's endpoint or
connectivity to an upstream service endpoint. curl can be useful for determining if
your application can reach another service, such as a database, or checking if your service is
healthy.
As an example, imagine your application throws an HTTP 500 error indicating it can't reach a
MongoDB database:
The -I option shows the header information and the -s option silences the
response body. Checking the endpoint of your database from your local desktop:
$ curl -I -s database:27017
HTTP/1.0 200 OK
So what could be the problem? Check if your application can get to other places besides the
database from the application host:
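The command isn't shown in this excerpt, but a check against any reachable external site would do; for example:
$ curl -I -s https://opensource.com
A normal response here, combined with the failure against the database hostname, points at name resolution rather than general connectivity.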
This indicates that your application cannot resolve the database because the URL of the
database is unavailable or the host (container or VM) does not have a nameserver it can use to
resolve the hostname.
Bash tells me the sshd service is not running, so the next thing I want to do is start the service. I had checked its status
with my previous command. That command was saved in history , so I can reference it. I simply run:
$> !!:s/status/start/
sudo systemctl start sshd
The above expression has the following content:
!! - repeat the last command from history
:s/status/start/ - substitute status with start
The result is that the sshd service is started.
Next, I increase the default HISTSIZE value from 500 to 5000 by using the following command:
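The command isn't shown here; setting the variable for the current session (and typically in ~/.bashrc to make it stick) looks like:
$> export HISTSIZE=5000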
What if I want to display the last three commands in my history? I enter:
$> history 3
1002 ls
1003 tail audit.log
1004 history 3
I run tail on audit.log by referring to the history line number. In this case, I use line 1003:
$> !1003
tail audit.log
Reference the last argument of the previous command
When I want to list directory contents for different directories, I may change between directories quite often. There is a
nice trick you can use to refer to the last argument of the previous command. For example:
$> pwd
/home/username/
$> ls some/very/long/path/to/some/directory
foo-file bar-file baz-file
In the above example, /some/very/long/path/to/some/directory is the last argument of the previous command.
If I want to cd (change directory) to that location, I enter something like this:
$> cd $_
$> pwd
/home/username/some/very/long/path/to/some/directory
Now simply use a dash character to go back to where I was:
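That is the cd - shortcut, which flips back to the previous working directory:
$> cd -
/home/username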
If you're looking for an interactive web portal to learn shell scripting and also try it online, Learn Shell is a great place to
start.
It covers the basics and offers some advanced exercises as well. The content is usually brief and to the point; hence, I'd
recommend you check it out.
Shell Scripting Tutorial is a web resource that's completely dedicated to shell scripting. You can choose to read the resource for
free or can opt to purchase the PDF, book, or the e-book to support it.
Of course, paying for the paperback edition or the e-book is optional. But, the resource should come in handy for free.
Udemy
is unquestionably one of the most popular platforms for online courses. And, in addition to the paid certified courses, it
also offers some free stuff that does not include certifications.
Shell Scripting is one of the most recommended free courses available on Udemy. You can enroll in it without spending
anything.
Yet another interesting free course focused on bash shell scripting on Udemy. Compared to the previous one, this resource seems
to be more popular. So, you can enroll in it and see what it has to offer.
Not to forget that the free Udemy course does not offer any certifications. But, it's indeed an impressive free shell scripting
learning resource.
As the name suggests, the bash academy is completely focused on educating the users about bash shell.
It's suitable for both beginners and experienced users, even though it does not offer a lot of content. It's not just limited to the
guide -- it also used to offer an interactive practice game, which no longer works.
Hence, if this is interesting enough, you can also check out its
GitHub page
and fork it to improve the existing resources if you want.
LinkedIn offers a number of free courses to help you improve your skills and get ready for more job opportunities. You will also
find a couple of courses focused on shell scripting to brush up some basic skills or gain some advanced knowledge in the process.
Here, I've linked a course for bash scripting, you can find some other similar courses for free as well.
An impressive advanced bash scripting guide available in the form of PDF for free. This PDF resource does not enforce any
copyrights and is completely free in the public domain.
Even though the resource is focused on providing advanced insights, it's also suitable for beginners to refer to and
use as a starting point for learning shell scripting.
Tutorialspoint is a quite popular web portal to learn a variety of
programming languages
. I would say this is quite good for starters to learn the fundamentals and the basics.
This may not be suitable as a detailed resource -- but it should be a useful one for free.
Tutorialspoint
10. City College of San Francisco Online Notes [Web portal]
<img data-attachment-id="80382" data-permalink="https://itsfoss.com/shell-scripting-resources/scripting-notes-ccsf/" data-orig-file="https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/scripting-notes-ccsf.png?fit=800%2C291&ssl=1" data-orig-size="800,291" data-comments-opened="1" data-image-meta="{"aperture":"0","credit":"","camera":"","caption":"","created_timestamp":"0","copyright":"","focal_length":"0","iso":"0","shutter_speed":"0","title":"","orientation":"0"}" data-image-title="scripting-notes-ccsf" data-image-description="" data-medium-file="https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/scripting-notes-ccsf.png?fit=300%2C109&ssl=1" data-large-file="https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/scripting-notes-ccsf.png?fit=800%2C291&ssl=1" src="https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/scripting-notes-ccsf.png?ssl=1" alt="Scripting Notes Ccsf" srcset="https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/scripting-notes-ccsf.png?w=800&ssl=1 800w, https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/scripting-notes-ccsf.png?resize=300%2C109&ssl=1 300w, https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/scripting-notes-ccsf.png?resize=768%2C279&ssl=1 768w" sizes="(max-width: 800px) 100vw, 800px" data-recalc-dims="1" />
This may not be the best free resource there is -- but if you're ready to explore every type of resource to learn shell scripting,
why not refer to the online notes of City College of San Francisco?
I came across this with a random search on the Internet about shell scripting resources.
Again, it's important to note that the online notes could be a bit dated. But, it should be an interesting resource to explore.
City College of San Francisco Notes
Honorable mention: Linux Man Page
<img data-attachment-id="80383" data-permalink="https://itsfoss.com/shell-scripting-resources/bash-linux-man-page/" data-orig-file="https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/bash-linux-man-page.png?fit=800%2C437&ssl=1" data-orig-size="800,437" data-comments-opened="1" data-image-meta="{"aperture":"0","credit":"","camera":"","caption":"","created_timestamp":"0","copyright":"","focal_length":"0","iso":"0","shutter_speed":"0","title":"","orientation":"0"}" data-image-title="bash-linux-man-page" data-image-description="" data-medium-file="https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/bash-linux-man-page.png?fit=300%2C164&ssl=1" data-large-file="https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/bash-linux-man-page.png?fit=800%2C437&ssl=1" src="https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/bash-linux-man-page.png?ssl=1" alt="Bash Linux Man Page" srcset="https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/bash-linux-man-page.png?w=800&ssl=1 800w, https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/bash-linux-man-page.png?resize=300%2C164&ssl=1 300w, https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/06/bash-linux-man-page.png?resize=768%2C420&ssl=1 768w" sizes="(max-width: 800px) 100vw, 800px" data-recalc-dims="1" />
Not to forget, the man page for bash is also a fantastic free resource for exploring the shell's commands and how they
work.
Even if it's not tailored as something that lets you master shell scripting, it is still an important web resource that you can
use for free. You can either choose to visit the man page online or just head to the terminal and type the following command to get
help:
man bash
Wrapping Up
There are also a lot of popular paid resources just like some of the
best Linux books
available out there. It's easy to start learning about shell scripting using some free resources available
across the web.
In addition to the ones I've mentioned, I'm sure there must be numerous other resources available online to help you learn shell
scripting.
Do you like the resources mentioned above? Also, if you're aware of a fantastic free resource that I possibly missed, feel free
to tell me about it in the comments below.
About
Ankush Das
A passionate technophile who also happens to be a Computer Science graduate. You will usually see cats dancing to the beautiful
tunes sung by him.
"... Apart from regular absolute line numbers, Vim supports relative and hybrid line numbers too to help navigate around text files. The 'relativenumber' vim option displays the line number relative to the line with the cursor in front of each line. Relative line numbers help you use the count you can precede some vertical motion commands with, without having to calculate it yourself. ..."
"... We can enable both absolute and relative line numbers at the same time to get "Hybrid" line numbers. ..."
How do I show line numbers in Vim by default on Linux? Vim (Vi IMproved) is not just a free text editor; it is the number one
editor for Linux sysadmin and software development work. By default, Vim doesn't show line numbers on Linux and Unix-like
systems, but we can turn them on using the following instructions. My experience shows that line numbers are useful for
debugging shell scripts, program code, and configuration files. Let us see how to display line numbers in vim permanently.
Vim show line numbers by default
Turn on absolute line numbering by default in vim:
Open the vim configuration file ~/.vimrc by typing the following command: vim ~/.vimrc
Append set number
Press the Esc key
To save the config file, type :w and hit the Enter key
You can temporarily disable the absolute line numbers within a vim session by typing :set nonumber
Want to re-enable the absolute line numbers within a vim session? Try :set number
We can see vim line numbers on the left side.
Relative line numbers
Apart from regular absolute line numbers, Vim supports relative and hybrid line numbers too
to help navigate around text files. The 'relativenumber' vim option displays the line number
relative to the line with the cursor in front of each line. Relative line numbers help you use
the count you can precede some vertical motion commands with, without having to calculate it
yourself. Once again edit the ~/.vimrc file, run: vim ~/.vimrc
Finally, turn relative line numbers on: set relativenumber
Save and close the file in the vim text editor.
How to show "Hybrid" line numbers in Vim by default
What happens when you put the following two config directives in ~/.vimrc ? set number
set relativenumber
That is right. We can enable both absolute and relative line numbers at the same time to get
"Hybrid" line numbers.
Conclusion
Today we learned about permanent line number settings for the vim text editor. By adding the
"set number" config directive in the Vim configuration file named ~/.vimrc, we forced vim to show
line numbers each time vim starts. See the vim docs for more info and the following tutorials too:
In part one, How to setup Linux chroot jails,
I covered the chroot command and you learned to use the chroot wrapper in sshd to isolate the sftpusers
group. When you edit sshd_config to invoke the chroot wrapper and give it matching characteristics, sshd
executes certain commands within the chroot jail or wrapper. You saw how this technique could potentially be useful to implement
contained, rather than secure, access for remote users.
Expanded example
I'll start by expanding on what I did before, partly as a review. Start by setting up a custom directory for remote users. I'll
use the sftpusers group again.
Start by creating the custom directory that you want to use, and setting the ownership:
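The commands aren't reproduced in this excerpt; one plausible version, matching the /sftpusers/chroot path used below, is:
# mkdir -p /sftpusers/chroot
# chown root:root /sftpusers/chroot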
This time, make root the owner, rather than the sftpusers group. This way, when you add users, they don't start out
with permission to see the whole directory.
Next, create the user you want to restrict (you need to do this for each user in this case), add the new user to the sftpusers
group, and deny a login shell because these are sftp users:
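A sketch of those steps for the sanjay account used in the test below (the exact flags may differ from the original):
# useradd -g sftpusers -s /sbin/nologin sanjay
# passwd sanjay
Then point sshd at the chroot for that group in sshd_config: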
Match Group sftpusers
ChrootDirectory /sftpusers/chroot/
ForceCommand internal-sftp
X11Forwarding no
AllowTCPForwarding no
Note that you're back to specifying a directory, but this time, you have already set the ownership to prevent sanjay
from seeing anyone else's stuff. That trailing / is also important.
Then, restart sshd and test:
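On a systemd-based system that's simply:
# systemctl restart sshd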
[skipworthy@milo ~]$ sftp sanjay@showme
sanjay@showme's password:
Connected to sanjay@showme.
sftp> ls
sanjay
sftp> pwd
Remote working directory: /
sftp> cd ..
sftp> ls
sanjay
sftp> touch test
Invalid command.
So. Sanjay can only see his own folder and needs to cd into it to do anything useful.
Isolating a service or specific user
Now, what if you want to provide a usable shell environment for a remote user, or create a chroot jail environment for a specific
service? To do this, create the jailed directory and the root filesystem, and then create links to the tools and libraries that you
need. Doing all of this is a bit involved, but Red Hat provides a script and basic instructions that make the process easier.
Note: I've tested the following in Red Hat Enterprise Linux 7 and 8, though my understanding is that this capability was available
in Red Hat Enterprise Linux 6. I have no reason to think that this script would not work in Fedora, CentOS or any other Red Hat distro,
but your mileage (as always) may vary.
First, make your chroot directory:
# mkdir /chroot
Then run the script from yum that installs the necessary bits:
# yum --releasever=/ --installroot=/chroot install iputils vim python
The --releasever=/ flag passes the current local release info to initialize a repo in the new --installroot,
which defines where the new install location is. In theory, you could make a chroot jail that was based on any version of the
yum or dnf repos (the script will, however, still start with the current system repos).
With this command, you install basic networking utilities (iputils) along with the VIM editor and Python. You could add other things initially if
you want to, including whatever service you want to run inside this jail. This is also one of the cool things about yum
and dependencies. As part of the dependency resolution, yum makes the necessary additions to the filesystem tree
along with the libraries. It does, however, leave out a couple of things that you need to add next. I'll get to that in a moment.
By now, the packages and the dependencies have been installed, and a new GPG key was created for this new repository in relation
to this new root filesystem. Next, mount your ephemeral filesystems:
# mount -t proc proc /chroot/proc/
# mount -t sysfs sys /chroot/sys/
And set up your dev bindings:
# mount -o bind /dev/pts /chroot/dev/pts
Note that these mounts will not survive a reboot this way, but this setup will let you test and play with a chroot jail
environment.
Now, test to check that everything is working as you expect:
# chroot /chroot
bash-4.2# ls
bin dev home lib64 mnt proc run srv tmp var boot etc lib media opt root sbin sys usr
You can see that the filesystem and libraries were successfully added:
bash-4.2# pwd
/
bash-4.2# cd ..
From here, you see the correct root and can't navigate up:
bash-4.2# exit
exit
#
Now you've exited the chroot wrapper, which is expected because you entered it from a local login shell as root. Normally, a remote
user should not be able to do this, as you saw in the sftp example:
Note that these directories were all created by root, so that's who owns them. Now, add this chroot to the sshd_config
, because this time you will match just this user:
Match User leo
ChrootDirectory /chroot
Then, restart sshd .
You also need to copy the /etc/passwd and /etc/group files from the host system to the /chroot
directory:
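For example (assuming /chroot/etc already exists from the install step above):
# cp /etc/passwd /etc/group /chroot/etc/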
Note: If you skip the step above, you can log in, but the result will be unreliable and you'll be prone to errors related to conflicting
logins.
Now for the test:
[skipworthy@milo ~]$ ssh leo@showme
leo@showme's password:
Last login: Thu Jan 30 19:35:36 2020 from 192.168.0.20
-bash-4.2$ ls
-bash-4.2$ pwd
/home/leo
It looks good. Now, can you find something useful to do? Let's have some fun:
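The command itself isn't shown in this excerpt; given the yum pattern used earlier and the httpd files that show up in the jail below, it was presumably something along the lines of:
# yum --releasever=/ --installroot=/chroot install httpd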
You could drop the releasever=/ , but I like to leave that in because it leaves fewer chances for unexpected
results.
[root@showme1 ~]# chroot /chroot
bash-4.2# ls /etc/httpd
conf conf.d conf.modules.d logs modules run
bash-4.2# python
Python 2.7.5 (default, Aug 7 2019, 00:51:29)
So, httpd is there if you want it, but just to demonstrate you can use a quick one-liner from Python, which you also
installed:
bash-4.2# python -m SimpleHTTPServer 8000
Serving HTTP on 0.0.0.0 port 8000 ...
And now you have a simple webserver running in a chroot jail. In theory, you can run any number of services from inside the chroot
jail and keep them 'contained' and away from other services, allowing you to expose only a part of a larger resource environment
without compromising your user's experience.
New to Linux containers? Download the
Containers Primer and
learn the basics.
In the past few years, the Linux community has been blessed with some remarkable advancements in the area of package management
on Linux systems, especially when it comes to universal or cross-distribution software packaging and distribution. One such
advancement is the Snap package format developed by Canonical, the makers of the popular Ubuntu Linux.
What are Snap Packages?
Snaps are cross-distribution, dependency-free, and easy to install applications packaged
with all their dependencies to run on all major Linux distributions. From a single build, a
snap (application) will run on all supported Linux distributions on desktop, in the cloud, and
IoT. Supported distributions include Ubuntu, Debian, Fedora, Arch Linux, Manjaro, and
CentOS/RHEL.
Snaps are secure – they are confined and sandboxed so that they do not compromise the
entire system. They run under different confinement levels (which is the degree of isolation
from the base system and each other). More notably, every snap has an interface carefully
selected by the snap's creator, based on the snap's requirements, to provide access to specific
system resources outside of their confinement such as network access, desktop access, and
more.
Another important concept in the snap ecosystem is Channels. A channel determines which
release of a snap is installed and tracked for updates, and it consists of, and is subdivided by,
tracks, risk levels, and branches.
The main components of the snap package management system are:
snapd – the background service that manages and maintains your snaps on a Linux
system.
snap – both the application package format and the command-line interface tool used
to install and remove snaps and do many other things in the snap ecosystem.
snapcraft – the framework and powerful command-line tool for building snaps.
snap store – a place where developers can share their snaps and Linux users search
and install them.
Besides, snaps also update automatically. You can configure when and how updates occur. By
default, the snapd daemon checks for updates up to four times a day: each update check is
called a refresh . You can also manually initiate a refresh.
How to Install Snapd in
Linux
As described above, the snapd daemon is the background service that manages and maintains
your snap environment on a Linux system, by implementing the confinement policies and
controlling the interfaces that allow snaps to access specific system resources. It also
provides the snap command and serves many other purposes.
To install the snapd package on your system, run the appropriate command for your Linux
distribution.
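The exact package command depends on your distribution; on the common ones it is roughly:
$ sudo apt install snapd       # Debian, Ubuntu and derivatives
$ sudo dnf install snapd       # Fedora
$ sudo yum install snapd       # CentOS/RHEL (the EPEL repository is required)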
After installing snapd on your system, enable the systemd unit that manages the main snap
communication socket, using the systemctl
commands as follows.
On Ubuntu and its derivatives, this should be triggered automatically by the package
installer.
$ sudo systemctl enable --now snapd.socket
Note that you can't run the snap command if the snapd.socket is not running. Run the
following commands to check if it is active and is enabled to automatically start at system
boot.
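For example:
$ systemctl is-active snapd.socket
active
$ systemctl is-enabled snapd.socket
enabled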
How to Install Snaps in Linux
The snap command allows you to install, configure, refresh and remove snaps, and interact
with the larger snap ecosystem.
Before installing a snap , you can check if it exists in the snap store. For example, if the
application belongs in the category of " chat servers " or " media players ", you can run these
commands to search for it, which will query the store for available packages in the stable
channel.
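For example:
$ snap find "chat servers"
$ snap find "media players"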
To show detailed information about a snap , for example, rocketchat-server ,
you can specify its name or path. Note that names are looked for both in the snap store and in
the installed snaps.
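For example:
$ snap info rocketchat-server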
To install a snap on your system, for example, rocketchat-server , run the following
command. If no options are provided, a snap is installed tracking the " stable " channel, with
strict security confinement.
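For example:
$ sudo snap install rocketchat-server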
You can opt to install from a different channel: edge , beta , or candidate , for one reason
or the other, using the --edge , --beta , or --candidate
options respectively. Or use the --channel option and specify the channel you wish
to install from.
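For example, either of these installs from the edge channel:
$ sudo snap install --edge rocketchat-server
$ sudo snap install --channel=latest/edge rocketchat-server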
Updating and Reverting Snaps
You can update a specified snap, or all snaps in the system if none are specified as
follows. The refresh command checks the channel being tracked by the snap and it downloads and
installs a newer version of the snap if it is available.
$ sudo snap refresh mailspring
OR
$ sudo snap refresh #update all snaps on the local system
After updating an app to a new version, you can revert to a previously used version using
the revert command. Note that the data associated with the software will also be reverted.
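For example:
$ sudo snap revert mailspring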
Disabling/Enabling and Removing Snaps
You can disable a snap if you do not want to use it. When disabled, a snap's binaries and
services will no longer be available, however, all the data will still be there.
$ sudo snap disable mailspring
If you need to use the snap again, you can enable it back.
$ sudo snap enable mailspring
To completely remove a snap from your system, use the remove command. By default, all of a
snap's revisions are removed.
$ sudo snap remove mailspring
To remove a specific revision, use the --revision option as follows.
$ sudo snap remove --revision=482 mailspring
It is key to note that when you remove a snap , its data (such as internal user, system, and
configuration data) is saved by snapd (version 2.39 and higher) as a snapshot, and stored on
the system for 31 days. In case you reinstall the snap within the 31 days, you can restore the
data.
Conclusion
Snaps are becoming more popular within the Linux community as they provide an easy way to
install software on any Linux distribution. In this guide, we have shown how to install and
work with snaps in Linux. We covered how to install snapd , install snaps , view installed
snaps, update and revert snaps, and disable/enable and remove snaps.
You can ask questions or reach us via the feedback form below. In the next part of this
guide, we will cover managing snaps (commands, aliases, services, and snapshots) in
Linux.
It recursively watches one or more directory trees.
Each watched directory is called a root.
It can be configured via the command-line or a configuration file written in JSON
format.
It records changes to log files.
Supports subscription to file changes that occur in a root.
Allows you to query a root for file changes since you last checked, or the current state
of the tree.
It can watch an entire project.
In this article, we will explain how to install and use watchman to watch (monitor) files
and record when they change in Linux. We will also briefly demonstrate how to watch a directory
and invoke a script when it changes.
Installing Watchman File Watching Service in
Linux
We will install the watchman service from source, so first install the required dependencies -- libssl-dev, autoconf,
automake, libtool, setuptools, python-devel and libfolly -- using the appropriate command for your Linux distribution.
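On a Debian/Ubuntu type system, for instance, the equivalents would be roughly (package names vary by distribution):
$ sudo apt install libssl-dev autoconf automake libtool python-setuptools python-dev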
Once the required dependencies are installed, you can start building watchman by downloading its
github repository, moving into the local repository, and then configuring, building and installing it using the
following commands.
$ git clone https://github.com/facebook/watchman.git
$ cd watchman
$ git checkout v4.9.0
$ ./autogen.sh
$ ./configure
$ make
$ sudo make install
Watching Files and Directories with Watchman in Linux
Watchman can be configured in two ways: (1) via the command line while the daemon is running
in the background, or (2) via a configuration file written in JSON format.
To watch a directory (e.g. ~/bin) for changes, run the following command.
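With watchman's standard CLI that is:
$ watchman watch ~/bin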
The following command writes a configuration file called state under
/usr/local/var/run/watchman/<username>-state/ , in JSON format as well as a log file
called log in the same location.
You can view the two files using the cat command, as shown below.
You can also define what action to trigger when a directory being watched changes. For
example, in the following command, 'test-trigger' is the name of the trigger and
~/bin/pav.sh is the script that will be invoked when changes are detected in the
directory being monitored.
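A sketch of that trigger definition (the match pattern here is an assumption; check the watchman trigger documentation for your version):
$ watchman -- trigger ~/bin test-trigger '*' -- ~/bin/pav.sh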
For test purposes, the pav.sh script simply creates a file with a timestamp
(i.e file.$time.txt ) within the same directory where the script is stored.
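A minimal pav.sh that matches the file names shown below might be:
#!/bin/bash
# create a time-stamped marker file in the watched directory
touch ~/bin/file.$(date +"%Y-%m-%d.%H:%M:%S").txt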
Based on the above configuration, each time the ~/bin directory changes, a file
such as file.2019-03-13.23:14:17.txt is created inside it, and you can list such files
using the ls command.
Watchman is an open source file watching service that watches files and records, or triggers
actions, when they change. Use the feedback form below to ask questions or share your thoughts
with us.
Mktemp is part of GNU coreutils package. So don't bother with installation. We will see some practical examples now.
To create a new temporary file, simply run:
$ mktemp
You will see an output like below:
/tmp/tmp.U0C3cgGFpk
As you see in the output, a new temporary file with random name "tmp.U0C3cgGFpk" is created in /tmp directory. This file is just
an empty file.
You can also create a temporary file with a specified suffix. The following command will create a temporary file with ".txt" extension:
$ mktemp --suffix ".txt"
/tmp/tmp.sux7uKNgIA.txt
How about a temporary directory? Yes, it is also possible! To create a temporary directory, use -d option.
$ mktemp -d
This will create a random empty directory in /tmp folder.
Sample output:
/tmp/tmp.PE7tDnm4uN
All files will be created with u+rw permission, and directories with u+rwx , minus umask restrictions. In other words, the resulting
file will have read and write permissions for the current user, but no permissions for the group or others. And the resulting directory
will have read, write and executable permissions for the current user, but no permissions for groups or others.
You can verify the file permissions using "ls" command:
$ ls -al /tmp/tmp.U0C3cgGFpk
-rw------- 1 sk sk 0 May 14 13:20 /tmp/tmp.U0C3cgGFpk
Verify the directory permissions using "ls" command:
$ ls -ld /tmp/tmp.PE7tDnm4uN
drwx------ 2 sk sk 4096 May 14 13:25 /tmp/tmp.PE7tDnm4uN
Create temporary files or directories with custom names using mktemp command
As I already said, all files and directories are created with random names. We can also create a temporary file or directory
with a custom name prefix. To do so, simply add at least 3 consecutive 'X's at the end of the name, like below.
$ mktemp ostechnixXXX
ostechnixq70
Similarly, to create directory, just run:
$ mktemp -d ostechnixXXX
ostechnixcBO
Please note that if you choose a custom name, the files/directories will be created in the current working directory, not in /tmp.
In this case, you need to clean them up manually.
Also, as you may have noticed, the X's in the file name are replaced with random characters. You can, however, add any suffix of your
choice.
For instance, I want to add "blog" at the end of the filename. Hence, my command would be:
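Something like this (the random part of the output will of course differ):
$ mktemp --suffix=blog ostechnixXXX
ostechnixq7Nblog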
Now we do have the suffix "blog" at the end of the filename.
If you don't want to create any file or directory, you can simply perform a dry run like below.
$ mktemp -u
/tmp/tmp.oK4N4U6rDG
For help, run:
$ mktemp --help
Why do we actually need mktemp?
You might wonder why we need "mktemp" when we can easily create empty files using the "touch filename" command. The mktemp command
is mainly used for creating temporary files/directories with random names, so we don't need to bother figuring out the names. Since
mktemp randomizes the names, there won't be any name collisions. Also, mktemp creates files safely with permission 600 (rw) and directories
with permission 700 (rwx), so other users can't access them. For more details, check the man pages.
In part 1 of
this article, I introduced you to Unbound , a great name resolution option for
home labs and small network environments. We looked at what Unbound is, and we discussed how to
install it. In this section, we'll work on the basic configuration of Unbound.
Basic configuration
First find and uncomment these two entries in unbound.conf :
interface: 0.0.0.0
interface: ::0
Here, the 0.0.0.0 and ::0 entries indicate that we'll be accepting DNS queries on all
interfaces. If you have more than one interface in your server and need to manage where DNS is
available, you would put the address of that specific interface here instead.
Next, we may want to control who is allowed to use our DNS server. We're going to limit
access to the local subnets we're using. It's a good basic practice to be specific when we
can:
access-control: 127.0.0.0/8 allow # (allow queries from the local host)
access-control: 192.168.0.0/24 allow
access-control: 192.168.1.0/24 allow
We also want to add an exception for local, unsecured domains that aren't using DNSSEC
validation:
domain-insecure: "forest.local"
Now I'm going to add my local authoritative BIND server as a stub-zone:
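The stub-zone block isn't reproduced in this excerpt; it follows the standard unbound.conf syntax, with stub-addr pointing at your BIND server (the address below is a placeholder):
stub-zone:
    name: "forest.local"
    stub-addr: 192.168.0.220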
If you want or need to use your Unbound server as an authoritative server, you can add a set
of local-zone entries that look like this:
local-zone: "forest.local." static
local-data: "jupiter.forest" IN A 192.168.0.200
local-data: "callisto.forest" IN A 192.168.0.222
These can be any type of record you need locally but note again that since these are all in
the main configuration file, you might want to configure them as stub zones if you need
authoritative records for more than a few hosts (see above).
If you were going to use this Unbound server as an authoritative DNS server, you would also
want to make sure you have a root hints file, which is the zone file for the root
DNS servers.
Get the file from InterNIC . It is
easiest to download it directly where you want it. My preference is usually to go ahead and put
it where the other unbound related files are in /etc/unbound :
Then add an entry to your unbound.conf file to let Unbound know where the hints
file goes:
# file to read root hints from.
root-hints: "/etc/unbound/root.hints"
Finally, we want to add at least one entry that tells Unbound where to forward requests to
for recursion. Note that we could forward specific domains to specific DNS servers. In this
example, I'm just going to forward everything out to a couple of DNS servers on the
Internet:
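The forward-zone block follows the standard unbound.conf syntax; the upstream addresses below are just examples, so substitute whichever resolvers you trust:
forward-zone:
    name: "."
    forward-addr: 1.1.1.1
    forward-addr: 8.8.8.8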
Now, as a sanity check, we want to run the unbound-checkconf command, which
checks the syntax of our configuration file. We then resolve any errors we find.
[root@callisto ~]# unbound-checkconf
/etc/unbound/unbound_server.key: No such file or directory
[1584658345] unbound-checkconf[7553:0] fatal error: server-key-file: "/etc/unbound/unbound_server.key" does not exist
This error indicates that a key file which is generated at startup does not exist yet, so
let's start Unbound and see what happens:
[root@callisto ~]# systemctl start unbound
With no fatal errors found, we can go ahead and make it start by default at server
startup:
[root@callisto ~]# systemctl enable unbound
Created symlink from /etc/systemd/system/multi-user.target.wants/unbound.service to /usr/lib/systemd/system/unbound.service.
And you should be all set. Next, let's apply some of our DNS troubleshooting skills to see
if it's working correctly.
First, we need to set our DNS resolver to use the new server:
[root@showme1 ~]# nmcli con mod ext ipv4.dns 192.168.0.222
[root@showme1 ~]# systemctl restart NetworkManager
[root@showme1 ~]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 192.168.0.222
[root@showme1 ~]#
Excellent! We are getting a response from the new server, and it's recursing us to the root
domains. We don't see any errors so far. Now to check on a local host:
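The query itself isn't shown in this excerpt; against the new resolver it would be along the lines of:
[root@showme1 ~]# dig jupiter.forest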
;; ANSWER SECTION:
jupiter.forest. 5190 IN A 192.168.0.200
Great! We are getting the A record from the authoritative server back, and the IP address is
correct. What about external domains?
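Again, the query would look something like:
[root@showme1 ~]# dig redhat.com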
;; ANSWER SECTION:
redhat.com. 3600 IN A 209.132.183.105
Perfect! If we rerun it, will we get it from the cache?
;; ANSWER SECTION:
redhat.com. 3531 IN A 209.132.183.105
;; Query time: 0 msec
;; SERVER: 192.168.0.222#53(192.168.0.222)
Note the Query time of 0 msec; this indicates that the answer lives on the
caching server, so it wasn't necessary to go ask elsewhere. This is the main benefit of a local
caching server, as we discussed earlier.
Wrapping up
While we did not discuss some of the more advanced features that are available in Unbound,
one thing that deserves mention is DNSSEC. DNSSEC is becoming a standard for DNS servers, as it
provides an additional layer of protection for DNS transactions. DNSSEC establishes a trust
relationship that helps prevent things like spoofing and injection attacks. It's worth looking
into a bit if you are running a DNS server that faces the public, even though it's beyond the
scope of this article.
Configure Lsyncd to Synchronize Remote Directories
In this section, we will configure Lsyncd to synchronize /etc/ directory on the local system
to the /opt/ directory on the remote system.
Before starting, you will need to setup SSH key-based authentication between the local
system and remote server so that the local system can connect to the remote server without
password.
On the local system, run the following command to generate a public and private key:
ssh-keygen -t rsa
You should see the following output:
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:c7fhjjhAamFjlk6OkKPhsphMnTZQFutWbr5FnQKSJjE root@ubuntu20
The key's randomart image is:
+---[RSA 3072]----+
| E .. |
| ooo |
| oo= + |
|=.+ % o . . |
|[email protected] oSo. o |
|ooo=B o .o o o |
|=o.... o o |
|+. o .. o |
| . ... . |
+----[SHA256]-----+
The above command will generate a private and public key inside ~/.ssh directory.
Next, you will need to copy the public key to the remote server. You can copy it with the
following command:
ssh-copy-id root@remote-server-ip
You will be asked to provide the password of the remote root user as shown below:
root@remote-server-ip's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@remote-server-ip'"
and check to make sure that only the key(s) you wanted were added.
Once the user is authenticated, the public key will be appended to the remote user
authorized_keys file and connection will be closed.
Now, you should be able log in to the remote server without entering password.
To test it just try to login to your remote server via SSH:
ssh root@remote-server-ip
If everything went well, you will be logged in immediately.
Next, you will need to edit the Lsyncd configuration file and define the rsyncssh and target
host variables:
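The configuration block isn't reproduced in this excerpt; using Lsyncd's documented default.rsyncssh layer, a sketch for the /etc/ to /opt/ scenario above would look roughly like this (the host name and log paths are placeholders):
settings {
    logfile    = "/var/log/lsyncd/lsyncd.log",
    statusFile = "/var/log/lsyncd/lsyncd.status",
}

sync {
    default.rsyncssh,
    source    = "/etc/",
    host      = "remote-server-ip",
    targetdir = "/opt/",
}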
In the above guide, we learned how to install and configure Lsyncd for local synchronization
and remote synchronization. You can now use Lsyncd in the production environment for backup
purposes. Feel free to ask me if you have any questions.
Lsyncd uses a filesystem event interface (inotify or fsevents) to watch for changes to local files and directories.
Lsyncd collates these events for several seconds and then spawns one or more processes to synchronize the changes to a
remote filesystem. The default synchronization method is
rsync
. Thus, Lsyncd is a
light-weight live mirror solution. Lsyncd is comparatively easy to install and does not require new filesystems or block
devices. Lsyncd does not hamper local filesystem performance.
As an alternative to rsync, Lsyncd can also push changes via rsync+ssh. Rsync+ssh allows for much more efficient
synchronization when a file or directory is renamed or moved to a new location in the local tree. (In contrast, plain rsync
performs a move by deleting the old file and then retransmitting the whole file.)
Fine-grained customization can be achieved through the config file. Custom action configs can even be written from
scratch in cascading layers ranging from shell scripts to code written in the
Lua language
.
Thus, simple, powerful and flexible configurations are possible.
Lsyncd 2.2.1 requires rsync >= 3.1 on all source and target machines.
Lsyncd is designed to synchronize a slowly changing local directory tree to a remote mirror. Lsyncd is especially useful
to sync data from a secure area to a not-so-secure area.
Other synchronization tools
DRBD
operates on block device level. This makes it useful for synchronizing systems
that are under heavy load. Lsyncd on the other hand does not require you to change block devices and/or mount points,
allows you to change uid/gid of the transferred files, separates the receiver through the one-way nature of rsync. DRBD is
likely the better option if you are syncing databases.
GlusterFS
and
BindFS
use a FUSE-Filesystem to
interject kernel/userspace filesystem events.
Mirror
is an asynchronous synchronisation tool that makes use of the
inotify notifications much like Lsyncd. The main differences are: it is developed specifically for master-master use, thus
running on a daemon on both systems, uses its own transportation layer instead of rsync and is Java instead of Lsyncd's C
core with Lua scripting.
Lsyncd usage examples
lsyncd -rsync /home remotehost.org::share/
This watches and rsyncs the local directory /home with all sub-directories and transfers them to 'remotehost' using the
rsync-share 'share'.
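The second canonical invocation, using rsync over ssh (the target path here is illustrative):
lsyncd -rsyncssh /home remotehost.org /home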
This will also rsync/watch '/home', but it uses a ssh connection to make moves local on the remotehost instead of
re-transmitting the moved file over the wire.
Disclaimer
Besides the usual disclaimer in the license, we want to specifically emphasize that neither the authors, nor any
organization associated with the authors, can or will be held responsible for data-loss caused by possible malfunctions of
Lsyncd.
The first thing that you want to do anytime that you need to make changes to your disk is to
find out what partitions you already have. Displaying existing partitions allows you to make
informed decisions moving forward and helps you nail down the partition names you will need for
future commands. Run the parted command to start parted in
interactive mode and list partitions. It will default to your first listed drive. You will then
use the print command to display disk information.
[root@rhel ~]# parted /dev/sdc
GNU Parted 3.2
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Error: /dev/sdc: unrecognised disk label
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdc: 1074MB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:
(parted)
Creating new partitions with parted
Now that you can see what partitions are active on the system, you are going to add a new
partition to /dev/sdc . You can see in the output above that there is no partition
table for this partition, so add one by using the mklabel command. Then use
mkpart to add the new partition. You are creating a new primary partition using
the ext4 file system type. For demonstration purposes, I chose to create a 50 MB partition.
(parted) mklabel msdos
(parted) mkpart
Partition type? primary/extended? primary
File system type? [ext2]? ext4
Start? 1
End? 50
(parted)
(parted) print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdc: 1074MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 50.3MB 49.3MB primary ext4 lba
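Note that mkpart only writes a partition table entry; it does not actually create a filesystem on the new partition. Before using the partition you would still format it outside of parted, for example (device name taken from the session above):
mkfs.ext4 /dev/sdc1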
Modifying existing partitions with parted
Now that you have created the new partition at 50 MB, you can resize it to 100 MB, and then
shrink it back to the original 50 MB. First, note the partition number. You can find this
information by using the print command. You are then going to use the
resizepart command to make the modifications.
(parted) resizepart
Partition number? 1
End? [50.3MB]? 100
(parted) print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdc: 1074MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 100MB 99.0MB primary
You can see in the above output that I resized partition number one from 50 MB to 100 MB.
You can then verify the changes with the print command. You can now resize it back
down to 50 MB. Keep in mind that shrinking a partition can cause data loss.
(parted) resizepart
Partition number? 1
End? [100MB]? 50
Warning: Shrinking a partition can cause data loss, are you sure you want to
continue?
Yes/No? yes
(parted) print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdc: 1074MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 50.0MB 49.0MB primary
Removing partitions with parted
Now, let's look at how to remove the partition you created at /dev/sdc1 by
using the rm command inside of the parted suite. Again, you will need
the partition number, which is found in the print output.
NOTE: Be sure that you have all of the information correct here; there are no safeguards or
"are you sure?" questions asked. When you run the rm command, it will
delete the partition number you give it.
(parted) rm 1
(parted) print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdc: 1074MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
A few months ago, I read a very interesting article that contained some good information
about a Linux feature that I wanted to learn more about. I won't tell you the name of the
article, what it was about, or even the web site on which I read it, but the article just made
me shudder.
The reason I found this article so cringe-worthy is that it prefaced every command with the
sudo command. The issue I have with this is that the article is allegedly for
sysadmins, and real sysadmins don't use sudo in front of every command they issue.
To do so is a gross misuse of the sudo command. I have written about this type of
misuse in my book, "The Linux Philosophy for SysAdmins." The following is an excerpt from
Chapter 19 of that book.
In this article, we explore why and how the sudo tool is being misused and how
to bypass the configuration that forces one to use sudo instead of working
directly as root.
sudo or not sudo
Part of being a system administrator and using your favorite tools is to use the tools we
have correctly and to have them available without any restrictions. In this case, I find that
the sudo command is used in a manner for which it was never intended. I have a
particular dislike for how the sudo facility is being used in some distributions,
especially because it is employed to limit and restrict access by people doing the work of
system administration to the tools they need to perform their duties.
"[SysAdmins] don't use sudo."
– Paul Venezia
Venezia explains in his InfoWorld article that sudo is used as a crutch for
sysadmins. He does not spend a lot of time defending this position or explaining it. He just
states this as a fact. And I agree with him – for sysadmins. We don't need the training
wheels in order to do our jobs. In fact, they get in the way.
Some distros, such as Ubuntu, use the sudo command in a manner that is intended
to make the use of commands that require elevated (root) privileges a little more difficult. In
these distros, it is not possible to login directly as the root user so the sudo
command is used to allow non-root users temporary access to root privileges. This is supposed
to make the user a little more careful about issuing commands that need elevated privileges
such as adding and deleting users, deleting files that don't belong to them, installing new
software, and generally all of the tasks that are required to administer a modern Linux host.
Forcing sysadmins to use the sudo command as a preface to other commands is
supposed to make working with Linux safer.
Using sudo in the manner it is by these distros is, in my opinion, a horrible
and ineffective attempt to provide novice sysadmins with a false sense of security. It is
completely ineffective at providing any level of protection. I can issue commands that are just
as incorrect or damaging using sudo as I can when not using it. The distros that
use sudo to anesthetize the sense of fear that we might issue an incorrect command
are doing sysadmins a great disservice. There is no limit or restriction imposed by these
distros on the commands that one might use with the sudo facility. There is no
attempt to actually limit the damage that might be done by actually protecting the system from
the users and the possibility that they might do something harmful – nor should there
be.
So let's be clear about this -- these distributions expect the user to perform all of the
tasks of system administration. They lull the users -- who are really System Administrators --
into thinking that they are somehow protected from the effects of doing anything bad because
they must take this restrictive extra step to enter their own password in order to run the
commands.
Bypass sudo
Distributions that work like this usually lock the password for the root user (Ubuntu is one
of these distros). This way no one can login as root and start working unencumbered. Let's look
at how this works and then how to bypass it.
Let me stipulate the setup here so that you can reproduce it if you wish. As an example, I
installed Ubuntu 16.04 LTS1 in a VM using VirtualBox. During the installation, I created a
non-root user, student, with a simple password for this experiment.
Login as the user student and open a terminal session. Let's look at the entry for root in
the /etc/shadow file, which is where the encrypted passwords are stored.
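The command itself is not shown above; the attempt presumably looked like this (a sketch of the standard behavior):
student@machine1:~$ cat /etc/shadow
cat: /etc/shadow: Permission denied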
Permission is denied, so we cannot look at the /etc/shadow file. This is common
to all distributions so that non-privileged users cannot see and access the encrypted
passwords. That access would make it possible to use common hacking tools to crack those
passwords, so it is insecure to allow that.
Now let's try to su - to root.
student@machine1:~$ su -
Password:
su: Authentication failure
This attempt to use the su command to elevate our user to root privilege fails
because the root account has no password and is locked out. Let's use sudo to look
at the /etc/shadow file.
student@machine1:~$ sudo cat /etc/shadow
[sudo] password for student: <enter the user password>
root:!:17595:0:99999:7:::
<snip>
student:$6$tUB/y2dt$A5ML1UEdcL4tsGMiq3KOwfMkbtk3WecMroKN/:17597:0:99999:7:::
<snip>
I have truncated the results to only show the entry for the root and student users. I have
also shortened the encrypted password so that the entry will fit on a single line. The fields
are separated by colons ( : ) and the second field is the password. Notice that
the password field for root is a "bang," known to the rest of the world as an exclamation point
( ! ). This indicates that the account is locked and that it cannot be used.
Now, all we need to do to use the root account as proper sysadmins is to set up a password
for the root account.
student@machine1:~$ sudo su -
[sudo] password for student: <Enter password for student>
root@machine1:~# passwd root
Enter new UNIX password: <Enter new root password>
Retype new UNIX password: <Re-enter new root password>
passwd: password updated successfully
root@machine1:~#
Now we can login directly on a console as root or su - directly to root
instead of having to use sudo for each command. Of course, we could just use
sudo su - every time we want to login as root -- but why bother?
Please do not misunderstand me. Distributions like Ubuntu and their up- and down-stream
relatives are perfectly fine and I have used several of them over the years. When using Ubuntu
and related distros, one of the first things I do is set a root password so that I can login
directly as root.
Valid uses for sudo
The sudo facility does have its uses. The real intent of sudo is
to enable the root user to delegate to one or two non-root users, access to one or two specific
privileged commands that they need on a regular basis. The reasoning behind this is that of the
lazy sysadmin; allowing the users access to a command or two that requires elevated privileges
and that they use constantly, many times per day, saves the SysAdmin a lot of requests from the
users and eliminates the wait time that the users would otherwise experience. But most non-root
users should never have full root access, just to the few commands that they need.
I sometimes need non-root users to run programs that require root privileges. In cases like
this, I set up one or two non-root users and authorize them to run that single command. The
sudo facility also keeps a log of the user ID of each user that uses it. This
might enable me to track down who made an error. That's all it does; it is not a magical
protector.
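As a sketch of that kind of narrow delegation (the user name and command are hypothetical, not taken from the book excerpt), a single line added with visudo might look like:
student ALL=(root) NOPASSWD: /usr/bin/systemctl restart httpd.service
This lets the user student restart the web server, and nothing else, while sudo still logs every use.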
The sudo facility was never intended to be used as a gateway for commands
issued by a sysadmin. It cannot check the validity of the command. It does not check to see if
the user is doing something stupid. It does not make the system safe from users who have access
to all of the commands on the system even if it is through a gateway that forces them to say
"please" – That was never its intended purpose.
"Unix never says please."
– Rob Pike
This quote about Unix is just as true about Linux as it is about Unix. We sysadmins login as
root when we need to do work as root and we log out of our root sessions when we are done. Some
days we stay logged in as root all day long but we always work as root when we need to. We
never use sudo because it forces us to type more than necessary in order to run
the commands we need to do our jobs. Neither Unix nor Linux asks us if we really want to do
something, that is, it does not say "Please verify that you want to do this."
Yes, I dislike the way some distros use the sudo command. Next time I will
explore some valid use cases for sudo and how to configure it for these cases.
I would like to change the default log file name of the Tera Term terminal log. What I would like
to do is automatically create/append the log in a file name like "loggedinhost-teraterm.log".
I found the following ini setting for the log file. It also uses strftime to format the
log filename.
; Default Log file name. You can specify strftime format to here.
LogDefaultName=teraterm "%d %b %Y" .log
; Default path to save the log file.
LogDefaultPath=
; Auto start logging with default log file name.
LogAutoStart=on
I have modified it to include the date.
Is there any way to prefix the hostname in the logfile name?
I had the same issue, and was able to solve my problem by adding &h like below:
; Default Log file name. You can specify strftime format to here.
LogDefaultName=teraterm &h %d %b %y.log
; Default path to save the log file.
LogDefaultPath=C:\Users\Logs
; Auto start logging with default log file name.
LogAutoStart=on
Specify the editor that is used to display the log file.
Default log file name (strftime format)
Specify the default log file name. It can include a strftime format.
&h Host name (or empty when not connecting)
&p TCP port number (or empty when not connecting, or not a TCP connection)
&u Logon user name
%a Abbreviated weekday name
%A Full weekday name
%b Abbreviated month name
%B Full month name
%c Date and time representation appropriate for locale
%d Day of month as decimal number (01 - 31)
%H Hour in 24-hour format (00 - 23)
%I Hour in 12-hour format (01 - 12)
%j Day of year as decimal number (001 - 366)
%m Month as decimal number (01 - 12)
%M Minute as decimal number (00 - 59)
%p Current locale's A.M./P.M. indicator for 12-hour clock
%S Second as decimal number (00 - 59)
%U Week of year as decimal number, with Sunday as first day of week (00 - 53)
%w Weekday as decimal number (0 - 6; Sunday is 0)
%W Week of year as decimal number, with Monday as first day of week (00 - 53)
%x Date representation for current locale
%X Time representation for current locale
%y Year without century, as decimal number (00 - 99)
%Y Year with century, as decimal number
%z, %Z Either the time-zone name or time zone abbreviation, depending on registry settings;
no characters if time zone is unknown
%% Percent sign
# rsync -avz -e ssh [email protected]:/root/2daygeek.tar.gz /root/backup
The authenticity of host 'jump.2daygeek.com (jump.2daygeek.com)' can't be established.
RSA key fingerprint is 6f:ad:07:15:65:bf:54:a6:8c:5f:c4:3b:99:e5:2d:34.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'jump.2daygeek.com' (RSA) to the list of known hosts.
[email protected]'s password:
receiving file list ... done
2daygeek.tar.gz
sent 42 bytes received 23134545 bytes 1186389.08 bytes/sec
total size is 23126674 speedup is 1.00
You can see the copied file using the ls command.
# ls -lh /root/backup/*.tar.gz
total 125M
-rw------- 1 root root 23M Oct 26 01:00 2daygeek.tar.gz
2) How to Use rsync Command in Reverse Mode with Non-Standard Port
We will copy the "2daygeek.tar.gz" file from the "Remote Server" to the "Jump Server" using the reverse rsync command with the
non-standard port.
# rsync -avz -e "ssh -p 11021" [email protected]:/root/backup/weekly/2daygeek.tar.gz /root/backup
The authenticity of host '[jump.2daygeek.com]:11021 ([jump.2daygeek.com]:11021)' can't be established.
RSA key fingerprint is 9c:ab:c0:5b:3b:44:80:e3:db:69:5b:22:ba:d6:f1:c9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[jump.2daygeek.com]:11021' (RSA) to the list of known hosts.
[email protected]'s password:
receiving incremental file list
2daygeek.tar.gz
sent 30 bytes received 23134526 bytes 1028202.49 bytes/sec
total size is 23126674 speedup is 1.00
3) How to Use scp Command in Reverse Mode on Linux
We will copy the "2daygeek.tar.gz" file from the "Remote Server" to the "Jump Server" using the reverse scp command.
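The scp command itself is not shown above; following the same pattern as the rsync examples (with "root" as a placeholder for whatever account you use on the jump server), it would look roughly like:
# scp root@jump.2daygeek.com:/root/backup/weekly/2daygeek.tar.gz /root/backup
For the non-standard port case, scp takes the port with a capital -P, e.g. scp -P 11021 ...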
Pscp utility allows you to transfer/copy files to multiple remote Linux
servers using single terminal with one single command, this tool is a part of Pssh (Parallel
SSH Tools), which provides parallel versions of OpenSSH and other similar tools such as:
pscp – a utility for copying files in parallel to a number of hosts.
prsync – a utility for efficiently copying files to multiple hosts in
parallel.
pnuke – helps to kill processes on multiple remote hosts in parallel.
pslurp – helps to copy files from multiple remote hosts to a central host in
parallel.
When working in a network environment where there are multiple hosts on the network, a
System Administrator may find these tools listed above very useful.
Pscp – Copy Files to Multiple Linux Servers
In this article, we shall look at some useful examples of the Pscp utility to transfer/copy files to
multiple Linux hosts on a network. To use the pscp tool, you need to install the PSSH utility on your
Linux system; for installation of PSSH you can read this article.
Almost all the different options used with these tools are the same, except for a few that
are related to the specific functionality of a given utility.
How to Use Pscp to Transfer/Copy Files to Multiple Linux Servers
While using pscp you need to create a separate file that includes the list of Linux server IP addresses
and the SSH port numbers that you need to connect to the servers.
Copy Files to Multiple Linux Servers
Let's create a new file called "myscphosts.txt" and add the list of Linux hosts'
IP addresses and SSH ports (default 22) as shown.
192.168.0.3:22
192.168.0.9:22
Once you've added hosts to the file, it's time to copy files from the local machine to multiple
Linux hosts under the /tmp directory with the help of the following command.
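The pscp command itself is not shown above; given the options explained after the output below, it would look roughly like this (the user name and file name are placeholders):
# pscp -h myscphosts.txt -l root -Av backup.tar.gz /tmp/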
Warning: do not enter your password if anyone else has superuser
privileges or access to your account.
Password:
[1] 17:48:25 [SUCCESS] 192.168.0.3:22
[2] 17:48:35 [SUCCESS] 192.168.0.9:22
Explanation about the options used in the above command.
-h switch is used to read hosts from the given file.
-l switch sets a default username for all hosts that do not define a specific user.
-A switch tells pscp to ask for a password and send it to ssh.
-v switch is used to run pscp in verbose mode.
Copy Directories to Multiple Linux Servers
If you want to copy an entire directory, use the -r option, which will recursively copy entire
directories as shown.
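Again, the exact command is not shown above; a sketch with a placeholder directory name:
# pscp -h myscphosts.txt -l root -Av -r backups/ /tmp/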
Warning: do not enter your password if anyone else has superuser
privileges or access to your account.
Password:
[1] 17:48:25 [SUCCESS] 192.168.0.3:22
[2] 17:48:35 [SUCCESS] 192.168.0.9:22
You can view the manual page for pscp or use the pscp --help command to
get help.
It didn't work for me either. I can get into the machine through the same IP and port as
I've inserted into the hosts.txt file. Still I get the below messages:
Have you placed the correct remote SSH host IP address and port number in the
myscphosts.txt file? Please confirm, add the correct values and then try
again.
I think this approach is way too complex. A simpler and more reliable approach is first to create the directory structure and then,
as the second stage, to copy the files.
The use of the cp command option --parents -- create the intermediate parent directories if needed to preserve
the parent directory structure -- is interesting, though.
Notable quotes:
"... create the intermediate parent directories if needed to preserve the parent directory structure. ..."
A while ago, we learned how to
copy certain types of files from one directory to another
in Linux. Today we are
going to do the same but preserve the directory structure as well. This brief tutorial explains how to
copy specific file types while keeping the directory structure in Linux. Here I have given two
different ways to do this. Just pick the one that works for you.
Copy Specific File Types While Keeping Directory Structure In Linux
Picture this scenario. I have a directory named "Linux" with different types of files saved in different
sub-directories. Have a look at the following directory structure:
$ tree Linux/
Linux/
├── dir1
│ ├── English
│ │ └── Kina - Can We Kiss Forever.mp3
│ ├── Instrumental
│ │ └── Chill Study Beats.mp3
│ └── Tamil
│ ├── Kannan Vanthu.mp3
│ └── yarenna.mp3
├── dir2
│ ├── file.docx
│ └── Raja Raja Chozan Naan.mp3
├── dir3
│ ├── Bamboo Flute - Meditation - Healing - Sleep - Zen.mp3
│ └── pic.jpg
└── dir4
├── Aaruyirae.mp3
└── video.mp4
7 directories, 10 files
As you see in the above directory structure, the Linux directory has four
sub-directories, namely dir1, dir2, dir3 and dir4. The mp3 files are scattered across all four
sub-directories. Now, I want to copy all mp3 files to another directory named "ostechnix",
and I also want to keep the same directory structure in the target directory.
First we will see how to do this using the "find" command.
Method 1 -- Copy specific file types while
preserving directory structure using "find" and "cp" or "cpio" commands
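The command being broken down is not shown above; reconstructed from the option list that follows, it is (run from inside the ~/Linux directory):
$ find . -name '*.mp3' -exec cp --parents \{\} ~/ostechnix \;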
Let us break down the above command and see what each option does.
find -- command to find files and folders in Unix-like systems.
the dot (.) -- represents that we copy the contents from the current directory.
-name '*.mp3' -- search for files matching the extension .mp3.
-exec cp -- execute the 'cp' command to copy files from the source to the destination
directory.
--parents -- create the intermediate parent directories if needed to preserve
the parent directory structure.
\{\} -- is automatically replaced with the file name of the files found by the
'find' command. The braces are escaped to protect them from
expansion by the shell in some "find" command versions. You can also use {}
without escape characters.
~/ostechnix -- the target directory to save the matching files.
\; -- indicates that the command to be executed is now complete, and to
carry out the command again on the next match.
This command will find and copy all mp3 type files from ~/Linux directory to ~/ostechnix directory.
And also it preserves the same directory structure in the target directory.
You can verify it using "tree" command at both locations like below.
As you see in the above output, the destination directory only has the mp3 files, and its directory
structure is the same as the source directory's.
If you are doing this from some other location, specify the full path of source directory like
below.
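For example (the paths are placeholders):
$ find /home/user/Linux -name '*.mp3' -exec cp --parents \{\} /home/user/ostechnix \;
Note that with an absolute source path, cp --parents recreates the full source path underneath the target directory.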
Method 2 -- Copy specific file types while
preserving directory structure using Rsync
Rsync
is a powerful tool for copying files to/from local and remote systems. To copy certain
types of files from one directory to another while keeping the parent directory structure, run:
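The rsync command itself is not shown above; reconstructed from the options explained below, it is roughly:
$ rsync -a -m --include="*/" --include="*.mp3" --exclude="*" ~/Linux/ ~/ostechnix/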
-a
-- archive mode to preserve almost everything (including symlinks,
modification dates, file permissions, owners etc.)
-m, --prune-empty-dirs
-- prune empty directories from source tree. If you want
to include empty directories, just remove this option from the above command.
--include="*/" --include="*.mp3″ --exclude="*"
-- To include only specific files,
you first need to include those specific files, then exclude all other files. In our case, we have
included *.mp3 files and exclude everything else.
It is surprising how capable the Midnight Commander's built-in editor turned out to be. Below is one of the features of
mc 4.7, namely the use of the ctags / etags utilities together with mcedit to navigate through
the code.
Code Navigation Training
Support for this functionality appeared in mcedit from version 4.7.0-pre1.
To use it, you need to index the directory with the project using the ctags or etags utility,
for this you need to run the following commands:
$ cd /home/user/projects/myproj
$ find . -type f -name "*.[ch]" | etags -lc --declarations -
or
$ find . -type f -name "*.[ch]" | ctags --c-kinds=+p --fields=+iaS --extra=+q -e -L-
After the utility completes, a TAGS file will appear in the root directory of our project,
which mcedit will use.
That is practically all that needs to be done in order for mcedit to find the definitions of
functions, variables, or properties of the object under study.
Using
Imagine that we need to determine the place where the definition of the locked property
of an edit object is located in some source code of a rather large project.
/* Succesful, so unlock both files */
if (different_filename) {
if (save_lock)
edit_unlock_file (exp);
if (edit->locked)
edit->locked = edit_unlock_file (edit->filename);
} else {
if (edit->locked || save_lock)
edit->locked = edit_unlock_file (edit->filename);
}
Using Ubuntu 10.10, the editor in mc (midnight commander) is nano. How can I switch to the
internal mc editor (mcedit)?
Isaiah ,
Press the following keys in order, one at a time:
F9 Activates the top menu.
o Selects the Option menu.
c Opens the configuration dialog.
i Toggles the use internal edit option.
s Saves your preferences.
Hurnst , 2014-06-21 02:34:51
Run MC as usual. On the command line right above the bottom row of menu selections type
select-editor . This should open a menu with a list of all of your installed
editors. This is working for me on all my current linux machines.
, 2010-12-09 18:07:18
You can also change the standard editor. Open a terminal and type this command:
sudo update-alternatives --config editor
You will get an list of the installed editors on your system, and you can chose your
favorite.
AntonioK , 2015-01-27 07:06:33
If you want to leave mc and system settings as it is now, you may just run it like
$ EDITOR=mcedit
Open Midnight Commander, go to Options -> Configuration and check "use internal editor"
Hit save and you are done.
Your hostname is a vital piece of system information that you need to keep track of as a system administrator.
Hostnames are the designations by which we separate systems into easily recognizable assets. This information is
especially important to make a note of when working on a remotely managed system. I have experienced multiple
instances of companies changing the hostnames or IPs of storage servers and then wondering why their data
replication broke. There are many ways to change your hostname in Linux; however, in this article, I'll focus on
changing your name as viewed by the network (specifically in Red Hat Enterprise Linux and Fedora).
Background
A quick bit of background. Before the invention of DNS, your computer's hostname was managed through the HOSTS
file located at /etc/hosts. Anytime that a new computer was connected to your local network, all other computers
on the network needed to add the new machine into the /etc/hosts file in order to communicate over the network.
As this method did not scale with the transition into the world wide web era, DNS was a clear way forward. With
DNS configured, your systems are smart enough to translate unique IPs into hostnames and back again, ensuring
that there is little confusion in web communications.
Modern Linux systems have three different types of hostnames configured. To minimize confusion, I list them here
and provide basic information on each as well as a personal best practice:
Transient hostname: How the network views your system.
Static hostname: Set by the kernel.
Pretty hostname: The user-defined hostname.
It is recommended to pick a pretty hostname that is unique and not easily confused with other systems. Allow the
transient and static names to be variations on the pretty, and you will be good to go in most circumstances.
Working with hostnames
Now, let's look at how to view your current hostname. The most basic command used to see this information is
hostname -f. This command displays the system's fully qualified domain name (FQDN). To relate back to the three
types of hostnames, this is your transient hostname. A better way, at least in terms of the information provided,
is to use the systemd command hostnamectl to view your transient hostname and other system information:
Before moving on from the hostname command, I'll show you how to use it to change your transient hostname. Using
hostname <x> (where x is the new hostname), you can change your network name quickly, but be careful. I once
changed the hostname of a customer's server by accident while trying to view it. That was a small but painful
error that I overlooked for several hours. You can see that process below:
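The screenshot is not reproduced here; a small sketch of that change (the hostname value is hypothetical, run as root):
# hostname rhel-box01
# hostname
rhel-box01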
It is also possible to use the hostnamectl command to change your hostname. This command, in conjunction with
the right flags, can be used to alter all three types of hostnames. As stated previously, for the purposes of
this article, our focus is on the transient hostname. The command and its output look something like this:
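The screenshot is not reproduced here; a sketch with a hypothetical name (--static and --pretty target the other two hostname types in the same way):
# hostnamectl set-hostname --transient rhel-box01
# hostnamectl --transient
rhel-box01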
The final method to look at is the sysctl command. This command allows you to change the kernel parameter for
your transient name without having to reboot the system. That method looks something like this:
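The screenshot is not reproduced here; a sketch of the sysctl variant (hypothetical name again, run as root):
# sysctl kernel.hostname=rhel-box01
kernel.hostname = rhel-box01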
GNOME tip
Using GNOME, you can go to Settings -> Details to view and change the static and pretty hostnames. See below:
Wrapping up
I hope that you found this information useful as a quick and easy way to manipulate your machine's
network-visible hostname. Remember to always be careful when changing system hostnames, especially in enterprise
environments, and to document changes as they are made.
A micro data center (MDC) is a smaller or containerized (modular) data center architecture that is
designed for computer workloads not requiring traditional facilities. Whereas the size may vary
from rack to container, a micro data center may include fewer than four servers in a single
19-inch rack. It may come with built-in security systems, cooling systems, and fire protection.
Typically these are standalone rack-level systems containing all the components of a
'traditional' data center, [1] including in-rack
cooling, power supply, power backup, security, and fire suppression. Designs exist where energy
is conserved by means of temperature chaining, in combination
with liquid cooling. [2]
In mid-2017, technology introduced by the DOME project was demonstrated enabling 64
high-performance servers, storage, networking, power and cooling to be integrated in a 2U 19"
rack-unit. This packaging, sometimes called 'datacenter-in-a-box' allows deployments in spaces
where traditional data centers do not fit, such as factory floors ( IOT ) and dense city centers, especially
for edge-computing
and edge-analytics.
MDCs are typically portable and provide plug and play features. They can be rapidly
deployed indoors or outdoors, in remote locations, for a branch office, or for temporary use in
high-risk zones. [3] They enable
distributed
workloads , minimizing downtime and increasing speed of response.
A micro data center, a mini version of a data center rack, could work as edge computing
takes hold in various industries. Here's a look at the moving parts behind the micro data
center concept.
As the number of places where we store data increases, the basic concept of what is referred
to as the 3-2-1 rule often gets forgotten. This is a problem, because the 3-2-1 rule is easily
one of the most foundational concepts for designing backup systems. It's important to understand why the rule
was created, and how it's currently being interpreted in an increasingly tapeless
world.
What is the 3-2-1 rule for backup?
The 3-2-1 rule says there should be at least three copies or versions of data stored on two
different pieces of media, one of which is off-site. Let's take a look at each of the three
elements and what it addresses.
3 copies or versions: Having at least three different versions of your data over
different periods of time ensures that you can recover from accidents that affect multiple
versions. Any good backup system will have many more than three copies.
2 different media: You should not have both copies of your data on the same media.
Consider, for example, Apple's Time Machine. You can fool it using Disk Utility to split your
hard drive into two virtual volumes, and then use Time Machine to back up the first volume to
the "second" volume. If the primary drive fails, the backup will fail as well. This is why
you always have the backup on different media than the original.
1 backup off-site: A speaker at a conference once said he didn't like tapes because he
put them in a box on top of a server, and they melted when the server caught fire. The
problem wasn't tape; the problem was he put his backups on top of his server. Your backup
copies, or at least one version of them, should be stored in a different physical location
than the thing you are backing up.
Mind the air gap
An air gap is a way of securing a copy of data by placing it on a machine on a network that
is physically separate from the data it is backing up. It literally means there is a gap of air
between the primary and the backup. This air gap accomplishes more than simple disaster
recovery; it is also very useful for protecting against hackers.
If all backups are accessible via the same computers that might be attacked, it is possible
that a hacker could use a compromised server to attack your backup server. By separating the
backup from the primary via an air gap, you make it harder for a hacker to pull that off. It's
still not impossible, just harder.
Everyone wants an air gap. The discussion these days is how to accomplish an air gap without
using tapes. Back in the days of tape backup, it was easy to provide an air gap. You made a
backup copy of your data and put it in a box, then you handed it to an Iron Mountain driver.
Instantly, there was a gap of air between your primary and your backup. It was close to
impossible for a hacker to attack both the primary and the backup.
That is not to say it was impossible; it just made it harder. For hackers to attack your
secondary copy, they needed to resort to a physical attack via social engineering. You might
think that tapes stored in an off-site storage facility would be impervious to a physical
attack via social engineering, but that is definitely not the case. (I have personally
participated in white hat attacks of off-site storage facilities, successfully penetrated them
and been left unattended with other people's backups.) Most hackers don't resort to physical
attacks because they are just too risky, so air-gapping backups greatly reduces the risk that
they will be compromised.
Faulty 3-2-1 implementations
Many things that pass for backup systems now do not pass even the most liberal
interpretation of the 3-2-1 rule. A perfect example of this would be various cloud-based
services that store the backups on the same servers and the same storage facility that they are
protecting, ignoring the "2" and the "1" in this important rule.
Cost estimates in optimistic spreadsheets and costs in actual life for
large-scale moves to the cloud are very different. Now companies that jumped on the cloud bandwagon
discover that the savings are illusory and control over the infrastructure is difficult, and that
the cloud provider now controls their future.
Notable quotes:
"... On average, businesses started planning their migration to the cloud in 2015, and kicked off the process in 2016. According to the report, one reason clearly stood out as the push factor to adopt cloud computing : 61% of businesses started the move primarily to reduce the costs of keeping data on-premises. ..."
"... Capita's head of cloud and platform Wasif Afghan told ZDNet: "There has been a sort of hype about cloud in the past few years. Those who have started migrating really focused on cost saving and rushed in without a clear strategy. Now, a high percentage of enterprises have not seen the outcomes they expected. ..."
"... The challenges "continue to spiral," noted Capita's report, and they are not going away; what's more, they come at a cost. Up to 58% of organisations said that moving to the cloud has been more expensive than initially thought. The trend is not only confined to the UK: the financial burden of moving to the cloud is a global concern. Research firm Canalys found that organisations splashed out a record $107 billion (£83 billion) for cloud computing infrastructure last year, up 37% from 2018, and that the bill is only set to increase in the next five years. Afghan also pointed to recent research by Gartner, which predicted that through 2020, 80% of organisations will overshoot their cloud infrastructure budgets because of their failure to manage cost optimisation. ..."
"... Clearly, the escalating costs of switching to the cloud is coming as a shock to some businesses - especially so because they started the move to cut costs. ..."
"... As a result, IT leaders are left feeling frustrated and underwhelmed by the promises of cloud technology ..."
A new report by Capita shows that UK businesses are growing disillusioned by their move to
the cloud. It might be because they are focusing too much on the wrong goals.
Migrating to the cloud seems to be on every CIO's to-do list these days. But despite the
hype, almost 60% of UK businesses think that cloud has over-promised and under-delivered,
according to a report commissioned by consulting company Capita.
The research surveyed 200 IT decision-makers in the UK, and found that an overwhelming nine
in ten respondents admitted that cloud migration has been delayed in their organisation due to
"unforeseen factors".
On average, businesses started planning their migration to the cloud in 2015, and kicked
off the process in 2016. According to the report, one reason clearly stood out as the push factor to adopt
cloud computing : 61% of businesses started the move primarily to reduce the costs of
keeping data on-premises.
But with organisations setting aside only one year to prepare for migration, which the
report described as "less than adequate planning time," it is no surprise that most companies
have encountered stumbling blocks on their journey to the cloud.
Capita's head of cloud and platform Wasif Afghan told ZDNet: "There has been a sort of
hype about cloud in the past few years. Those who have started migrating really focused on cost
saving and rushed in without a clear strategy. Now, a high percentage of enterprises have not
seen the outcomes they expected. "
Four years later, in fact, less than half (45%) of the companies' workloads and applications
have successfully migrated, according to Capita. A meager 5% of respondents reported that they
had not experienced any challenge in cloud migration; but their fellow IT leaders blamed
security issues and the lack of internal skills as the main obstacles they have had to tackle
so far.
Half of respondents said that they had to re-architect more workloads than expected to
optimise them for the cloud. Afghan noted that many businesses have adopted a "lift and shift"
approach, taking everything they were storing on premises and shifting it into the public
cloud. "Except in some cases, you need to re-architect the application," said Afghan, "and now
it's catching up with organisations."
The challenges "continue to spiral," noted Capita's report, and they are not going away;
what's more, they come at a cost. Up to 58% of organisations said that moving to the cloud has
been more expensive than initially thought. The trend is not only confined to the UK: the
financial burden of moving to the cloud is a global concern. Research firm Canalys found that
organisations splashed out a record $107 billion (£83 billion) for cloud computing
infrastructure last year, up 37% from 2018, and that the bill is only set to increase in the
next five years. Afghan also pointed to recent research by Gartner, which predicted that
through 2020, 80% of
organisations will overshoot their cloud infrastructure budgets because of their failure to
manage cost optimisation.
Infrastructure, however, is not the only cost of moving to the cloud. IDC analysed the
overall spending on cloud services, and predicted that investments will reach $500 billion
(£388.4 billion) globally by 2023. Clearly, the escalating costs of switching to the
cloud is coming as a shock to some businesses - especially so because they started the move to
cut costs.
Afghan said: "From speaking to clients, it is pretty clear that cloud expense is one of
their chief concerns. The main thing on their minds right now is how to control that spend."
His response to them, he continued, is better planning. "If you decide to move an application
in the cloud, make sure you architect it so that you get the best return on investment," he
argued. "And then monitor it. The cloud is dynamic - it's not a one-off event."
Capita's research did find that IT leaders still have faith in the cloud, with the majority
(86%) of respondents agreeing that the benefits of the cloud will outweigh its downsides. But
on the other hand, only a third of organisations said that labour and logistical costs have
decreased since migrating; and a minority (16%) said they were "extremely satisfied" with the
move.
"Most organisations have not yet seen the full benefits or transformative potential of their
cloud investments," noted the report.
As a result, IT leaders are left feeling frustrated and underwhelmed by the promises of
cloud technology ...
One quick way to determine whether the command you are using is a bash built-in or not is to
use the command "command". Yes, the command is called "command". Try it with a -V (capital V)
option like this:
$ command -V command
command is a shell builtin
$ command -V echo
echo is a shell builtin
$ command -V date
date is hashed (/bin/date)
When you see a "command is hashed" message like the one above, that means that the command
has been put into a hash table for quicker lookup.
... ... ...
How to tell what shell you're currently using
If you switch shells you can't depend on $SHELL to tell you what shell you're currently
using because $SHELL is just an environment variable that is set when you log in and doesn't
necessarily reflect your current shell. Try ps -p $$ instead as shown in these examples:
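The examples themselves are not shown above; a typical run looks like this (the PID and shell will differ on your system):
$ ps -p $$
  PID TTY          TIME CMD
 4325 pts/0    00:00:00 bash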
Built-ins are extremely useful and give each shell a lot of its character. If you use some
particular shell all of the time, it's easy to lose track of which commands are part of your
shell and which are not.
Differentiating a shell built-in from a Linux executable requires only a little extra
effort.
For security reasons, it defaults to "", which disables explainshell integration. When set, this extension will
send requests to the endpoint and display documentation for flags.
Once https://github.com/idank/explainshell/pull/125
is merged, it would be possible to set this to "https://explainshell.com"; however, doing this is not recommended, as
it will leak all your shell scripts to a third party -- do this at your own risk, or better, always use a locally running
Docker image.
The granddaddy of HTML tools, with support for modern standards.
There used to be a fork called tidy-html5, which has since become the official version. Here is
its GitHub repository.
Tidy is a console application for Mac OS X, Linux, Windows, UNIX, and more. It corrects
and cleans up HTML and XML documents by fixing markup errors and upgrading legacy code to
modern standards.
For your needs, here is the command line to call Tidy:
tidy inputfile.html
Paul Brit ,
Update 2018: The homebrew/dupes is now deprecated, tidy-html5 may be directly
installed.
brew install tidy-html5
Original reply:
Tidy from OS X doesn't support HTML5. But there is an experimental
branch on GitHub which does.
To get it:
brew tap homebrew/dupes
brew install tidy --HEAD
brew untap homebrew/dupes
That's it! Have fun!
Boris , 2019-11-16 01:27:35
Error: No available formula with the name "tidy" . brew install
tidy-html5 works. – Pysis Apr 4 '17 at 13:34
I tried to rm -rf a folder, and got "device or resource busy".
In Windows, I would have used LockHunter to resolve this. What's the linux equivalent? (Please give as answer a simple "unlock
this" method, and not complete articles like this one .
Although they're useful, I'm currently interested in just ASimpleMethodThatWorks)
camh , 2011-04-13 09:22:46
The tool you want is lsof , which stands for list open files .
It has a lot of options, so check the man page, but if you want to see all open files under a directory:
lsof +D /path
That will recurse through the filesystem under /path , so beware doing it on large directory trees.
Once you know which processes have files open, you can exit those apps, or kill them with the kill(1) command.
kip2 , 2014-04-03 01:24:22
sometimes it's the result of mounting issues, so I'd unmount the filesystem or directory you're trying to remove:
umount /path
BillThor ,
I use fuser for this kind of thing. It will list which process is using a file or files within a mount.
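A sketch of that usage (the path is a placeholder):
fuser -vm /path/to/busy/dir
The -m option reports the processes using the filesystem the path lives on, and -v adds the user, PID, and command name.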
user73011 ,
Here is the solution:
Go into the directory and type ls -a
You will find a .xyz file
vi .xyz and look into what is the content of the file
ps -ef | grep username
You will see the .xyz content in the 8th column (last row)
kill -9 job_ids - where job_ids is the value in the 2nd column of the row whose 8th column matches the content that caused the
error
Now try to delete the folder or file.
Choylton B. Higginbottom ,
I had this same issue, built a one-liner starting with @camh recommendation:
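The one-liner itself is not shown above; based on the description that follows, it was along these lines:
lsof +D /path | awk '{print $2}' | tail -n +2 | xargs kill -9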
The awk command grabs the PIDs. The tail command gets rid of the pesky first entry: "PID". I used
-9 on kill; others might have safer options.
user5359531 ,
I experience this frequently on servers that have NFS network file systems. I am assuming it has something to do with the filesystem,
since the files are typically named like .nfs000000123089abcxyz .
My typical solution is to rename or move the parent directory of the file, then come back later in a day or two and the file
will have been removed automatically, at which point I am free to delete the directory.
This typically happens in directories where I am installing or compiling software libraries.
gloriphobia , 2017-03-23 12:56:22
I had this problem when an automated test created a ramdisk. The commands suggested in the other answers, lsof and
fuser , were of no help. After the tests I tried to unmount it and then delete the folder. I was really confused
for ages because I couldn't get rid of it -- I kept getting "Device or resource busy" !
By accident I found out how to get rid of a ramdisk. I had to unmount it the same number of times that I had run the
mount command, i.e. sudo umount path
Due to the fact that it was created using automated testing, it got mounted many times, hence why I couldn't get rid of it
by simply unmounting it once after the tests. So, after I manually unmounted it lots of times it finally became a regular folder
again and I could delete it.
Hopefully this can help someone else who comes across this problem!
bil , 2018-04-04 14:10:20
Riffing off of Prabhat's question above, I had this issue in macOS High Sierra when I stranded an encfs process; rebooting solved
it, but this:
ps -ef | grep name-of-busy-dir
Showed me the process and the PID (column two).
sudo kill -15 pid-here
fixed it.
Prabhat Kumar Singh , 2017-08-01 08:07:36
If you have the server accessible, try
deleting that dir from the server,
or do umount and mount again; try umount -l (lazy umount) if facing any issue with a normal umount.
Example of my second to day, hour, minute, second converter:
# convert seconds to day-hour:min:sec
convertsecs2dhms() {
((d=${1}/(60*60*24)))
((h=(${1}%(60*60*24))/(60*60)))
((m=(${1}%(60*60))/60))
((s=${1}%60))
printf "%02d-%02d:%02d:%02d\n" $d $h $m $s
# PRETTY OUTPUT: uncomment below printf and comment out above printf if you want prettier output
# printf "%02dd %02dh %02dm %02ds\n" $d $h $m $s
}
# setting test variables: testing some constant variables & evaluated variables
TIME1="36"
TIME2="1036"
TIME3="91925"
# one way to output results
((TIME4=$TIME3*2)) # 183850
((TIME5=$TIME3*$TIME1)) # 3309300
((TIME6=100*86400+3*3600+40*60+31)) # 8653231 s = 100 days + 3 hours + 40 min + 31 sec
# outputting results: another way to show results (via echo & command substitution with backticks)
echo $TIME1 - `convertsecs2dhms $TIME1`
echo $TIME2 - `convertsecs2dhms $TIME2`
echo $TIME3 - `convertsecs2dhms $TIME3`
echo $TIME4 - `convertsecs2dhms $TIME4`
echo $TIME5 - `convertsecs2dhms $TIME5`
echo $TIME6 - `convertsecs2dhms $TIME6`
# OUTPUT WOULD BE LIKE THIS (If none pretty printf used):
# 36 - 00-00:00:36
# 1036 - 00-00:17:16
# 91925 - 01-01:32:05
# 183850 - 02-03:04:10
# 3309300 - 38-07:15:00
# 8653231 - 100-03:40:31
# OUTPUT WOULD BE LIKE THIS (If pretty printf used):
# 36 - 00d 00h 00m 36s
# 1036 - 00d 00h 17m 16s
# 91925 - 01d 01h 32m 05s
# 183850 - 02d 03h 04m 10s
# 3309300 - 38d 07h 15m 00s
# 8653231 - 100d 03h 40m 31s
Basile Starynkevitch ,
If $i represents some date in second since the Epoch, you could display it with
date -u -d @$i +%H:%M:%S
but you seem to suppose that $i is an interval (e.g. some duration), not a
date, and then I don't understand what you want.
Shilv , 2016-11-24 09:18:57
I use C shell, like this:
#! /bin/csh -f
set begDate_r = `date +%s`
set endDate_r = `date +%s`
set secs = `echo "$endDate_r - $begDate_r" | bc`
set h = `echo $secs/3600 | bc`
set m = `echo "$secs/60 - 60*$h" | bc`
set s = `echo $secs%60 | bc`
echo "Formatted Time: $h HOUR(s) - $m MIN(s) - $s SEC(s)"
Continuing @Daren's answer, just to be clear: If you want the conversion to use your time
zone, don't use the "u" switch, as in: date -d @$i +%T or in some cases
date -d @"$i" +%T
Rsync provides many options for altering the default behavior of the utility. We have
already discussed some of the more necessary flags.
If you are transferring files that have not already been compressed, like text files, you
can reduce the network transfer by adding compression with the -z option:
rsync -az source destination
The -P flag is very helpful. It combines the flags --progress and --partial.
The first of these gives you a progress bar for the transfers and the second allows you to resume interrupted transfers:
rsync -azP source destination
If we run the command again, we will get a shorter output, because no changes have been made. This illustrates
rsync's ability to use modification times to determine if changes have been made.
rsync -azP source destination
We can update the modification time on some of the files and see that rsync intelligently re-copies only the changed
files:
touch dir1/file{1..10}
rsync -azP source destination
In order to keep two directories truly in sync, it is necessary to delete files from the destination directory if
they are removed from the source. By default, rsync does not delete anything from the destination directory.
We can change this behavior with the --delete option. Before using this option, use the --dry-run
option and do testing to prevent data loss:
rsync -a --delete source destination
If you wish to exclude certain files or directories located inside a directory you are syncing, you can do so by
specifying them in a comma-separated list following the --exclude= option:
rsync -a --exclude=pattern_to_exclude source destination
If we have specified a pattern to exclude, we can override that exclusion for files that match a different pattern by
using the --include= option:
rsync -a --exclude=pattern_to_exclude --include=pattern_to_include source destination
Finally, rsync's --backup option can be used to store backups of important files. It is used in
conjunction with the --backup-dir option, which specifies the directory where the backup files should be stored:
rsync -a --delete --backup --backup-dir=/path/to/backups /path/to/source destination
This tutorial describes how to setup a local Yum repository on CentOS 7 system. Also, the
same steps should work on RHEL and Scientific Linux 7 systems too.
If you have to install software, security updates and fixes often on multiple systems in
your local network, then having a local repository is an efficient way to do it. Because all required
packages are downloaded over the fast LAN connection from your local server, it will
save your Internet bandwidth and reduce your annual Internet cost.
In this tutorial, I use two systems as described below:
Yum Server OS : CentOS 7 (Minimal Install)
Yum Server IP Address : 192.168.1.101
Client OS : CentOS 7 (Minimal Install)
Client IP Address : 192.168.1.102
Prerequisites
First, mount your CentOS 7 installation DVD. For example, let us mount the installation
media on /mnt directory.
mount /dev/cdrom /mnt/
Now the CentOS installation DVD is mounted under the /mnt directory. Next, install the vsftpd package
and make the packages available over FTP to your local clients.
To do that change to /mnt/Packages directory:
cd /mnt/Packages/
Now install vsftpd package:
rpm -ivh vsftpd-3.0.2-9.el7.x86_64.rpm
Enable and start vsftpd service:
systemctl enable vsftpd
systemctl start vsftpd
We need a package called "createrepo" to create our local repository. So let us install it
too.
If you did a minimal CentOS installation, then you might need to install the following
dependencies first:
It's time to build our local repository. Create a storage directory to store all packages
from the CentOS DVDs.
As I noted above, we are going to use a FTP server to serve all packages to client systems.
So let us create a storage location in our FTP server pub directory.
mkdir /var/ftp/pub/localrepo
Now, copy all the files from CentOS DVD(s) i.e from /mnt/Packages/ directory to the
"localrepo" directory:
cp -ar /mnt/Packages/*.* /var/ftp/pub/localrepo/
Again, mount the CentOS installation DVD 2 and copy all the files to /var/ftp/pub/localrepo
directory.
Once you copied all the files, create a repository file called "localrepo.repo" under
/etc/yum.repos.d/ directory and add the following lines into the file. You can name this file
as per your liking:
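The lines themselves are not shown above; a minimal sketch of such a repo file, pointing at the localrepo directory created earlier (on client machines the baseurl would instead point at the FTP server, e.g. ftp://192.168.1.101/pub/localrepo/):
[localrepo]
name=Local CentOS 7 Repository
baseurl=file:///var/ftp/pub/localrepo/
gpgcheck=0
enabled=1
You also need to generate the repository metadata once the packages are in place, e.g. createrepo -v /var/ftp/pub/localrepo/. After that, a test installation such as yum install httpd from the local repository produces output like the following: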
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package httpd.x86_64 0:2.4.6-17.el7.centos.1 will be installed
--> Processing Dependency: httpd-tools = 2.4.6-17.el7.centos.1 for package: httpd-2.4.6-17.el7.centos.1.x86_64
--> Processing Dependency: /etc/mime.types for package: httpd-2.4.6-17.el7.centos.1.x86_64
--> Processing Dependency: libaprutil-1.so.0()(64bit) for package: httpd-2.4.6-17.el7.centos.1.x86_64
--> Processing Dependency: libapr-1.so.0()(64bit) for package: httpd-2.4.6-17.el7.centos.1.x86_64
--> Running transaction check
---> Package apr.x86_64 0:1.4.8-3.el7 will be installed
---> Package apr-util.x86_64 0:1.5.2-6.el7 will be installed
---> Package httpd-tools.x86_64 0:2.4.6-17.el7.centos.1 will be installed
---> Package mailcap.noarch 0:2.1.41-2.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
===============================================================================================================================================================
Package Arch Version Repository Size
===============================================================================================================================================================
Installing:
httpd x86_64 2.4.6-17.el7.centos.1 localrepo 2.7 M
Installing for dependencies:
apr x86_64 1.4.8-3.el7 localrepo 103 k
apr-util x86_64 1.5.2-6.el7 localrepo 92 k
httpd-tools x86_64 2.4.6-17.el7.centos.1 localrepo 77 k
mailcap noarch 2.1.41-2.el7 localrepo 31 k
Transaction Summary
===============================================================================================================================================================
Install 1 Package (+4 Dependent packages)
Total download size: 3.0 M
Installed size: 10 M
Is this ok [y/d/N]:
Disable Firewall And SELinux:
As we are going to use the local repository only within our local area network, there is no real need
for the firewall and SELinux there. So, to reduce complexity, I disabled both firewalld and
SELinux.
To disable firewalld, enter the following commands:
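(The commands themselves were cut from this excerpt; the usual way to do this on CentOS 7 is shown below, with SELinux handled as well.)
systemctl stop firewalld
systemctl disable firewalld
setenforce 0    # switch SELinux to permissive for the running system
# To make it permanent, set SELINUX=disabled (or permissive) in /etc/selinux/config and reboot.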
Step 2: Run TestDisk and create a new testdisk.log file
Use the following command in order to run the testdisk command line utility:
$ sudo testdisk
The output gives you a description of the utility and lets you create a testdisk.log file. This
file will later contain useful information about how and where your lost file was found, listed, and restored.
The above output gives you three options about what to do with this file:
Create (recommended): This option lets you create a new log file.
Append: This option lets you append new information to already listed information in this file from any
previous session.
No Log: Choose this option if you do not want to record anything about the session for later use.
Important:
TestDisk is a pretty intelligent tool. It knows that many beginners will also be using the
utility to recover lost files, so it predicts and suggests the option you should ideally select
on a particular screen. You can see the suggested option in highlighted form. Select an option
with the up and down arrow keys and then press Enter to make your choice.
In the above output, I would opt for creating a new log file. The system might ask you for the sudo
password at this point.
Step 3: Select your recovery drive
The utility will now display a list of drives attached to your system. In my case, it is showing my hard
drive as it is the only storage device on my system.
Select Proceed using the right and left arrow keys and hit Enter. As mentioned in the note on that
screen, the disk capacity must be detected correctly in order for a successful file recovery to be performed.
Step 4: Select Partition Table Type of your Selected Drive
Now that you have selected a drive, you need to specify its partition table type on the following
screen:
Recovering lost files is only one of the features of testdisk; the utility offers much more than that.
Through the options displayed on this screen, you can select any of those features. But here we are
interested only in recovering our accidentally deleted file, so select the Advanced option and hit
Enter.
If you reach a point in this utility that you did not intend to, you can go back by using the q key.
Step 6: Select the drive partition where you lost the file
If your selected drive has multiple partitions, the following screen lets you choose the relevant one from
them.
[Screenshot: choose the partition from which the file shall be recovered]
I lost my file while using Debian Linux. Make your choice and then choose the List option from the
options shown at the bottom of the screen.
This will list all the directories on your partition.
Step 7: Browse to the directory from where you lost the file
When the testdisk utility displays all the directories of your operating system, browse to the directory
from where you deleted/lost the file. I remember that I lost the file from the Downloads folder in my home
directory. So I will browse to home:
Tip: You can use the left arrow to go back to the previous directory.
When you have reached your required directory, you will see the deleted files in colored or highlighted
form.
And here I see my lost file "accidently_removed.docx" in the list. Of course, I named it that
intentionally, as I had to illustrate the whole process to you.
By now, you must have found your lost file in the list. Use the C option to copy the selected file. This
file will later be restored to the location you will specify in the next step:
Step 9: Specify the location where the found file will be restored
Now that we have copied the lost file we found, the testdisk utility displays the following screen
so that we can specify where to restore it. You can specify any accessible location; this step
simply copies the recovered file to the destination you choose.
I am specifically selecting the location from where I lost the file, my Downloads folder:
See the text in green in the above screenshot? This is actually great news. Now my file is restored on the
specified location.
This might seem to be a slightly long process but it is definitely worth getting your lost file back. The
restored file will most probably be in a locked state. This means that only an authorized user can access and
open it.
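A side note of mine (not from the original article): because testdisk ran under sudo, the restored copy is typically owned by root, which is why it appears "locked". A quick way to hand it back to your user, assuming the file name and Downloads location used in this walkthrough:
$ sudo chown $USER: ~/Downloads/accidently_removed.docx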
We all need this tool from time to time, but if you want to remove it until you need it again, you
can do so with the following command:
$ sudo apt-get remove testdisk
You can also delete the testdisk.log file if you want. It is such a relief to get your lost file back!
You have probably heard about cheat.sh . I use this service every day! It is one of
the most useful services for Linux users: it displays concise Linux command examples.
For instance, to view the curl command cheatsheet , simply run the following command from your console:
$ curl cheat.sh/curl
It is that simple! You don't need to go through man pages or use any online resources to learn about commands. It can get you
the cheatsheets of most Linux and Unix commands in a couple of seconds.
Want to know the meaning of an English word? Here is how you can get the meaning of the word gustatory:
$ curl 'dict://dict.org/d:gustatory'
220 pan.alephnull.com dictd 1.12.1/rf on Linux 4.4.0-1-amd64 <auth.mime> <[email protected]>
250 ok
150 1 definitions retrieved
151 "Gustatory" gcide "The Collaborative International Dictionary of English v.0.48"
Gustatory \Gust"a*to*ry\, a.
Pertaining to, or subservient to, the sense of taste; as, the
gustatory nerve which supplies the front of the tongue.
[1913 Webster]
.
250 ok [d/m/c = 1/0/16; 0.000r 0.000u 0.000s]
221 bye [d/m/c = 0/0/0; 0.000r 0.000u 0.000s]
Text sharing
You can share texts via some console services. These text sharing services are often useful for sharing code.
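(The command itself is missing from this excerpt; the usual way to post text to ix.io with curl is shown below. Treat it as a sketch of that service's documented usage.)
$ echo "Welcome To OSTechNix" | curl -F 'f:1=<-' ix.io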
The above command will share the text "Welcome To OSTechNix" via the ix.io site. Anyone can then view this text from a web browser
by navigating to the URL http://ix.io/2bCA
Not just text, we can even share files with anyone using a console service called filepush .
$ curl --upload-file ostechnix.txt filepush.co/upload/ostechnix.txt
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 72 0 0 100 72 0 54 0:00:01 0:00:01 --:--:-- 54http://filepush.co/8x6h/ostechnix.txt
100 110 100 38 100 72 27 53 0:00:01 0:00:01 --:--:-- 81
The above command will upload the ostechnix.txt file to the filepush.co site. You can access this file from anywhere by navigating
to the link http://filepush.co/8x6h/ostechnix.txt
Another text sharing console service is termbin :
$ echo "Welcome To OSTechNix!" | nc termbin.com 9999
There is also another console service named transfer.sh , but it didn't work at the time of writing this guide.
Browser
There are many text browsers available for Linux. Browsh is one of them, and you can access it right from your terminal using the
command:
$ ssh brow.sh
Browsh is a modern text browser that supports graphics, including video. Technically speaking, it is not so much a browser as a
terminal front-end for a browser: it uses headless Firefox to render the web page and then converts it to ASCII art. Refer to
the following guide for more details.
I can't figure out how to disable the startup graphic in centos 7 64bit.
In centos 6 I always did it by removing "rhgb quiet" from /boot/grub/grub.conf but there is no
grub.conf in centos 7. I also tried yum remove rhgb but that wasn't present either.
<moan> I've never understood why the devs include this startup graphic, I see loads of
users like me who want a text scroll instead.</moan>
Thanks for any help.
The file to amend now is /boot/grub2/grub.cfg, and also /etc/default/grub. If you only amend the
defaults file, then you need to run grub2-mkconfig -o /boot/grub2/grub.cfg afterwards to get a new
grub.cfg generated. You can also edit the grub.cfg file directly, though your changes will be wiped
out at the next kernel install if you don't also edit the 'default' file.
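For instance, a minimal sketch of that approach (my illustration, not quoted from the thread; it assumes the stock CentOS 7 kernel command line):
vi /etc/default/grub                        # delete "rhgb quiet" from the GRUB_CMDLINE_LINUX line
grub2-mkconfig -o /boot/grub2/grub.cfg      # regenerate the active grub.cfg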
The preferred method to do this is using the command plymouth-set-default-theme.
If you enter this command, without parameters, as user root you'll see something like
>plymouth-set-default-theme
charge
details
text
This lists the themes installed on your computer. The default is 'charge'. If you want to
see the boot up details you used to see in version 6, try
>plymouth-set-default-theme details
Followed by the command
>dracut -f
Then reboot.
This process modifies the boot loader configuration, so you won't have to update your GRUB config file
manually for each new kernel update.
There are numerous themes available you can download from CentOS or in general. Just google
'plymouth themes' to see other possibilities, if you're looking for graphics type screens.
Editing /etc/default/grub to remove rhgb quiet makes it permanent too.
I tried both TrevorH's and LarryG's methods, and LarryG wins.
Editing /etc/default/grub to remove "rhgb quiet" gave me the scrolling boot messages I want,
but it reduced the maximum display resolution (nouveau driver) from 1920x1080 to 1024x768! I put
"rhgb quiet" back in and got my 1920x1080 back.
Then I tried "plymouth-set-default-theme details; dracut -f" and got verbose booting
without loss of display resolution. Thanks LarryG!
I have used this mod to get back the details during boot, thanks to all for that info.
However, when I am watching, the output fills the page and then, rather than scrolling up as it did in
V5, it blanks and starts again at the top. Of course there is a FAIL message right before it
blanks that I want to see, and I can't hit Scroll Lock fast enough to catch it. Does anyone know how
to get the details to scroll up rather than blanking and re-writing?
Yeah, Scroll Lock/Ctrl+Q/Ctrl+S will not work with systemd; you can't pause the
screen like you used to be able to (it was a design choice, due to parallel daemon launching,
apparently).
Once you do boot, you can always use journalctl to view the logs.
In Fedora you can use journalctl --list-boots to list boots (not 100% sure about CentOS 7.x -
perhaps in 7.1 or 7.2?). You can also use things like journalctl --boot=-1 (the last boot) and
parse the log at your leisure.
Thanks for the follow-up, aks. Actually, I have found that Scroll Lock does pause (Ctrl-S/Q does not),
but it all goes by so fast that I'm not quick enough to stop it before the screen blanks and starts
writing again. What I am really wondering is how to get the screen to scroll up when it reaches the
bottom rather than blanking and starting to write again at the top. That is annoying!
Lately, booting Ubuntu on my desktop has become seriously slow. We're talking two minutes. It
used to take 10-20 seconds. Because of plymouth, I can't see what's going on. I would like to
deactivate it, but not really uninstall it. What's the quickest way to do that? I'm using
Precise, but I suspect a solution for 11.10 would work just as well.
Easiest quick fix is to edit the grub line as you boot.
Hold down the shift key so you see the menu. Hit the e key to edit
Edit the 'linux' line, remove the 'quiet' and 'splash'
To disable it in the long run
Edit /etc/default/grub
Change the line – GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" to
GRUB_CMDLINE_LINUX_DEFAULT=""
And then update grub
sudo update-grub
Panther , 2016-10-27 15:43:04
Removing quiet and splash removes the splash, but I still only have a purple screen with no
text. What I want to do, is to see the actual boot messages. – Jo-Erlend Schinstad Jan 25 '12 at
22:25
Tuminoid ,
How about pressing CTRL+ALT+F2 for a console, allowing you to see what's going on? You can go back
to the GUI/Plymouth with CTRL+ALT+F7 .
I don't have my laptop here right now, but IIRC Plymouth has an upstart job in
/etc/init , named plymouth???.conf; renaming that probably achieves what you want in a more
permanent manner too.
Then, run the following to check the 2000 most common ports, which cover the common TCP and UDP
services. Here, -Pn is used to skip the ping scan and assume that the host is up:
$ sudo nmap -sS -sU -PN <Your-IP>
The results look like this:
...
Note: The -Pn option is also useful for checking whether the host firewall is blocking ICMP requests.
Also, as an extension of the above command, if you need to scan all ports instead of only
those 2000, you can use the following to scan ports 1-65535:
$ sudo nmap -sS -sU -PN -p 1-65535 <Your-IP>
The results look like this:
...
You can also scan only TCP ports (the default 1000 most common ones) by using the following:
$ sudo nmap -sT <Your-IP>
The results look like this:
...
Now, after all of these checks, you can also perform an aggressive "all-in-one" scan with the
-A option, which tells Nmap to perform OS and version detection, combined here with
-T4 as a timing template that tells Nmap how fast to perform the scan (see the
Nmap man page for more information on timing templates):
$ sudo nmap -A -T4 <Your-IP>
The results look like this, and are shown here in two parts:
...
There you go. These are the most common and useful Nmap commands. Together, they provide
sufficient network, OS, and open port information, which is helpful in troubleshooting. Feel
free to comment with your preferred Nmap commands as well.
timeout is a command-line utility that runs a specified command and terminates it if it is still running after a given
period of time. In other words, timeout allows you to run a command with a time limit. The timeout command
is a part of the GNU core utilities package which is installed on almost any Linux distribution.
It is handy when you want to run a command that doesn't have a built-in timeout option.
In this article, we will explain how to use the Linux timeout command.
If no signal is given, timeout sends the SIGTERM signal to the managed command when the time limit is
reached. You can specify which signal to send using the -s ( --signal ) option.
For example, to send SIGKILL to the ping command after one minute you would use:
sudo timeout -s SIGKILL 1m ping 8.8.8.8
The signal can be specified by its name, like SIGKILL , or by its number, like 9 . The following command is
identical to the previous one:
sudo timeout -s 9 1m ping 8.8.8.8
To get a list of all available signals, use the kill -l command:
SIGTERM , the default signal sent when the time limit is exceeded, can be caught or ignored by some processes.
In such situations, the process continues to run after the termination signal is sent.
To make sure the monitored command is killed, use the -k ( --kill-after ) option followed by a time
period. If the command is still running that long after the original time limit was reached, the timeout
command sends it the SIGKILL signal, which cannot be caught or ignored.
In the following example, timeout runs the command for one minute, and if it is not terminated, it will kill it
ten seconds later:
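(The example command itself was dropped from this excerpt; a sketch that matches the description, reusing ping as a stand-in target:)
sudo timeout -k 10 1m ping 8.8.8.8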
The timeout command is used to run a given command with a time limit.
timeout is a simple command that doesn't have a lot of options. Typically you will invoke timeout only
with two arguments, the duration, and the managed command.
If you have any questions or feedback, feel free to leave a comment.
Watch is a great utility that automatically re-runs a command and refreshes its output. Some of the more common uses for this command involve
monitoring system processes or logs, but it can be used in combination with pipes for more versatility.
Using the watch command without any options will use the default refresh interval of 2.0 seconds.
As I mentioned before, one of the more common uses is monitoring system processes. Let's use it with the
free command. This will give you up to date information about our system's memory usage.
watch free
Yes, it is that simple my friends.
Every 2.0s: free pop-os: Wed Dec 25 13:47:59 2019
total used free shared buff/cache available
Mem: 32596848 3846372 25571572 676612 3178904 27702636
Swap: 0 0 0
Adjust refresh rate of watch command
You can easily change how quickly the output is updated using the -n flag.
watch -n 10 free
Every 10.0s: free pop-os: Wed Dec 25 13:58:32 2019
total used free shared buff/cache available
Mem: 32596848 4522508 24864196 715600 3210144 26988920
Swap: 0 0 0
This changes from the default 2.0 second refresh to 10.0 seconds as you can see in the top left corner of our
output.
Remove title or header info from watch command output
watch -t free
The -t flag removes the title/header information to clean up the output. The information will still refresh every 2
seconds, but you can change that by combining it with the -n option.
total used free shared buff/cache available
Mem: 32596848 3683324 25089268 1251908 3824256 27286132
Swap: 0 0 0
Highlight the changes in watch command output
You can add the -d option and watch will automatically highlight changes for us. Let's take a
look at this using the date command. I've included a screen capture to show how the highlighting behaves.
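These options can, of course, be combined. For example (my own addition, not from the original article), the following refreshes every second, hides the header, and highlights changes:
watch -n 1 -t -d date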
Using pipes with watch
You can combine items using pipes. This is not a feature exclusive to watch, but it enhances the functionality of
this software. Pipes rely on the | symbol; not coincidentally, this is called a pipe symbol or
sometimes a vertical bar symbol.
watch "cat /var/log/syslog | tail -n 3"
While this command runs, it will list the last 3 lines of the syslog file. The list will be refreshed every 2
seconds and any changes will be displayed.
Every 2.0s: cat /var/log/syslog | tail -n 3 pop-os: Wed Dec 25 15:18:06 2019
Dec 25 15:17:24 pop-os dbus-daemon[1705]: [session uid=1000 pid=1705] Successfully activated service 'org.freedesktop.Tracker1.Min
er.Extract'
Dec 25 15:17:24 pop-os systemd[1591]: Started Tracker metadata extractor.
Dec 25 15:17:45 pop-os systemd[1591]: tracker-extract.service: Succeeded.
Conclusion
Watch is a simple, but very useful utility. I hope I've given you ideas that will help you improve your workflow.
This is a straightforward command, but there are a wide range of potential uses. If you have any interesting uses
that you would like to share, let us know about them in the comments.
If you're like me, you still cling to soon-to-be-deprecated commands like ifconfig , nslookup , and
netstat . The new replacements are ip , dig , and ss , respectively. It's time
to (reluctantly) let go of legacy utilities and head into the future with ss . The ip command is worth
a mention here because part of netstat 's functionality has been replaced by ip . This article covers the
essentials for the ss command so that you don't have to dig (no pun intended) for them.
Formally, ss is the socket statistics command that replaces netstat . In this article, I provide
netstat commands and their ss replacements. Michael Prokop, the developer of ss , made it
easy for us to transition into ss from netstat by making some of netstat 's options operate
in much the same fashion in ss .
For example, to display TCP sockets, use the -t option:
$ netstat -t
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 rhel8:ssh khess-mac:62036 ESTABLISHED
$ ss -t
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 0 192.168.1.65:ssh 192.168.1.94:62036
You can see that the information given is essentially the same, but to better mimic what you see in the netstat command,
use the -r (resolve) option:
$ ss -tr
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 0 rhel8:ssh khess-mac:62036
And to see port numbers rather than their translations, use the -n option:
$ ss -ntr
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 0 rhel8:22 khess-mac:62036
It isn't 100% necessary that netstat and ss mesh, but it does make the transition a little easier. So,
try your standby netstat options before hitting the man page or the internet for answers, and you might be pleasantly
surprised by the results.
For example, running both commands with the old standby options -an yields comparable results (which are
too long to show here in full).
The TCP entries fall at the end of the ss command's display and at the beginning of netstat 's. So,
there are layout differences even though the displayed information is really the same.
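Another everyday pairing worth noting (my addition, not from the article) is listing listening TCP sockets with the owning process; the options carry over almost unchanged, though you will want root to see every process:
$ sudo netstat -tlnp
$ sudo ss -tlnp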
If you're wondering which netstat commands have been replaced by the ip command, here's one for you:
$ netstat -g
IPv6/IPv4 Group Memberships
Interface RefCnt Group
--------------- ------ ---------------------
lo 1 all-systems.mcast.net
enp0s3 1 all-systems.mcast.net
lo 1 ff02::1
lo 1 ff01::1
enp0s3 1 ff02::1:ffa6:ab3e
enp0s3 1 ff02::1:ff8d:912c
enp0s3 1 ff02::1
enp0s3 1 ff01::1
$ ip maddr
1: lo
inet 224.0.0.1
inet6 ff02::1
inet6 ff01::1
2: enp0s3
link 01:00:5e:00:00:01
link 33:33:00:00:00:01
link 33:33:ff:8d:91:2c
link 33:33:ff:a6:ab:3e
inet 224.0.0.1
inet6 ff02::1:ffa6:ab3e
inet6 ff02::1:ff8d:912c
inet6 ff02::1
inet6 ff01::1
The ss command isn't perfect (sorry, Michael). In fact, there is one significant ss bummer. You can
try this one for yourself to compare the two:
$ netstat -s
Ip:
Forwarding: 2
6231 total packets received
2 with invalid addresses
0 forwarded
0 incoming packets discarded
3104 incoming packets delivered
2011 requests sent out
243 dropped because of missing route
<truncated>
$ ss -s
Total: 182
TCP: 3 (estab 1, closed 0, orphaned 0, timewait 0)
Transport Total IP IPv6
RAW 1 0 1
UDP 3 2 1
TCP 3 2 1
INET 7 4 3
FRAG 0 0 0
If you figure out how to display the same info with ss , please let me know.
Maybe as ss evolves, it will include more features. I guess Michael or someone else could always just look at the
netstat command to glean those statistics from it. For me, I prefer netstat , and I'm not sure exactly
why it's being deprecated in favor of ss . The output from ss is less human-readable in almost every instance.
What do you think? What about ss makes it a better option than netstat ? I suppose I could ask the same
question of the other net-tools utilities as well. I don't find anything wrong with them. In my mind, unless you're
significantly improving an existing utility, why bother deprecating the other?
There, you have the ss command in a nutshell. As netstat fades into oblivion, I'm sure I'll eventually
embrace ss as its successor.
Ken Hess is an Enable SysAdmin Community Manager and an Enable SysAdmin contributor. Ken has used Red Hat Linux since
1996 and has written ebooks, whitepapers, actual books, thousands of exam review questions, and hundreds of articles on open
source and other topics.
Thirteen Useful Tools for Working with Text on the Command Line
By Karl Wakim, posted on Jan 9, 2020, in Linux
GNU/Linux distributions include a wealth of programs for handling text, most of which are provided by the GNU core
utilities. There's somewhat of a learning curve, but these utilities can prove very useful and efficient when used
correctly.
Here are thirteen powerful text manipulation tools every command-line user should know.
1. cat
Cat was designed to concatenate files but is most often used to display a single file. Without any
arguments, cat reads standard input until Ctrl+D is pressed (from the terminal, or from another
program's output if using a pipe). Standard input can also be explicitly specified with a - .
Cat has a number of useful options, notably:
-A prints "$" at the end of each line and displays non-printing characters using caret notation.
-n numbers all lines.
-b numbers lines that are not blank.
-s reduces a series of blank lines to a single blank line.
In the following example, we are concatenating and numbering the contents of file1, standard input, and file3.
cat -n file1 - file3
2. sort
As its name suggests, sort sorts file contents alphabetically and numerically.
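A few typical invocations (the file names are hypothetical):
sort names.txt          # alphabetical sort
sort -n sizes.txt       # numeric sort
sort -t, -k2 data.csv   # sort by the second comma-separated field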
3. uniq
Uniq takes a sorted file and removes duplicate lines. It is often chained with sort in a single command.
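A typical chain (with a hypothetical log file) counts duplicate lines and lists the most frequent first:
sort access.log | uniq -c | sort -rn | head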
4. comm
Comm is used to compare two sorted files, line by line. It outputs three columns: the first two columns contain
lines unique to the first and second file respectively, and the third displays those found in both files.
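For example (hypothetical, already-sorted files):
comm file1.txt file2.txt        # three columns: only in file1, only in file2, in both
comm -12 file1.txt file2.txt    # suppress columns 1 and 2, i.e. show only common lines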
5. cut
Cut is used to retrieve specific sections of lines, based on characters, fields, or bytes. It can read from a file
or from standard input if no file is specified.
Cutting by character position
The -c option specifies a single character position or one or more ranges of characters.
For example:
-c 3 : the 3rd character.
-c 3-5 : from the 3rd to the 5th character.
-c -5 or -c 1-5 : from the 1st to the 5th character.
-c 5- : from the 5th character to the end of the line.
-c 3,5-7 : the 3rd and from the 5th to the 7th character.
Cutting by field
Fields are separated by a delimiter consisting of a single character, which is specified with the
-d option. The -f option selects a field position or one or more ranges of fields, using the same
format as above.
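For example (my illustration), to pull the user name and login shell out of the colon-delimited /etc/passwd:
cut -d: -f1,7 /etc/passwd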
6. dos2unix
GNU/Linux and Unix usually terminate text lines with a line feed (LF), while Windows uses carriage return and line
feed (CRLF). Compatibility issues can arise when handling CRLF text on Linux, which is where dos2unix comes in. It
converts CRLF terminators to LF.
In the following example, the file command is used to check the text format before and after using dos2unix.
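The screenshots are not reproduced here, so here is a sketch of that sequence (the file name and the exact file output are illustrative):
file report.txt       # e.g. "report.txt: ASCII text, with CRLF line terminators"
dos2unix report.txt
file report.txt       # e.g. "report.txt: ASCII text"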
7. fold
To make long lines of text easier to read and handle, you can use fold , which wraps lines to a
specified width. Fold strictly matches the specified width by default, breaking words where necessary.
fold -w 30 longline.txt
If breaking words is undesirable, you can use the -s option to break at spaces.
fold -w 30 -s longline.txt
8. iconv
This tool converts text from one encoding to another, which is very useful when dealing with unusual encodings.
"input_encoding" is the encoding you are converting from.
"output_encoding" is the encoding you are converting to.
"output_file" is the filename iconv will save to.
"input_file" is the filename iconv will read from.
Note: you can list the available encodings with iconv -l
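The command template itself is not shown in this excerpt; with GNU iconv it looks like this (the encodings and file names are placeholders):
iconv -f WINDOWS-1252 -t UTF-8 -o output_file input_file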
9. sed
sed is a powerful and flexible stream editor, most commonly used to find and replace strings
with the following syntax.
The following command will read from the specified file (or standard input), replacing the parts of text that
match the
regular expression
pattern with the replacement string and outputting the result to the terminal.
sed 's/pattern/replacement/g' filename
To modify the original file instead, you can use the -i flag.
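For example (hypothetical file), editing in place, optionally keeping a backup:
sed -i 's/colour/color/g' notes.txt        # modify notes.txt in place
sed -i.bak 's/colour/color/g' notes.txt    # same, but keep the original as notes.txt.bak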
10. wc
The wc utility prints the number of bytes, characters, words, or lines in a file.
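For example (report.txt is a hypothetical file):
wc -l /etc/passwd     # number of lines
wc -w report.txt      # number of words
wc -c report.txt      # number of bytes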
11. split
You can use split to divide a file into smaller files: by number of lines, by size, or into a
specific number of files.
Splitting by number of lines
split -l num_lines input_file output_prefix
Splitting by bytes
split -b bytes input_file output_prefix
Splitting to a specific number of files
split -n num_files input_file output_prefix
12. tac
Tac, which is cat in reverse, does exactly that: it displays files with the lines in reverse order.
13. tr
The tr tool is used to translate or delete sets of characters.
A set of characters is usually either a string or ranges of characters. For instance:
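A few typical uses (the file names are hypothetical):
tr 'a-z' 'A-Z' < notes.txt         # translate lower case to upper case
tr -d '\r' < dos.txt > unix.txt    # delete carriage returns
tr -s ' ' < messy.txt              # squeeze runs of spaces into one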
Bash uses Emacs style keyboard shortcuts by default. There is also Vi mode. Find out how to bind HSTR to a keyboard
shortcut based on the style you prefer below.
Check your active Bash keymap with:
bind -v | grep editing-mode
bind -v | grep keymap
To determine the character sequence emitted by a pressed key in the terminal, type Ctrl-v and then press the key. Check your
current bindings using:
bind -S
Bash Emacs Keymap (default)
Bind HSTR to a Bash key e.g. to Ctrl-r :
bind '"\C-r": "\C-ahstr -- \C-j"'
or Ctrl-Alt-r :
bind '"\e\C-r":"\C-ahstr -- \C-j"'
or Ctrl-F12 :
bind '"\e[24;5~":"\C-ahstr -- \C-j"'
Bind HSTR to Ctrl-r only if it is an interactive shell:
if [[ $- =~ .*i.* ]]; then bind '"\C-r": "\C-a hstr -- \C-j"'; fi
You can also bind other HSTR commands, like --kill-last-command :
if [[ $- =~ .*i.* ]]; then bind '"\C-xk": "\C-a hstr -k \C-j"'; fi
Bash Vim Keymap
Bind HSTR to a Bash key, e.g. to Ctrl-r :
bind '"\C-r": "\e0ihstr -- \C-j"'
Zsh Emacs Keymap
Bind HSTR to a zsh key, e.g. to Ctrl-r :
bindkey -s "\C-r" "\eqhstr --\n"
Alias
If you want to make running hstr from the command line even easier, then define an alias in your ~/.bashrc :
alias hh=hstr
Don't forget to source ~/.bashrc to be able to use the hh command.
Colors
Let HSTR use colors:
export HSTR_CONFIG=hicolor
or ensure black and white mode:
export HSTR_CONFIG=monochromatic
Default History View
To show normal history by default (instead of the metrics-based view, which is the default), use:
export HSTR_CONFIG=raw-history-view
To show favorite commands as default view use:
export HSTR_CONFIG=favorites-view
Filtering
To use regular expressions based matching:
export HSTR_CONFIG=regexp-matching
To use substring based matching:
export HSTR_CONFIG=substring-matching
To use keywords (substrings whose order doesn't matter) search matching (default):
export HSTR_CONFIG=keywords-matching
Make search case sensitive (insensitive by default):
export HSTR_CONFIG=case-sensitive
Keep duplicates in raw-history-view (duplicate commands are discarded by default):
export HSTR_CONFIG=duplicates
Static favorites
The last selected favorite command is put at the head of the favorite commands list by default. If you want to disable this behavior and
make the favorite commands list static, then use the following configuration:
export HSTR_CONFIG=static-favorites
Skip favorites comments
If you don't want to show lines starting with # (comments) among favorites, then use the following configuration:
export HSTR_CONFIG=skip-favorites-comments
Blacklist
Skip commands when processing history i.e. make sure that these commands will not be shown in any view:
export HSTR_CONFIG=blacklist
Commands to be blacklisted are stored in the ~/.hstr_blacklist file, with a trailing empty line. For instance:
cd
my-private-command
ls
ll
Confirm on Delete
Do not prompt for confirmation when deleting history items:
export HSTR_CONFIG=no-confirm
Verbosity
Show a message when deleting the last command from history:
export HSTR_CONFIG=verbose-kill
Show warnings:
export HSTR_CONFIG=warning
Show debug messages:
export HSTR_CONFIG=debug
Bash History Settings
Use the following Bash settings to get the most out of HSTR.
Increase the size of the history maintained by Bash - the variables defined below increase the number of history items and the
history file size (the default value is 500):
hh uses shell history to provide suggest-box-like functionality for commands used in the
past. By default it parses the .bash_history file, which is filtered as you type a command substring. Commands are not just filtered, but also ordered by a ranking algorithm that considers the number
of occurrences, length and timestamp. Favorite and frequently used commands can be
bookmarked . In addition, hh allows removal of commands from history - for instance those with a
typo or with sensitive content.
export HH_CONFIG=hicolor # get more colors
shopt -s histappend # append new history items to .bash_history
export HISTCONTROL=ignorespace # leading space hides commands from history
export HISTFILESIZE=10000 # increase history file size (default is 500)
export HISTSIZE=${HISTFILESIZE} # increase history size (default is 500)
export PROMPT_COMMAND="history -a; history -n; ${PROMPT_COMMAND}"
# if this is interactive shell, then bind hh to Ctrl-r (for Vi mode check doc)
if [[ $- =~ .*i.* ]]; then bind '"\C-r": "\C-a hh -- \C-j"'; fi
The prompt command ensures synchronization of the history between BASH memory and history
file.
export HISTFILE=~/.zsh_history # ensure history file visibility
export HH_CONFIG=hicolor # get more colors
bindkey -s "\C-r" "\eqhh\n" # bind hh to Ctrl-r (for Vi mode check doc, experiment with --)
izaak says:
March 12, 2010
at 11:06 am I would also add $ echo 'export HISTSIZE=10000' >> ~/.bash_profile
It's really useful, I think.
Dariusz says:
March 12, 2010
at 2:31 pm you can add it to /etc/profile so it is available to all users. I also add:
# Make sure all terminals save history
shopt -s histappend histreedit histverify
shopt -s no_empty_cmd_completion # bash>=2.04 only
# Whenever displaying the prompt, write the previous line to disk:
PROMPT_COMMAND='history -a'
#Use GREP color features by default: This will highlight the matched words / regexes
export GREP_OPTIONS='--color=auto'
export GREP_COLOR='1;37;41'
Babar Haq says:
March 15, 2010
at 6:25 am Good tip. We have multiple users connecting as root using ssh and running different commands. Is there a way to
log the IP that command was run from?
Thanks in advance.
Anthony says:
August 21,
2014 at 9:01 pm Just for anyone who might still find this thread (like I did today):
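(The line itself was lost in this excerpt; a plausible reconstruction, assuming the client IP is the first field of $SSH_CONNECTION:)
export HISTTIMEFORMAT="%F %T $(echo $SSH_CONNECTION | cut -d' ' -f1) "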
will give you the time format, plus the IP address culled from the SSH_CONNECTION environment variable (thanks for pointing
that out, Cadrian, I never knew about that before), all right there in your history output.
You could even add $(whoami)@ in front to get the user as well (although if everyone's logging in with the root account that's
not helpful).
set |grep -i hist
HISTCONTROL=ignoreboth
HISTFILE=/home/cadrian/.bash_history
HISTFILESIZE=1000000000
HISTSIZE=10000000
So in your profile you can do something like HISTFILE=/root/.bash_history_$(echo $SSH_CONNECTION| cut -d\ -f1)
TSI says:
March 21, 2010
at 10:29 am bash 4 can syslog every command but AFAIK you have to recompile it (check file config-top.h). See the NEWS file
of bash: http://tiswww.case.edu/php/chet/bash/NEWS
If you want to safely export the history of your luser, you can ssl-syslog them to a central syslog server.
Sohail says:
January 13, 2012
at 7:05 am Hi
Nice trick but unfortunately, the commands which were executed in the past few days are also carrying the current day's (today's)
timestamp.
Yes, that will be the behavior of the system, since you have just enabled the HISTTIMEFORMAT feature on that day. In other
words, the system can't recall or record the commands which were entered prior to enabling this feature, so it will just
show the current day and time in the printed output (upon execution of "history"). Hope this answers your concern.
The command only lists the current date (Today) even for those commands which were executed on earlier days.
Any solutions ?
Regards
nitiratna nikalje says:
August 24, 2012
at 5:24 pm hi vivek.do u know any openings for freshers in linux field? I m doing rhce course from rajiv banergy. My samba,nfs-nis,dhcp,telnet,ftp,http,ssh,squid,cron,quota
and system administration is over.iptables ,sendmail and dns is remaining.
Krishan says:
February 7,
2014 at 6:18 am The command is not working properly. It is displaying today's date and time for all the commands, even
though I ran some of them three days before.
I want to collect the history of a particular user every day and send it by email. I wrote the script below.
For collecting everyday history with timestamps, should I edit the .profile file of that user: echo 'export HISTTIMEFORMAT="%d/%m/%y %T "' >> ~/.bash_profile
Script:
#!/bin/bash
#This script sends email of particular user
history >/tmp/history
if [ -s /tmp/history ]
then
mailx -s "history 29042014" </tmp/history
fi
rm /tmp/history
#END OF THE SCRIPT
Can anyone suggest a better way to collect a particular user's history every day?
As I write this column, I'm in the middle of two summer projects; with luck, they'll both be finished by the time you read it.
One involves a forensic analysis of over 100,000 lines of old C and assembly code from about 1990, and I have to work on Windows
XP.
The other is a hack to translate code written in weird language L1 into weird language L2 with a program written in scripting
language L3, where none of the L's even existed in 1990; this one uses Linux. Thus it's perhaps a bit surprising that I find myself
relying on much the same toolset for these very different tasks.
... ... ...
There has surely been much progress in tools over the 25 years that IEEE Software has been around, and I wouldn't want to go back
in time.
But the tools I use today are mostly the same old ones-grep, diff, sort, awk, and friends. This might well mean that I'm a dinosaur
stuck in the past.
On the other hand, when it comes to doing simple things quickly, I can often have the job done while experts are still waiting
for their IDE to start up. Sometimes the old ways are best, and they're certainly worth knowing well.
AIX & RS-6000 "How-To" Redbooks
-- "How-To" books on a wide range of AIX software and RS6000 hardware topics. Includes topics such as operating systems, system
management, communications, hardware, application solutions and development, databases, Internet, high availability clusters,
and scalable POWERparallel systems.
"The unfortunate reality of being a Systems Administrator is that sometime during your career, you will most likely run into
a user (or lus3r if you prefer) who has the IQ of a diced carrot and demands that you drop everything to fix their system/email/whatever.
This article focuses on how to deal with these kinds of issues."
The Last but not Least: Technology is dominated by
two types of people: those who understand what they do not manage and those who manage what they do not understand. ~ Archibald Putt,
Ph.D
Copyright 1996-2021 by Softpanorama Society. www.softpanorama.org
was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP)
without any remuneration. This document is an industrial compilation designed and created exclusively
for educational use and is distributed under the Softpanorama Content License.
Original materials copyright belong
to respective owners. Quotes are made for educational purposes only
in compliance with the fair use doctrine.
FAIR USE NOTICE: This site contains
copyrighted material the use of which has not always been specifically
authorized by the copyright owner. We are making such material available
to advance understanding of computer science, IT technology, economic, scientific, and social
issues. We believe this constitutes a 'fair use' of any such
copyrighted material as provided by section 107 of the US Copyright Law according to which
such material can be distributed without profit exclusively for research and educational purposes.
This is a Spartan WHYFF (We Help You For Free)
site written by people for whom English is not a native language. Grammar and spelling errors should
be expected. The site contains some broken links as it develops like a living tree...
You can use PayPal to buy a cup of coffee for the authors
of this site
Disclaimer:
The statements, views and opinions presented on this web page are those of the author (or
referenced source) and are
not endorsed by, nor do they necessarily reflect, the opinions of the Softpanorama society. We do not warrant the correctness
of the information provided or its fitness for any purpose. The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be
tracked by Google, please disable Javascript for this site. This site is perfectly usable without
Javascript.