May the source be with you, but remember the KISS principle ;-)
Skepticism and critical thinking are not a panacea, but they can help us understand the world better

Slightly Skeptical View on Enterprise Unix Administration



The KISS rule can be expanded as: Keep It Simple, Sysadmin ;-)

This page is written as a protest against the overcomplexity and bizarre data center atmosphere typical of "semi-outsourced" or fully outsourced datacenters ;-). Unix/Linux sysadmins are being killed by the overcomplexity of the environment, new "for profit" technocults like DevOps, and outsourcing. Large swaths of Linux knowledge (and many excellent books) were killed by Red Hat with the introduction of systemd. Especially affected are the older, most experienced members of the team, who hold a unique store of organizational knowledge and whose careers allowed them to watch the development of Linux almost from version 0.92.

System administration is still a unique area where people with the ability to program can display their creativity with relative ease and can still enjoy the "old style" atmosphere of software development, in which you yourself write the specification, implement it, test the program, and then use it in daily work. This is a very exciting, unique opportunity that no DevOps can ever provide. Then why are an increasing number of sysadmins far from excited about working in those positions, or outright want to quit the field (or, at least, work four days a week)? And that includes sysadmins who have tremendous speed and capability to process and learn new information. Even for them, "enough is enough." The answer is different for each individual sysadmin, but it is usually some variation of the following themes:

  1. Too rapid a pace of change, with a lot of "change for the sake of change" often serving as a smokescreen for outsourcing efforts (VMware yesterday, Azure today, Amazon cloud tomorrow, etc.)
  2. Excessive automation can be a problem. It increases the number of layers between the fundamental process and the sysadmin, and thus makes troubleshooting much harder. Moreover, it often does not produce tangible benefits in comparison with simpler tools while dramatically increasing the complexity of the environment. See Unix Configuration Management Tools for a deeper discussion of this issue.
  3. Job insecurity due to outsourcing/offshoring -- constant pressure to cut headcount in the name of "efficiency," which in reality is more connected with the size of top-brass bonuses than with anything related to how the datacenter actually functions. Sysadmins over 50 are an especially vulnerable category here, and if they are laid off they have almost no chance of getting back into the IT workforce at their previous level of salary and benefits. Often the only job they can find is at Home Depot or a similar retail outlet. See Over 50 and unemployed
  4. A back-breaking level of overcomplexity and bizarre tech decisions crippling the data center (aka crapification). A "Potemkin village culture" often prevails in the evaluation of software in large US corporations. The surface shine is more important than the substance. The marketing brochures and manuals are no different from mainstream news media stories in the level of BS they spew. IBM is especially guilty (look at how they marketed IBM Watson; as Oren Etzioni, CEO of the Allen Institute for AI, noted, "the only intelligent thing about Watson was IBM PR department [push]").
  5. Bureaucratization/fossilization of the IT environment in large companies. That includes using "Performance Reviews" (the IT variant of waterboarding ;-) for the enforcement of management policies, priorities, whims, etc. See Office Space (1999) - IMDb for a humorous take on IT culture. That creates alienation from the company (as it should). One can think of the modern corporate data center as an organization in which the administration has tremendously more power in the decision-making process and eats up more of the corporate budget, while the people who do the actual work are increasingly ignored and their share of the budget gradually shrinks. Purchasing "non-standard" software or hardware is often so complicated that it is never even attempted, even when the benefits are tangible.
  6. "Neoliberal austerity" (which is essentially another name for the "war on labor") -- Drastic cost cutting measures at the expense of workforce such as elimination of external vendor training, crapification of benefits, limitation of business trips and enforcing useless or outright harmful for business "new" products instead of "tried and true" old with  the same function.  They are often accompanied by the new cultural obsession with "character" (as in "he/she has a right character" -- which in "Neoliberal speak" means he/she is a toothless conformist ;-), glorification of groupthink, and the intensification of surveillance.

As Charlie Schluting noted in 2010 (Enterprise Networking Planet, April 7, 2010):

What happened to the old "sysadmin" of just a few years ago? We've split what used to be the sysadmin into application teams, server teams, storage teams, and network teams. There were often at least a few people, the holders of knowledge, who knew how everything worked, and I mean everything. Every application, every piece of network gear, and how every server was configured -- these people could save a business in times of disaster.

Now look at what we've done. Knowledge is so decentralized we must invent new roles to act as liaisons between all the IT groups.

Architects now hold much of the high-level "how it works" knowledge, but without knowing how any one piece actually does work.

In organizations with more than a few hundred IT staff and developers, it becomes nearly impossible for one person to do and know everything. This movement toward specializing in individual areas seems almost natural. That, however, does not provide a free ticket for people to turn a blind eye.

Specialization

You know the story: Company installs new application, nobody understands it yet, so an expert is hired. Often, the person with a certification in using the new application only really knows how to run that application. Perhaps they aren't interested in learning anything else, because their skill is in high demand right now. And besides, everything else in the infrastructure is run by people who specialize in those elements. Everything is taken care of.

Except, how do these teams communicate when changes need to take place? Are the storage administrators teaching the Windows administrators about storage multipathing; or worse, logging in and setting it up because it's faster for the storage gurus to do it themselves? A fundamental level of knowledge is often lacking, which makes it very difficult for teams to brainstorm about new ways to evolve IT services. The business environment has made it OK for IT staffers to specialize and only learn one thing.

If you hire someone certified in the application, operating system, or network vendor you use, that is precisely what you get. Certifications may be a nice filter to quickly identify who has direct knowledge in the area you're hiring for, but often they indicate specialization or compensation for lack of experience.

Resource Competition

Does your IT department function as a unit? Even 20-person IT shops have turf wars, so the answer is very likely, "no." As teams are split into more and more distinct operating units, grouping occurs. One IT budget gets split between all these groups. Often each group will have a manager who pitches his needs to upper management in hopes they will realize how important the team is.

The "us vs. them" mentality manifests itself at all levels, and it's reinforced by management having to define each team's worth in the form of a budget. One strategy is to illustrate a doomsday scenario. If you paint a bleak enough picture, you may get more funding. Only if you are careful enough to illustrate the failings are due to lack of capital resources, not management or people. A manager of another group may explain that they are not receiving the correct level of service, so they need to duplicate the efforts of another group and just implement something themselves. On and on, the arguments continue.

Most often, I've seen competition between server groups result in horribly inefficient uses of hardware. For example, what happens in your organization when one team needs more server hardware? Assume that another team has five unused servers sitting in a blade chassis. Does the answer change? No, it does not. Even in test environments, sharing doesn't often happen between IT groups.

With virtualization, some aspects of resource competition get better and some remain the same. When first implemented, most groups will be running their own type of virtualization for their platform. The next step, I've most often seen, is for test servers to get virtualized. If a new group is formed to manage the virtualization infrastructure, virtual machines can be allocated to various application and server teams from a central pool and everyone is now sharing. Or, they begin sharing and then demand their own physical hardware to be isolated from others' resource hungry utilization. This is nonetheless a step in the right direction. Auto migration and guaranteed resource policies can go a long way toward making shared infrastructure, even between competing groups, a viable option.

Blamestorming

The most damaging side effect of splitting into too many distinct IT groups is the reinforcement of an "us versus them" mentality. Aside from the notion that specialization creates a lack of knowledge, blamestorming is what this article is really about. When a project is delayed, it is all too easy to blame another group. The SAN people didn't allocate storage on time, so another team was delayed. That is the timeline of the project, so all work halted until that hiccup was resolved. Having someone else to blame when things get delayed makes it all too easy to simply stop working for a while.

More related to the initial points at the beginning of this article, perhaps, is the blamestorm that happens after a system outage.

Say an ERP system becomes unresponsive a few times throughout the day. The application team says it's just slowing down, and they don't know why. The network team says everything is fine. The server team says the application is "blocking on IO," which means it's a SAN issue. The SAN team says there is nothing wrong, and other applications on the same devices are fine. You've run through nearly every team, but still without an answer. The SAN people don't have access to the application servers to help diagnose the problem. The server team doesn't even know how the application runs.

See the problem? Specialized teams are distinct and by nature adversarial. Specialized staffers often relegate themselves into a niche knowing that as long as they continue working at large enough companies, "someone else" will take care of all the other pieces.

I unfortunately don't have an answer to this problem. Maybe rotating employees between departments will help. They gain knowledge and also get to know other people, which should lessen the propensity to view them as outsiders.

The tragic part of the current environment is that it is like shifting sands. And it is not only due to the "natural process of crapification of operating systems," in which the OS gradually loses its architectural integrity. The pace of change is simply too fast for mere humans to adapt to. And most of it represents "change for the sake of change," not some valuable improvement or extension of capabilities.

If you are a sysadmin who writes his own scripts, you are writing on a sandy beach: you spend a lot of time thinking over and debugging your scripts, which raises your productivity and diminishes the number of possible errors. But the next OS version or organizational change wipes out a considerable part of your work, and you need to revise your scripts again. The tale of Sisyphus can now be reinterpreted as a prescient warning about the thankless task of the sysadmin, doomed to learn new stuff and maintain his own script library ;-) Sometimes a lot of work is wiped out because the corporate brass decides to switch to a different flavor of Linux, or "yet another flavor" is added due to a large acquisition. Add to this the inevitable technological changes, and the question arises: couldn't you find a more respectable profession, one in which 66% of your knowledge is not replaced within the next ten years? For a talented and not too old person, staying in the sysadmin profession is probably a mistake, or at least a very questionable decision.

The Balkanization of Linux also shows itself in the Tower of Babel of system programming languages (C, C++, Perl, Python, Ruby, Go, Java, to name a few) and in systems that supposedly should help you but mostly do quite the opposite (Puppet, Ansible, Chef, etc.). Add to this the monitoring infrastructure (say, Nagios) and you definitely have an information overload.

Inadequate training just adds to the stress. First of all, corporations no longer want to pay for it. So you are on your own and need to do it mostly in your free time, as the workload is substantial in most organizations. Of course, a summer "dead season" at least partially exists, but it is rather short. Using free or low-cost courses when they are available, or buying your own books and trying to learn new material from them, is of course the mark of any good sysadmin, but it should not be the only source of new knowledge. Communication with colleagues who have a high level of knowledge in selected areas is as important or even more so. But this is very difficult, as a sysadmin often works in isolation. Professional groups such as Linux user groups exist mostly in the metropolitan areas of large cities. The coronavirus made those groups even more problematic.

The days when you could travel to a vendor training center for a week and have a chance to communicate with admins from other organizations (which was probably the most valuable part of the whole exercise) are long in the past. I can attest that the training by Sun (Solaris) and IBM (AIX) in the late 1990s was of really high quality, delivered by highly qualified instructors from whom you could learn a lot outside the main topic of the course. Unlike "Trump University," Sun courses could probably have been called "Sun University." Most training now is via the Web, and the chances for face-to-face communication have disappeared. The stress has also shifted from learning "why" to learning "how"; the "why" topics are typically reserved for "advanced" courses.

There is also the need to relearn things again and again, even though the new technologies/daemons/OS versions are often the same as, or even inferior to, the previous ones, or represent an open scam in which training is just a way to extract money from lemmings (Agile, most of the DevOps hoopla, etc.). This is the typical neoliberal mentality ("greed is good") implemented in education. There is also a tendency to treat virtual machines and cloud infrastructure as separate technologies which require separate training and a separate set of certifications (AWS, Azure). This is a kind of infantilization of the profession, in which a person who learned a great deal over the previous ten years needs to forget it and relearn most of it again and again.

Of course, sysadmins are not the only ones who suffer. Computer scientists also now struggle with the excessive level of complexity and the too quickly shifting sands. Look at the tragedy of Donald Knuth and his lifelong project of creating a comprehensive monograph for system programmers (The Art of Computer Programming). He was flattened by the shifting sands and will probably not be able to finish even volume 4 (out of the seven that were planned) in his lifetime.

Of course, much depends on the evolution of hardware and the changes it causes, such as the mass introduction of large SSDs, multi-core CPUs, and large amounts of RAM.

Nobody is now surprised to see a server with 128GB of RAM, a laptop with 16GB of RAM, or a cellphone with 4GB of RAM and a 2GHz CPU. (Please note that the IBM PC started with 1MB of address space, of which only 640KB was available for programs, and a 4.77 MHz -- not GHz -- single-core CPU without a floating-point unit.) Hardware evolution, while painful, is inevitable, and it changes the software landscape. Thank God hardware progress has slowed down recently as it reached the physical limits of the technology (we probably will not see 2-nanometer-lithography CPUs or 8GHz clock speeds in our lifetimes), and progress is now mostly measured by the number of cores packed into the same die.

Then there is another set of significant changes, caused not by the progress of hardware (or software) but mainly by fashion and by the desire of certain (and powerful) large corporations to entrench their market position. Such changes are more difficult to accept. It is difficult or even impossible to predict which technology will become fashionable tomorrow -- for example, how long DevOps will remain in fashion.

Typically such a techno-fashion lasts around a decade. After that it fades into oblivion, or is even debunked and its former idols shattered (the verification craze is a nice example here). For example, this strange re-invention of the idea of the "glass walls datacenter" under the banner of DevOps (and old-timers still remember that IBM datacenters were hated with a passion, and this hate created an additional, non-technological incentive for minicomputers and later for the IBM PC) is characterized by the level of hype usually reserved for women's fashion. Moreover, sometimes it looks to me as if the movie The Devil Wears Prada is a subtle parable about sysadmin work.

Add to this the horrible job market, especially for university graduates and older sysadmins (see Over 50 and unemployed), and one starts to suspect that the life of the modern sysadmin is far from paradise. When you read some job descriptions on sites like Monster, Dice, or Indeed, you ask yourself whether those people really want to hire anybody, or whether the posting is just a smokescreen for H-1B labor certification. The level of detail is often so precise that it is almost impossible to fit the specialization. They do not care about the level of talent, and they do not want to train a suitable candidate. They want a person who fits 100% from day one. Also, positions are often available mostly in places like New York or San Francisco, where both rents and property prices are high and growing while income growth has been stagnant.

The vandalism of Unix performed by Red Hat with RHEL 7 makes the current environment somewhat unhealthy. It is clear that this was done to enhance Red Hat's marketing position, in the interests of the Red Hat and IBM brass, not in the interest of the community. This is a typical Microsoft-style trick, which instantly made dozens of high-quality books written by very talented authors semi-obsolete. And the question arises whether it makes sense to write any book about RHEL administration other than for a solid advance. Of course, systemd generated some backlash, but Red Hat's position as the Microsoft of Linux allows it to shove its inferior technical decisions down users' throats. In a way it reminds me of the way Microsoft dealt with Windows 7, replacing it with Windows 10 and essentially destroying the previous Windows interface ecosystem, putting keyboard users at a disadvantage (while preserving binary compatibility). Red Hat did essentially the same to server sysadmins.

Dr. Nikolai Bezroukov

P.S. See also

P.P.S. Here are my notes/reflections on sysadmin problems that often arise in the rather strange (and sometimes pretty toxic) IT departments of large corporations:


Old News ;-)


"I appreciate Woody Allen's humor because one of my safety valves is an appreciation for life's absurdities. His message is that life isn't a funeral march to the grave. It's a polka."

-- Dennis Kucinich

[Sep 25, 2020] Angry Bear -- All My Children

Sep 25, 2020 | angrybearblog.com

Comments (1)

  1. Likbez , September 25, 2020 11:05 am

    That's a pretty naive take on the subject.

    For example, Microsoft's success was in large part determined by its alliance with IBM in the creation of the PC, and then by exploiting IBM's ineptness to ride this via shrewd marketing, alliances, and the "natural monopoly" tendencies in IT. MS-DOS was a clone of CP/M that was bought, extended, and skillfully marketed. Zero innovation here.

    Both Microsoft and Apple rely on research labs in other companies to produce innovations, which they then productize and market. Even Steve Jobs' smartphone was not an innovation per se: it was just a slick form factor that was the most successful in the market. All the functionality existed in other products.

    Facebook was prelude to, has given the world a glimpse into, the future.

    From a purely technical POV, Facebook is mostly junk. It is a tremendous database of user information which users supply themselves due to cultivated exhibitionism. A kind of private intelligence company. The mere fact that the software was written in PHP tells you something about the real Zuckerberg level.

    Amazon created a usable interface for shopping via the Internet (creating a comments infrastructure and a usable user account database), but this is not innovation in any sense of the word. It prospered by stealing a large part of Wal-Mart's logistics software (and people) and using Wal-Mart's tricks with suppliers. So the Bezos model was a Wal-Mart clone on the Internet.

    Unless something is done, Bezos will soon be the most powerful man in the world.

    People like Bezos, the Google founders, and, to a certain extent, Zuckerberg are part of the intelligence agencies' infrastructure. Remember PRISM. So implicitly we can assume that they all report to the head of the CIA.

    Artificial Intelligence, AI, is another consequence of this era of innovation that demands our immediate attention.

    There is very little intelligence in artificial intelligence :-). The intelligent behavior of robots is mostly an illusion created by Clarke's third law:

    "Any sufficiently advanced technology is indistinguishable from magic." https://en.wikipedia.org/wiki/Clarke%27s_three_laws

    Most of the amazing things that we see are the net result of the tremendous rise in the computing power of von Neumann architecture machines.

    At some point quantity turns into quality.

[Sep 22, 2020] Taming the tar command -- Tips for managing backups in Linux by Gabby Taylor

Sep 18, 2020 | www.redhat.com
How to append or add files to a backup

In this example, we add onto an existing backup named backup.tar. This allows you to append additional files to the pre-existing archive backup.tar.

# tar -rvf backup.tar /path/to/file.xml

Let's break down these options:

-r - Append to archive
-v - Verbose output
-f - Name the file
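
To confirm the file actually landed in the archive, you can list its contents afterwards. This is just a quick check, assuming the archive is still named backup.tar as above:

# tar -tvf backup.tar | tail -3

Here -t lists the archive contents instead of extracting them, so the last few lines should show the newly appended file.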

How to split a backup into smaller backups

In this example, we split the existing backup into smaller archived files. You can pipe the tar command into the split command.

# tar cvf - /dir | split --bytes=200MB - backup.tar

Let's break down these options:

-c - Create the archive
-v - Verbose output
-f - Name the file

In this example, /dir is the directory whose content you want to back up and split. We are making 200MB pieces from the /dir folder.
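
To restore from the split pieces later, concatenate them back into a single stream and feed that stream to tar. This is only a sketch, assuming the default suffixes (backup.taraa, backup.tarab, and so on) produced by the split command above:

# cat backup.tar* | tar -xvf -

The shell expands backup.tar* in the correct (alphabetical) order, and the trailing - tells tar to read the archive from standard input.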

How to check the integrity of a tar.gz backup

In this example, we check the integrity of an existing tar archive.

To test that the gzip file is not corrupt:

# gunzip -t backup.tar.gz

To test the tar file content's integrity:

# gunzip -c backup.tar.gz | tar t > /dev/null

OR

# tar -tvWf backup.tar

Let's break down these options:

-W - Verify an archive file
-t - List files of archived file
-v - Verbose output
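
Because gunzip -t exits with a non-zero status when the archive is damaged, the check is easy to drop into a script or cron job. A minimal sketch, reusing the backup file name from above:

# gunzip -t backup.tar.gz && echo "backup.tar.gz OK" || echo "backup.tar.gz CORRUPT"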

Use pipes and greps to locate content

In this example, we use pipes and greps to locate content. The best option is already made for you: zgrep can be utilized for gzip archives.

# zgrep <keyword> backup.tar.gz

You can also use the zcat command. This shows the content of the archive, then pipes that output to a grep .

# zcat backup.tar.gz | grep <keyword>

Egrep is a great one to use for regular (uncompressed) file types.

[Sep 05, 2020] documentation - How do I get the list of exit codes (and-or return codes) and meaning for a command-utility

Sep 05, 2020 | unix.stackexchange.com

What exit code should I use?

There is no "recipe" to get the meanings of an exit status of a given terminal command.

My first attempt would be the manpage:

user@host:~# man ls 
   Exit status:
       0      if OK,

       1      if minor problems (e.g., cannot access subdirectory),

       2      if serious trouble (e.g., cannot access command-line argument).

Second: Google. See wget as an example.

Third: the exit statuses of the shell, for example Bash. Bash and its builtins may use values above 125 specially: 127 for command not found, 126 for command not executable. For more information see the bash exit codes.
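
You can see the 126 and 127 conventions for yourself in an interactive Bash session; the script name here is purely illustrative:

$ ./not-executable.sh
bash: ./not-executable.sh: Permission denied
$ echo $?
126
$ nosuchcommand
bash: nosuchcommand: command not found
$ echo $?
127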

A list of sysexits on both Linux and BSD/OS X, with preferred exit codes for programs (64-78), can be found in /usr/include/sysexits.h (or: man sysexits on BSD):

0   /* successful termination */
64  /* base value for error messages */
64  /* command line usage error */
65  /* data format error */
66  /* cannot open input */
67  /* addressee unknown */
68  /* host name unknown */
69  /* service unavailable */
70  /* internal software error */
71  /* system error (e.g., can't fork) */
72  /* critical OS file missing */
73  /* can't create (user) output file */
74  /* input/output error */
75  /* temp failure; user is invited to retry */
76  /* remote error in protocol */
77  /* permission denied */
78  /* configuration error */
/* maximum listed value */

The above list allocates previously unused exit codes from 64-78. The range of unallotted exit codes will be further restricted in the future.

However, the above values are mainly used in sendmail and by pretty much nobody else, so they aren't anything remotely close to a standard (as pointed out by @Gilles).

In the shell, the exit statuses are as follows (based on Bash):

1 - catchall for general errors
2 - misuse of shell builtins (according to Bash documentation)
126 - command invoked cannot execute (permission problem or not an executable)
127 - command not found
128 - invalid argument to exit
128+n - fatal error signal n (e.g., a process killed with kill -9 returns 137 = 128 + 9)
130 - script terminated by Ctrl+C (128 + 2, SIGINT)
255 - exit status out of range (exit takes only integer arguments in the range 0 - 255)

According to the above table, exit codes 1 - 2, 126 - 165, and 255 have special meanings, and should therefore be avoided for user-specified exit parameters.

Please note that out of range exit values can result in unexpected exit codes (e.g. exit 3809 gives an exit code of 225, 3809 % 256 = 225).
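
Both the wrap-around and the 128+signal convention are easy to verify from the shell:

$ bash -c 'exit 3809'; echo $?
225
$ bash -c 'kill -TERM $$'; echo $?
143

The second result is 143 because the child shell was killed by SIGTERM, signal 15, and 128 + 15 = 143.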


You will have to look into the code/documentation. However, the thing that comes closest to a "standardization" is errno.h. (Answered by Thorsten Staerk.)


thanks for pointing the header file.. tried looking into the documentation of a few utils.. hard time finding the exit codes, seems most will be the stderrs... – precise Jan 22 '14 at 9:13

[Aug 10, 2020] How to Run and Control Background Processes on Linux

Aug 10, 2020 | www.howtogeek.com

By Dave McKay (@thegurkha), September 24, 2019, 8:00am EDT

Use the Bash shell in Linux to manage foreground and background processes. You can use Bash's job control functions and signals to give you more flexibility in how you run commands. We show you how.

All About Processes

Whenever a program is executed in a Linux or Unix-like operating system, a process is started. "Process" is the name for the internal representation of the executing program in the computer's memory. There is a process for every active program. In fact, there is a process for nearly everything that is running on your computer. That includes the components of your graphical desktop environment (GDE) such as GNOME or KDE , and system daemons that are launched at start-up.

Why nearly everything that is running? Well, Bash built-ins such as cd , pwd , and alias do not need to have a process launched (or "spawned") when they are run. Bash executes these commands within the instance of the Bash shell that is running in your terminal window. These commands are fast precisely because they don't need to have a process launched for them to execute. (You can type help in a terminal window to see the list of Bash built-ins.)

Processes can be running in the foreground, in which case they take over your terminal until they have completed, or they can be run in the background. Processes that run in the background don't dominate the terminal window and you can continue to work in it. Or at least, they don't dominate the terminal window if they don't generate screen output.

A Messy Example

We'll start a simple ping trace running . We're going to ping the How-To Geek domain. This will execute as a foreground process.

ping www.howtogeek.com

ping www.howtogeek.com in a terminal window

We get the expected results, scrolling down the terminal window. We can't do anything else in the terminal window while ping is running. To terminate the command hit Ctrl+C .

Ctrl+C

ping trace output in a terminal window

The visible effect of the Ctrl+C is highlighted in the screenshot. ping gives a short summary and then stops.

Let's repeat that. But this time we'll hit Ctrl+Z instead of Ctrl+C . The task won't be terminated. It will become a background task. We get control of the terminal window returned to us.

ping www.howtogeek.com
Ctrl+Z

effect of Ctrl+Z on a command running in a terminal window

The visible effect of hitting Ctrl+Z is highlighted in the screenshot.

This time we are told the process is stopped. Stopped doesn't mean terminated. It's like a car at a stop sign. We haven't scrapped it and thrown it away. It's still on the road, stationary, waiting to go. The process is now a background job .

The jobs command will list the jobs that have been started in the current terminal session. And because jobs are (inevitably) processes, we can also use the ps command to see them. Let's use both commands and compare their outputs. We'll use the T (terminal) option to list only the processes that are running in this terminal window. Note that there is no need to use a hyphen - with the T option.

jobs
ps T

jobs command in a terminal window

The jobs command tells us:

The job number (in square brackets), with + marking the default job.
The state of the job ("Stopped").
The command that was used to start it.

The ps command tells us:

The process ID (PID), the terminal (TTY), the state (STAT), the CPU time used, and the command line.

These are common values for the STAT column:

D: uninterruptible sleep (usually waiting on I/O).
I: idle kernel thread.
R: running or runnable.
S: interruptible sleep (waiting for an event to complete).
T: stopped by a job control signal.
Z: zombie (terminated but not yet reaped by its parent).

The value in the STAT column can be followed by one of these extra indicators:

<: high priority.
N: low priority (nice).
l: multi-threaded.
s: a session leader.
+: a member of the foreground process group.

We can see that Bash has a state of Ss . The uppercase "S" tells us the Bash shell is sleeping, and it is interruptible. As soon as we need it, it will respond. The lowercase "s" tells us that the shell is a session leader.

The ping command has a state of T . This tells us that ping has been stopped by a job control signal. In this example, that was the Ctrl+Z we used to put it into the background.

The ps T command has a state of R , which stands for running. The + indicates that this process is a member of the foreground group. So the ps T command is running in the foreground.

The bg Command

The bg command is used to resume a background process. It can be used with or without a job number. If you use it without a job number, it acts on the default (most recent) job. The process resumes, but it still runs in the background. You cannot send any input to it.

If we issue the bg command, we will resume our ping command:

bg

bg in a terminal window

The ping command resumes and we see the scrolling output in the terminal window once more. The name of the command that has been restarted is displayed for you. This is highlighted in the screenshot.

resumed ping background process with output in a terminal widow

But we have a problem. The task is running in the background and won't accept input. So how do we stop it? Ctrl+C doesn't do anything. We can see it when we type it but the background task doesn't receive those keystrokes so it keeps pinging merrily away.

Background task ignoring Ctrl+C in a terminal window

In fact, we're now in a strange blended mode. We can type in the terminal window, but what we type is quickly swept away by the scrolling output from the ping command. Anything we type takes effect in the foreground.

To stop our background task we need to bring it to the foreground and then stop it.

The fg Command

The fg command will bring a background task into the foreground. Just like the bg command, it can be used with or without a job number. Using it with a job number means it will operate on a specific job. If it is used without a job number the last command that was sent to the background is used.

If we type fg our ping command will be brought to the foreground. The characters we type are mixed up with the output from the ping command, but they are operated on by the shell as if they had been entered on the command line as usual. And in fact, from the Bash shell's point of view, that is exactly what has happened.

fg

fg command mixed in with the output from ping in a terminal window

And now that we have the ping command running in the foreground once more, we can use Ctrl+C to kill it.

Ctrl+C

Ctrl+C stopping the ping command in a terminal window

We Need to Send the Right Signals

That wasn't exactly pretty. Evidently running a process in the background works best when the process doesn't produce output and doesn't require input.

But, messy or not, our example did accomplish its goal: we moved a running process into the background, resumed it there, brought it back into the foreground, and terminated it.

When you use Ctrl+C and Ctrl+Z , you are sending signals to the process. These are shorthand ways of using the kill command. There are 64 different signals that kill can send. Use kill -l at the command line to list them. kill isn't the only source of these signals; some of them are raised automatically by other processes within the system.

Here are some of the commonly used ones:

SIGHUP (1): hangup, sent when the controlling terminal is closed.
SIGINT (2): interrupt, the signal sent by Ctrl+C.
SIGQUIT (3): quit, sent by Ctrl+\.
SIGKILL (9): kill the process immediately; it cannot be caught or ignored.
SIGTERM (15): terminate gracefully; this is the default signal sent by kill.
SIGTSTP (20): terminal stop, the signal sent by Ctrl+Z.

We must use the kill command to issue signals that do not have key combinations assigned to them.
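
For example, to send SIGHUP (which has no key combination) to our background job, you can address it either by job spec or by process ID; the PID shown here is only illustrative:

kill -HUP %1
kill -HUP 1979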

Further Job Control

A process moved into the background by using Ctrl+Z is placed in the stopped state. We have to use the bg command to start it running again. To launch a program as a running background process is simple. Append an ampersand & to the end of the command line.

Although it is best that background processes do not write to the terminal window, we're going to use examples that do. We need to have something in the screenshots that we can refer to. This command will start an endless loop as a background process:

while true; do echo "How-To Geek Loop Process"; sleep 3; done &

while true; do echo "How-To Geek Loop Process"; sleep 3; done & in a terminal window

We are told the job number and process ID of the process. Our job number is 1, and the process ID is 1979. We can use these identifiers to control the process.

The output from our endless loop starts to appear in the terminal window. As before, we can use the command line but any commands we issue are interspersed with the output from the loop process.

ls

output of the background loop process interspersed with output from other commands

To stop our process we can use jobs to remind ourselves what the job number is, and then use kill .

jobs reports that our process is job number 1. To use that number with kill we must precede it with a percent sign % .

jobs
kill %1

jobs and kill %1 in a terminal window

kill sends the SIGTERM signal, signal number 15, to the process and it is terminated. When the Enter key is next pressed, a status of the job is shown. It lists the process as "terminated." If the process does not respond to the kill command you can take it up a notch. Use kill with SIGKILL , signal number 9. Just put -9 between the kill command and the job number.

kill -9 %1

[Jul 30, 2020] Getting started with tmux

Jul 30, 2020 | opensource.com

Install tmux from your distribution's package repository or ports tree. On Mac, use Homebrew.

For example, on RHEL or Fedora:

$ sudo dnf install tmux
Start tmux

To start tmux, open a terminal and type:

$ tmux

When you do this, the obvious result is that tmux launches a new shell in the same window with a status bar along the bottom. There's more going on, though, and you can see it with this little experiment. First, do something in your current terminal to help you tell it apart from another empty terminal:

$ echo hello
hello

Now press Ctrl+B followed by C on your keyboard. It might look like your work has vanished, but actually, you've created what tmux calls a window (which can be, admittedly, confusing because you probably also call the terminal you launched a window ). Thanks to tmux, you actually have two windows open, both of which you can see listed in the status bar at the bottom of tmux. You can navigate between these two windows by index number. For instance, press Ctrl+B followed by 0 to go to the initial window:

$ echo hello
hello

Press Ctrl+B followed by 1 to go to the first new window you created.

You can also "walk" through your open windows using Ctrl+B and N (for Next) or P (for Previous).

The tmux trigger and commands

The keyboard shortcut Ctrl+B is the tmux trigger. When you press it in a tmux session, it alerts tmux to "listen" for the next key or key combination that follows. All tmux shortcuts, therefore, are prefixed with Ctrl+B .

You can also access a tmux command line and type tmux commands by name. For example, to create a new window the hard way, you can press Ctrl+B followed by : to enter the tmux command line. Type new-window and press Enter to create a new window. This does exactly the same thing as pressing Ctrl+B then C .

Splitting windows into panes

Once you have created more than one window in tmux, it's often useful to see them all in one window. You can split a window horizontally (meaning the split is horizontal, placing one window in a North position and another in a South position) or vertically (with windows located in West and East positions).

You can split windows that have been split, so the layout is up to you and the number of lines in your terminal.
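
In a default tmux configuration, the split shortcuts are Ctrl+B followed by " for a horizontal split (one pane above the other) and Ctrl+B followed by % for a vertical split (panes side by side); the equivalent commands at the tmux command line are split-window and split-window -h . If your configuration has been customized, the bindings may differ.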

[Screenshot: tmux panes split in a golden-ratio layout (Seth Kenlon, CC BY-SA 4.0)]

Sometimes things can get out of hand. You can adjust a terminal full of haphazardly split panes using these quick preset layouts (in a default configuration, press Ctrl+B and then the Alt+number combination):

Ctrl+B Alt+1 - even-horizontal: panes spread evenly left to right
Ctrl+B Alt+2 - even-vertical: panes stacked evenly top to bottom
Ctrl+B Alt+3 - main-horizontal: one large pane on top, the rest below
Ctrl+B Alt+4 - main-vertical: one large pane on the left, the rest on the right
Ctrl+B Alt+5 - tiled: all panes arranged in an even grid

Switching between panes

To get from one pane to another, press Ctrl+B followed by O (as in other ). The border around the pane changes color based on your position, and your terminal cursor changes to its active state. This method "walks" through panes in order of creation.

Alternatively, you can use your arrow keys to navigate to a pane according to your layout. For example, if you've got two open panes divided by a horizontal split, you can press Ctrl+B followed by the Up arrow to switch from the lower pane to the top pane. Likewise, Ctrl+B followed by the Down arrow switches from the upper pane to the lower one.

Running a command on multiple hosts with tmux

Now that you know how to open many windows and divide them into convenient panes, you know nearly everything you need to know to run one command on multiple hosts at once. Assuming you have a layout you're happy with and each pane is connected to a separate host, you can synchronize the panes such that the input you type on your keyboard is mirrored in all panes.

To synchronize panes, access the tmux command line with Ctrl+B followed by : , and then type setw synchronize-panes .

Now anything you type on your keyboard appears in each pane, and each pane responds accordingly.
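
To turn synchronization back off, open the tmux command line again and type setw synchronize-panes off . You can also drive tmux from an ordinary shell prompt instead of using keyboard shortcuts; a minimal sketch, with the session name and the command being arbitrary examples:

$ tmux new-session -d -s servers        # create a detached session named "servers"
$ tmux split-window -h -t servers       # add a second pane beside the first
$ tmux send-keys -t servers 'uptime' Enter   # type a command into the active pane
$ tmux attach -t servers                # attach to the session to see the result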

It's relatively easy to remember Ctrl+B to invoke tmux features, but the keys that follow can be difficult to remember at first. All built-in tmux keyboard shortcuts are available by pressing Ctrl+B followed by ? (exit the help screen with Q ), although the help screen can be a little overwhelming for all its options, none of which are organized by task or topic.


[Jul 29, 2020] Linux Commands: jobs, bg, and fg by Tyler Carrigan

Jul 23, 2020 | www.redhat.com

In this quick tutorial, I want to look at the jobs command and a few of the ways that we can manipulate the jobs running on our systems. In short, controlling jobs lets you suspend and resume processes started in your Linux shell.

Jobs

The jobs command will list all jobs on the system; active, stopped, or otherwise. Before I explore the command and output, I'll create a job on my system.

I will use the sleep job as it won't change my system in any meaningful way.

[tcarrigan@rhel ~]$ sleep 500
^Z
[1]+  Stopped                 sleep 500

First, I issued the sleep command, and then I received the job number [1]. I then immediately stopped the job by using Ctrl+Z . Next, I run the jobs command to view the newly created job:

[tcarrigan@rhel ~]$ jobs
[1]+  Stopped                 sleep 500

You can see that I have a single stopped job identified by the job number [1] .

Other options to know for this command include:

-l : list process IDs in addition to the normal information
-n : list only jobs that have changed status since the last notification
-p : list only the process ID of the job's process group leader
-r : restrict output to running jobs
-s : restrict output to stopped jobs

Background

Next, I'll resume the sleep job in the background. To do this, I use the bg command. Now, the bg command has a pretty simple syntax, as seen here:

bg [JOB_SPEC]

Where JOB_SPEC can be any of the following:

%n : job number n
%str : job whose command begins with str
%?str : job whose command contains str
%% or %+ : the current (most recent) job
%- : the previous job

NOTE : bg and fg operate on the current job if no JOB_SPEC is provided.
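
For instance, any of these forms would resume the stopped sleep job from above in the background; the string forms match against the command line, so they are only illustrative here:

bg %1        # by job number
bg %sleep    # job whose command begins with "sleep"
bg %?500     # job whose command contains "500"
bg           # no JOB_SPEC: acts on the current job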

I can move this job to the background by using the job number [1] .

[tcarrigan@rhel ~]$ bg %1
[1]+ sleep 500 &

You can see now that I have a single running job in the background.

[tcarrigan@rhel ~]$ jobs
[1]+  Running                 sleep 500 &
Foreground

Now, let's look at how to move a background job into the foreground. To do this, I use the fg command. The command syntax is the same for the foreground command as with the background command.

fg [JOB_SPEC]

Refer to the above bullets for details on JOB_SPEC.

I have started a new sleep in the background:

[tcarrigan@rhel ~]$ sleep 500 &
[2] 5599

Now, I'll move it to the foreground by using the following command:

[tcarrigan@rhel ~]$ fg %2
sleep 500

The fg command has now brought the sleep job back into the foreground of my terminal.

The end

While I realize that the jobs presented here were trivial, these concepts can be applied to more than just the sleep command. If you run into a situation that requires it, you now have the knowledge to move running or stopped jobs from the foreground to background and back again.

[Jul 29, 2020] 10 Linux commands to know the system - nixCraft

Jul 29, 2020 | www.cyberciti.biz

10 Linux commands to know the system

Open the terminal application and then start typing these commands to know your Linux desktop or cloud server/VM.

1. free – get free and used memory

Are you running out of memory? Use the free command to show the total amount of free and used physical (RAM) and swap memory in the Linux system. It also displays the buffers and caches used by the kernel:
free
# human readable outputs
free -h
# use the cat command to find geeky details
cat /proc/meminfo

Linux display amount of free and used memory in the system
However, the free command will not give information about memory configurations, maximum supported memory by the Linux server , and Linux memory speed . Hence, we must use the dmidecode command:
sudo dmidecode -t memory
Want to determine the amount of video memory under Linux, try:
lspci | grep -i vga
glxinfo | egrep -i 'device|memory'

See " Linux Find Out Video Card GPU Memory RAM Size Using Command Line " and " Linux Check Memory Usage Using the CLI and GUI " for more information.

2. hwinfo – probe for hardware

We can quickly probe for the hardware present in the Linux server or desktop:
# Find detailed info about the Linux box
hwinfo
# Show only a summary #
hwinfo --short
# View all disks #
hwinfo --disk
# Get an overview #
hwinfo --short --block
# Find a particular disk #
hwinfo --disk --only /dev/sda
hwinfo --disk --only /dev/sda
# Try 4 graphics card ports for monitor data #
hwprobe=bios.ddc.ports=4 hwinfo --monitor
# Limit info to specific devices #
hwinfo --short --cpu --disk --listmd --gfxcard --wlan --printer

hwinfo
Alternatively, you may find the lshw command and inxi command useful to display your Linux hardware information:
sudo lshw -short
inxi -Fxz

inxi
inxi is a system information tool for viewing system configuration and hardware. It shows system hardware, CPU, drivers, Xorg, desktop, kernel, gcc version(s), processes, RAM usage, and a wide variety of other useful information.
3. id – know yourself

Display Linux user and group information for the given USER name. If the user name is omitted, it shows information for the current user:
id

uid=1000(vivek) gid=1000(vivek) groups=1000(vivek),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),115(lpadmin),116(sambashare),998(lxd)

See who is logged on your Linux server:
who
who am i

4. lsblk – list block storage devices

All Linux block devices give buffered access to hardware devices and allow reading and writing blocks as per configuration. Linux block devices have names; for example, /dev/nvme0n1 for NVMe and /dev/sda for SCSI devices such as HDD/SSD. But you don't have to remember them. You can list them easily using the following syntax:
lsblk
# list only #
lsblk -l
# filter out loop devices using the grep command #
lsblk -l | grep -v '^loop'

NAME          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
md0             9:0    0   3.7G  0 raid1 /boot
md1             9:1    0 949.1G  0 raid1 
md1_crypt     253:0    0 949.1G  0 crypt 
nixcraft-swap 253:1    0 119.2G  0 lvm   [SWAP]
nixcraft-root 253:2    0 829.9G  0 lvm   /
nvme1n1       259:0    0 953.9G  0 disk  
nvme1n1p1     259:1    0   953M  0 part  
nvme1n1p2     259:2    0   3.7G  0 part  
nvme1n1p3     259:3    0 949.2G  0 part  
nvme0n1       259:4    0 953.9G  0 disk  
nvme0n1p1     259:5    0   953M  0 part  /boot/efi
nvme0n1p2     259:6    0   3.7G  0 part  
nvme0n1p3     259:7    0 949.2G  0 part
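lsblk can also show filesystem details or a custom set of columns, which is often handier than the full default output:
# show filesystem type, label, UUID and mount point #
lsblk -f
# pick only the columns you care about #
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT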
5. lsb_release – Linux distribution information

Want to get distribution-specific information, such as a description of the currently installed distribution, the release number, and the code name?
lsb_release -a
No LSB modules are available.

Distributor ID:	Ubuntu
Description:	Ubuntu 20.04.1 LTS
Release:	20.04
Codename:	focal
6. lscpu – display info about the CPUs

The lscpu command gathers and displays CPU architecture information in an easy-to-read format for humans including various CPU bugs:
lscpu

Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   39 bits physical, 48 bits virtual
CPU(s):                          12
On-line CPU(s) list:             0-11
Thread(s) per core:              2
Core(s) per socket:              6
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       GenuineIntel
CPU family:                      6
Model:                           158
Model name:                      Intel(R) Core(TM) i7-9850H CPU @ 2.60GHz
Stepping:                        13
CPU MHz:                         976.324
CPU max MHz:                     4600.0000
CPU min MHz:                     800.0000
BogoMIPS:                        5199.98
Virtualization:                  VT-x
L1d cache:                       192 KiB
L1i cache:                       192 KiB
L2 cache:                        1.5 MiB
L3 cache:                        12 MiB
NUMA node0 CPU(s):               0-11
Vulnerability Itlb multihit:     KVM: Mitigation: Split huge pages
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds:             Mitigation; TSX disabled
Vulnerability Tsx async abort:   Mitigation; TSX disabled
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_g
                                 ood nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes x
                                 save avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep 
                                 bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities

CPUs can be listed using the lshw command too:
sudo lshw -C cpu

7. lstopo – display hardware topology

Want to see the topology of the Linux server or desktop? Try:
lstopo
lstopo-no-graphics

Linux show the topology of the system command
You will see information about:

  1. NUMA memory nodes
  2. shared caches
  3. CPU packages
  4. Processor cores
  5. processor "threads" and more
8. lsusb – list usb devices

We all use USB devices, such as external hard drives and keyboards. Run the lsusb command to display information about USB buses in the Linux system and the devices connected to them.
lsusb
# Want a graphical summary of USB devices connected to the system? #
sudo usbview

usbview
usbview provides a graphical summary of USB devices connected to the system. Detailed information may be displayed by selecting individual devices in the tree display
lspci – list PCI devices

We use the lspci command for displaying information about PCI buses in the system and devices connected to them:
lspci

9. timedatectl – view current date and time zone

Typically we use the date command to set or get date/time information on the CLI:
date
However, modern Linux distros use the timedatectl command to query and change the system clock and its settings, and enable or disable time synchronization services (NTPD and co):
timedatectl

               Local time: Sun 2020-07-26 16:31:10 IST
           Universal time: Sun 2020-07-26 11:01:10 UTC
                 RTC time: Sun 2020-07-26 11:01:10    
                Time zone: Asia/Kolkata (IST, +0530)  
System clock synchronized: yes                        
              NTP service: active                     
          RTC in local TZ: no
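timedatectl can also change these settings, not just display them. For example (the time zone below is only an example; run timedatectl list-timezones to see the valid names):
sudo timedatectl set-timezone Asia/Kolkata
sudo timedatectl set-ntp true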
10. w – who is logged in

Run the w command on Linux to see information about the Linux users currently on the machine, and their processes:

$ w

Conclusion

And this concludes our ten Linux commands for getting to know a system, helping you quickly increase your productivity and solve problems. Let me know about your favorite tool in the comment section below.

[Jul 17, 2020] No Masks, No Coughs: Robots Can Be Just What the Doctor Ordered in Time of Social Distancing

July 8, 2020 | www.washingtonpost.com

The Washington Post
Simon Denyer; Akiko Kashiwagi; Min Joo Kim
July 8, 2020

In Japan, a country with a long fascination with robots, automated assistants have offered their services as bartenders, security guards, deliverymen, and more, since the onset of the coronavirus pandemic. Japan's Avatarin developed the "newme" robot to allow people to be present while maintaining social distancing during the pandemic.

The telepresence robot is essentially a tablet on a wheeled stand with the user's face on the screen, whose location and direction can be controlled via laptop or tablet. Doctors have used the newme robot to communicate with patients in a coronavirus ward, while university students in Tokyo used it to remotely attend a graduation ceremony.

The company is working on prototypes that will allow users to control the robot through virtual reality headsets, and gloves that would permit users to lift, touch, and feel objects through a remote robotic hand.


[Jul 14, 2020] Important Linux /proc filesystem files you need to know - Enable Sysadmin

Jul 14, 2020 | www.redhat.com

The /proc files I find most valuable, especially for inherited system discovery, are /proc/cmdline , /proc/cpuinfo , /proc/meminfo , and /proc/version .

And the most valuable of those are cpuinfo and meminfo .

Again, I'm not stating that other files don't have value, but these are the ones I've found that have the most value to me. For example, the /proc/uptime file gives you the system's uptime in seconds. For me, that's not particularly valuable. However, if I want that information, I use the uptime command that also gives me a more readable version of /proc/loadavg as well.

By comparison:

$ cat /proc/uptime
46901.13 46856.69

$ cat /proc/loadavg 
0.00 0.01 0.03 2/111 2039

$ uptime
 00:56:13 up 13:01,  2 users,  load average: 0.00, 0.01, 0.03

I think you get the idea.

/proc/cmdline

This file shows the parameters passed to the kernel at the time it is started.

$ cat /proc/cmdline

BOOT_IMAGE=/vmlinuz-3.10.0-1062.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto spectre_v2=retpoline rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_US.UTF-8

The value of this information is in how the kernel was booted because any switches or special parameters will be listed here, too. And like all information under /proc , it can be found elsewhere and usually with better formatting, but /proc files are very handy when you can't remember the command or don't want to grep for something.

/proc/cpuinfo

The /proc/cpuinfo file is the first file I check when connecting to a new system. I want to know the CPU make-up of a system and this file tells me everything I need to know.

$ cat /proc/cpuinfo 

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 142
model name      : Intel(R) Core(TM) i5-7360U CPU @ 2.30GHz
stepping        : 9
cpu MHz         : 2303.998
cache size      : 4096 KB
physical id     : 0
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 22
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc eagerfpu pni pclmulqdq monitor ssse3 cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx rdrand hypervisor lahf_lm abm 3dnowprefetch fsgsbase avx2 invpcid rdseed clflushopt md_clear flush_l1d
bogomips        : 4607.99
clflush size    : 64
cache_alignment : 64
address sizes   : 39 bits physical, 48 bits virtual
power management:

This is a virtual machine and only has one vCPU. If your system contains more than one CPU, the CPU numbering begins at 0 for the first CPU.
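
Because the file repeats one stanza per logical CPU, a couple of one-liners pull out the figures most people want first. A minimal sketch against the sample above:

$ grep -c ^processor /proc/cpuinfo      # number of logical CPUs
1
$ grep -m1 'model name' /proc/cpuinfo   # CPU model, shown once
model name      : Intel(R) Core(TM) i5-7360U CPU @ 2.30GHz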

/proc/meminfo

The /proc/meminfo file is the second file I check on a new system. It gives me a general and a specific look at a system's memory allocation and usage.

$ cat /proc/meminfo 
MemTotal:        1014824 kB
MemFree:          643608 kB
MemAvailable:     706648 kB
Buffers:            1072 kB
Cached:           185568 kB
SwapCached:            0 kB
Active:           187568 kB
Inactive:          80092 kB
Active(anon):      81332 kB
Inactive(anon):     6604 kB
Active(file):     106236 kB
Inactive(file):    73488 kB
Unevictable:           0 kB
Mlocked:               0 kB
***Output truncated***

I think most sysadmins either use the free or the top command to pull some of the data contained here. The /proc/meminfo file gives me a quick memory overview that I like and can redirect to another file as a snapshot.
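
For the quick overview and the snapshot mentioned above, something like this is enough. A minimal sketch (the snapshot file name is arbitrary):

$ grep -E 'MemTotal|MemAvailable|Swap' /proc/meminfo
MemTotal:        1014824 kB
MemAvailable:     706648 kB
SwapCached:            0 kB
***Output truncated***

$ cat /proc/meminfo > /root/meminfo-$(date +%F-%H%M)   # snapshot for later comparison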

/proc/version

The /proc/version file provides more information than the related uname -a command does. Here are the two compared:

$ cat /proc/version
Linux version 3.10.0-1062.el7.x86_64 (mockbuild@kbuilder.bsys.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC) ) #1 SMP Wed Aug 7 18:08:02 UTC 2019

$ uname -a
Linux centos7 3.10.0-1062.el7.x86_64 #1 SMP Wed Aug 7 18:08:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

Usually, the uname -a command is sufficient to give you kernel version info but for those of you who are developers or who are ultra-concerned with details, the /proc/version file is there for you.

Wrapping up

The /proc filesystem has a ton of valuable information available to system administrators who want a convenient, non-command way of getting at raw system info. As I stated earlier, there are other ways to display the information in /proc . Additionally, some of the /proc info isn't what you'd want to use for system assessment. For example, use commands such as vmstat 5 5 or iostat 5 5 to get a better picture of system performance rather than reading one of the available /proc files.

[Jul 14, 2020] Sysadmin tales- How to keep calm and not panic when things break - Enable Sysadmin

Jul 14, 2020 | www.redhat.com

When an incident occurs, resist the urge to freak out. Instead, use these tips to help you keep your cool and find a solution.

Posted: July 10, 2020 | by Glen Newell (Sudoer)


It was a dark and stormy summer afternoon in Denver

I was working on several projects simultaneously for a small company that had been carved out of a larger one that had gone out of business. The smaller company had inherited some of the bigger company's infrastructure, and all the headaches along with it. That day, I had some additional consultants working with me on a project to migrate email service from a large proprietary onsite cluster to a cloud provider, while at the same time, I was working on reconfiguring a massive storage array.

At some point, I clicked the wrong button.

All of a sudden, I started getting calls. The CIO and the consultants were standing in front of my desk. The email servers were completely offline -- they responded, but could not access the backing storage. I didn't know it yet, but I had deleted the storage pool for the active email servers.

My vision blurred into a tunnel, and my stomach fell into a bottomless pit. I struggled to breathe. I did my best to maintain a poker face as the executives and consultants watched impatiently. I scanned logs and messages looking for clues. I ran tests on all the components to find the source of the issue and came up with nothing. The data seemed to be gone, and panic was setting in.

I pushed back from the desk and excused myself to use the restroom. Closing and latching the door behind me, I contemplated my fate for a moment, then splashed cold water on my face and took a deep breath. Then it dawned on me: earlier, I had set up an active mirror of that storage pool. The data was all there; I just needed to reconnect it.

I returned to my desk and couldn't help a bit of a smirk. A couple of commands, a couple of clicks, and a sip of coffee. About five minutes of testing, and I could say, "Sorry, guys. Should be good now." The whole thing had happened in about 30 minutes.

We've all been there

Everyone makes mistakes, even the most senior and venerable engineers and systems administrators. We're all human. It just so happens that, as a sysadmin, a small mistake made in a moment can cause very visible problems, and panic. This is normal, though. What separates the hero from the unemployed in that moment can be just a few simple things.

When an incident occurs, focusing on who's at fault can be tempting; blame is something we know how to do and can do something about, and it can even offer some relief if we can tell ourselves it's not our fault. But in fact, blame accomplishes nothing and can be counterproductive in a moment of crisis -- it can distract us from finding a solution to the problem, and create even more stress.

Backups, backups, backups

This is just one of the times when having a backup saved the day for me, and for a client. Every sysadmin I've ever worked with will tell you the same thing -- always have a backup. Do regular backups. Make backups of configurations you are working on. Make a habit of creating a backup as the first step in any project. There are some great articles here on Enable Sysadmin about the various things you can do to protect yourself.
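
That habit can be as simple as copying a config file or tarring up a directory before touching it. A minimal sketch of the kind of thing I mean (the paths are only examples, not from the story above):

# keep a dated copy of the file you are about to edit
cp -a /etc/ssh/sshd_config /etc/ssh/sshd_config.bak.$(date +%F)

# or snapshot a whole configuration tree before a bigger change
tar czf /root/etc-backup-$(date +%F).tar.gz /etc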

Another good practice is to never work on production systems until you have tested the change. This may not always be possible, but if it is, the extra effort and time will be well worth it for the rare occasions when you have an unexpected result, so you can avoid the panic of wondering where you might have saved your most recent resume. Having a plan and being prepared can go a long way to avoiding those very stressful situations.

Breathe in, breathe out

The panic response in humans is related to the "fight or flight" reflex, which served our ancestors so well. It's a really useful resource for avoiding saber tooth tigers (and angry CFOs), but not so much for understanding and solving complex technical problems. Understanding that it's normal but not really helpful, we can recognize it and find a way to overcome it in the moment.

The simplest way we can tame the impulse to blackout and flee is to take a deep breath (or several). Studies have shown that simple breathing exercises and meditation can improve our general outlook and ability to focus on a specific task. There is also evidence that temperature changes can make a difference; something as simple as a splash of water on the face or an ice-cold beverage can calm a panic. These things work for me.

Walk the path of troubleshooting, one step at a time

Once we have convinced ourselves that the world is not going to end immediately, we can focus on solving the problem. Take the situation one element, one step at a time to find what went wrong, then take that and apply the solution(s) systematically. Again, it's important to focus on the problem and solution in front of you rather than worrying about things you can't do anything about right now or what might happen later. Remember, blame is not helpful, and that includes blaming yourself.

Most often, when I focus on the problem, I find that I forget to panic, and I can do even better work on the solution. Many times, I have found solutions I wouldn't have seen or thought of otherwise in this state.

Take five

Another thing that's easy to forget is that, when you've been working on a problem, it's important to give yourself a break. Drink some water. Take a short walk. Rest your brain for a couple of minutes. Hunger, thirst, and fatigue can lead to less clear thinking and, you guessed it, panic.

Time to face the music

My last piece of advice -- though certainly not the least important -- is, if you are responsible for an incident, be honest about what happened. This will benefit you for both the short and long term.

During the early years of the space program, the directors and engineers at NASA established a routine of getting together and going over what went wrong and what and how to improve for the next time. The same thing happens in the military, emergency management, and healthcare fields. It's also considered good agile/DevOps practice. Some of the smartest, highest-strung engineers, administrators, and managers I've known and worked with -- people with millions of dollars and thousands of lives in their area of responsibility -- have insisted on the importance of learning lessons from mistakes and incidents. It's a mark of a true professional to own up to mistakes and work to improve.

It's hard to lose face, but not only will your colleagues appreciate you taking responsibility and working to improve the team, but I promise you will rest better and be able to manage the next problem better if you look at these situations as learning opportunities.

Accidents and mistakes can't ever be avoided entirely, but hopefully, you will find some of this advice useful the next time you face an unexpected challenge.


[Jul 14, 2020] Linux stories- When backups saved the day - Enable Sysadmin

Jul 14, 2020 | www.redhat.com

I set up a backup approach that software vendors refer to as instant restore, shadow restore, preemptive restore, or similar term. We ran incremental backup jobs every hour and restored the backups in the background to a new virtual machine. Each full hour, we had a system ready that was four hours back in time and just needed to be finished. So if I choose to restore the incremental from one hour ago, it would take less time than a complete system restore because only the small increments had to be restored to the almost-ready virtual machine.

And the effort paid off

One day, I was on vacation, having a barbecue and some beer, when I got a call from my colleague telling me that the terminal server with the ERP application was broken due to a failed update and the guy who ran the update forgot to take a snapshot first.

The only thing I needed to tell my colleague was to shut down the broken machine, find the UI of our backup/restore system, and then identify the restore job. Finally, I told him how to choose the timestamp from the last four hours when the restore should finish. The restore finished 30 minutes later, and the system was ready to be used again. We were back in action after a total of 30 minutes, and only the work from the last two hours or so was lost! Awesome! Now, back to vacation.

[Jul 12, 2020] 6 handy Bash scripts for Git - Opensource.com

Jul 12, 2020 | opensource.com

These six Bash scripts will make your life easier when you're working with Git repositories.

15 Jan 2020 | Bob Peterson (Red Hat)

I wrote a bunch of Bash scripts that make my life easier when I'm working with Git repositories. Many of my colleagues say there's no need; that everything I need to do can be done with Git commands. While that may be true, I find the scripts infinitely more convenient than trying to figure out the appropriate Git command to do what I want.

1. gitlog

gitlog prints an abbreviated list of current patches against the master version. It prints them from oldest to newest and shows the author and description, with H for HEAD , ^ for HEAD^ , 2 for HEAD~2, and so forth. For example:

$ gitlog
-----------------------[ recovery25 ]-----------------------
(snip)
11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in dq
H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time

If I want to see what patches are on a different branch, I can specify an alternate branch:

$ gitlog recovery24
2. gitlog.id

gitlog.id just prints the patch SHA1 IDs:

$ gitlog.id
-----------------------[ recovery25 ]-----------------------
56908eeb6940 2ca4a6b628a1 fc64ad5d99fe 02031a00a251 f6f38da7dd18 d8546e8f0023 fc3cc1f98f6b 12c3e0cb3523 76cce178b134 6fc1dce3ab9c 1b681ab074ca 26fed8de719b 802ff51a5670 49f67a512d8c f04f20193bbb 5f6afe809d23 2030521dc70e dada79b3be94 9b19a1e08161 78a035041d3e f03da011cae2 0d2b2e068fcd 2449976aa133 57dfb5e12ccd 53abedfdcf72 6fbdda3474b3 49544a547188 187032f7a63c 6f75dae23d93 95fc2a261b00 ebfb14ded191 f653ee9e414a 0e2911cb8111 73968b76e2e3 8a3e4cb5e92c a5f2da803b5b 7c9ef68388ed 71ca19d0cba8 340d27a33895 9b3c4e6efb10 d2e8c22be39b 9563e31f8bfd ebac7a38036c f703a3c27874 a3e86d2ef30e da3c604755b0 4525c2f5b46f a06a5b7dea02 8ba93c796d5c e8b5ff851bb9

Again, it assumes the current branch, but I can specify a different branch if I want.

3. gitlog.id2

gitlog.id2 is the same as gitlog.id but without the branch line at the top. This is handy for cherry-picking all patches from one branch to the current branch:

$ # create a new branch
$ git branch --track origin/master
$ # check out the new branch I just created
$ git checkout recovery26
$ # cherry-pick all patches from the old branch to the new one
$ for i in `gitlog.id2 recovery25`; do git cherry-pick $i; done

4. gitlog.grep

gitlog.grep greps for a string within that collection of patches. For example, if I find a bug and want to fix the patch that has a reference to function inode_go_sync , I simply do:

$ gitlog.grep inode_go_sync
-----------------------[ recovery25 - 50 patches ]-----------------------
(snip)
11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
152:-static void inode_go_sync(struct gfs2_glock *gl)
153:+static int inode_go_sync(struct gfs2_glock *gl)
163:@@ -296,6 +302,7 @@ static void inode_go_sync(struct gfs2_glock *gl)
8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in dq
H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time

So, now I know that patch HEAD~9 is the one that needs fixing. I use git rebase -i HEAD~10 to edit patch 9, git commit -a --amend , then git rebase --continue to make the necessary adjustments.

5. gitbranchcmp3

gitbranchcmp3 lets me compare my current branch to another branch, so I can compare older versions of patches to my newer versions and quickly see what's changed and what hasn't. It generates a compare script (that uses the KDE tool Kompare , which works on GNOME3, as well) to compare the patches that aren't quite the same. If there are no differences other than line numbers, it prints [SAME] . If there are only comment differences, it prints [same] (in lower case). For example:

$ gitbranchcmp3 recovery24
Branch recovery24 has 47 patches
Branch recovery25 has 50 patches

(snip)
38 87eb6901607a 340d27a33895 [same] gfs2: drain the ail2 list after io errors
39 90fefb577a26 9b3c4e6efb10 [same] gfs2: clean up iopen glock mess in gfs2_create_inode
40 ba3ae06b8b0e d2e8c22be39b [same] gfs2: Do proper error checking for go_sync family of glops
41 2ab662294329 9563e31f8bfd [SAME] gfs2: use page_offset in gfs2_page_mkwrite
42 0adc6d817b7a ebac7a38036c [SAME] gfs2: don't use buffer_heads in gfs2_allocate_page_backing
43 55ef1f8d0be8 f703a3c27874 [SAME] gfs2: Improve mmap write vs. punch_hole consistency
44 de57c2f72570 a3e86d2ef30e [SAME] gfs2: Multi-block allocations in gfs2_page_mkwrite
45 7c5305fbd68a da3c604755b0 [SAME] gfs2: Fix end-of-file handling in gfs2_page_mkwrite
46 162524005151 4525c2f5b46f [SAME] Rafael Aquini's slab instrumentation
47 a06a5b7dea02 [ ] GFS2: Add go_get_holdtime to gl_ops
48 8ba93c796d5c [ ] gfs2: introduce new function remaining_hold_time and use it in dq
49 e8b5ff851bb9 [ ] gfs2: Allow rgrps to have a minimum hold time

Missing from recovery25:
The missing:
Compare script generated at: /tmp/compare_mismatches.sh

6. gitlog.find

Finally, I have gitlog.find , a script to help me identify where the upstream versions of my patches are and each patch's current status. It does this by matching the patch description. It also generates a compare script (again, using Kompare) to compare the current patch to the upstream counterpart:

$ gitlog.find
-----------------------[ recovery25 - 50 patches ]-----------------------
(snip)
11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
lo 5bcb9be74b2a Bob Peterson gfs2: drain the ail2 list after io errors
10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
fn 2c47c1be51fb Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
lo feb7ea639472 Bob Peterson gfs2: Do proper error checking for go_sync family of glops
8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
ms f3915f83e84c Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
ms 35af80aef99b Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
fn 39c3a948ecf6 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
fn f53056c43063 Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
fn 184b4e60853d Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
Not found upstream
2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
Not found upstream
^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in dq
Not found upstream
H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time
Not found upstream
Compare script generated: /tmp/compare_upstream.sh

The patches are shown on two lines: the first is your current patch, and the second is the corresponding upstream patch, prefixed with a two-character abbreviation indicating its upstream status -- lo (found in the local for-next branch), fn (in for-next for the next merge window), or ms (already in Linus's master). Patches with no upstream counterpart are marked "Not found upstream".

Some of my scripts make assumptions based on how I normally work with Git. For example, when searching for upstream patches, it uses my well-known Git tree's location. So, you will need to adjust or improve them to suit your conditions. The gitlog.find script is designed to locate GFS2 and DLM patches only, so unless you're a GFS2 developer, you will want to customize it to the components that interest you.

Source code

Here is the source for these scripts.

1. gitlog

#!/bin/bash
branch=$1

if test "x$branch" = x; then
  branch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi

patches=0
tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

LIST=`git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 | paste -s -d ' '`
for i in $LIST; do patches=$(echo $patches + 1 | bc); done

if [[ $branch =~ .*for-next.* ]]
then
  start=HEAD
# start=origin/for-next
else
  start=origin/master
fi

tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

/usr/bin/echo "-----------------------[" $branch "]-----------------------"
patches=$(echo $patches - 1 | bc);
for i in $LIST; do
  if [ $patches -eq 1 ]; then
    cnt=" ^"
  elif [ $patches -eq 0 ]; then
    cnt=" H"
  else
    if [ $patches -lt 10 ]; then
      cnt=" $patches"
    else
      cnt="$patches"
    fi
  fi
  /usr/bin/git show --abbrev-commit -s --pretty=format:"$cnt %h %<|(32)%an %s %n" $i
  patches=$(echo $patches - 1 | bc)
done
#git log --reverse --abbrev-commit --pretty=format:"%h %<|(32)%an %s" $tracking..$branch
#git log --reverse --abbrev-commit --pretty=format:"%h %<|(32)%an %s" ^origin/master ^linux-gfs2/for-next $branch

2. gitlog.id

#!/bin/bash
branch=$1

if test "x$branch" = x; then
  branch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi

tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

/usr/bin/echo "-----------------------[" $branch "]-----------------------"
git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 | paste -s -d ' '

3. gitlog.id2

#!/bin/bash
branch=$1

if test "x$branch" = x; then
  branch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi

tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`
git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 | paste -s -d ' '

4. gitlog.grep

#!/bin/bash
param1=$1
param2=$2

if test "x$param2" = x; then
  branch=`git branch -a | grep "*" | cut -d ' ' -f2`
  string=$param1
else
  branch=$param1
  string=$param2
fi

patches=0
tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

LIST=`git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 | paste -s -d ' '`
for i in $LIST; do patches=$(echo $patches + 1 | bc); done
/usr/bin/echo "-----------------------[" $branch "-" $patches "patches ]-----------------------"
patches=$(echo $patches - 1 | bc);
for i in $LIST; do
  if [ $patches -eq 1 ]; then
    cnt=" ^"
  elif [ $patches -eq 0 ]; then
    cnt=" H"
  else
    if [ $patches -lt 10 ]; then
      cnt=" $patches"
    else
      cnt="$patches"
    fi
  fi
  /usr/bin/git show --abbrev-commit -s --pretty=format:"$cnt %h %<|(32)%an %s" $i
  /usr/bin/git show --pretty=email --patch-with-stat $i | grep -n "$string"
  patches=$(echo $patches - 1 | bc)
done

5. gitbranchcmp3

#!/bin/bash
#
# gitbranchcmp3 <old branch> [<new_branch>]
#
oldbranch=$1
newbranch=$2
script=/tmp/compare_mismatches.sh

/usr/bin/rm -f $script
echo "#!/bin/bash" > $script
/usr/bin/chmod 755 $script
echo "# Generated by gitbranchcmp3.sh" >> $script
echo "# Run this script to compare the mismatched patches" >> $script
echo " " >> $script
echo "function compare_them()" >> $script
echo "{" >> $script
echo "  git show --pretty=email --patch-with-stat \$1 > /tmp/gronk1" >> $script
echo "  git show --pretty=email --patch-with-stat \$2 > /tmp/gronk2" >> $script
echo "  kompare /tmp/gronk1 /tmp/gronk2" >> $script
echo "}" >> $script
echo " " >> $script

if test "x$newbranch" = x; then
  newbranch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi

tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

declare -a oldsha1s=(`git log --reverse --abbrev-commit --pretty=oneline $tracking..$oldbranch | cut -d ' ' -f1 | paste -s -d ' '`)
declare -a newsha1s=(`git log --reverse --abbrev-commit --pretty=oneline $tracking..$newbranch | cut -d ' ' -f1 | paste -s -d ' '`)

#echo "old: " $oldsha1s
oldcount=${#oldsha1s[@]}
echo "Branch $oldbranch has $oldcount patches"
oldcount=$(echo $oldcount - 1 | bc)
#for o in `seq 0 ${#oldsha1s[@]}`; do
#  echo -n ${oldsha1s[$o]} " "
#  desc=`git show $i | head -5 | tail -1|cut -b5-`
#done

#echo "new: " $newsha1s
newcount=${#newsha1s[@]}
echo "Branch $newbranch has $newcount patches"
newcount=$(echo $newcount - 1 | bc)
#for o in `seq 0 ${#newsha1s[@]}`; do
#  echo -n ${newsha1s[$o]} " "
#  desc=`git show $i | head -5 | tail -1|cut -b5-`
#done
echo

for new in `seq 0 $newcount`; do
  newsha=${newsha1s[$new]}
  newdesc=`git show $newsha | head -5 | tail -1 | cut -b5-`
  oldsha="            "
  same="[     ]"
  for old in `seq 0 $oldcount`; do
    if test "${oldsha1s[$old]}" = "match"; then
      continue;
    fi
    olddesc=`git show ${oldsha1s[$old]} | head -5 | tail -1 | cut -b5-`
    if test "$olddesc" = "$newdesc"; then
      oldsha=${oldsha1s[$old]}
      #echo $oldsha
      git show $oldsha | tail -n +2 | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk1
      git show $newsha | tail -n +2 | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk2
      diff /tmp/gronk1 /tmp/gronk2 &> /dev/null
      if [ $? -eq 0 ]; then
        # No differences
        same="[SAME]"
        oldsha1s[$old]="match"
        break
      fi
      git show $oldsha | sed -n '/diff/,$p' | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk1
      git show $newsha | sed -n '/diff/,$p' | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk2
      diff /tmp/gronk1 /tmp/gronk2 &> /dev/null
      if [ $? -eq 0 ]; then
        # Differences in comments only
        same="[same]"
        oldsha1s[$old]="match"
        break
      fi
      oldsha1s[$old]="match"
      echo "compare_them $oldsha $newsha" >> $script
    fi
  done
  echo "$new $oldsha $newsha $same $newdesc"
done

echo
echo "Missing from $newbranch:"
the_missing=""
# Now run through the olds we haven't matched up
for old in `seq 0 $oldcount`; do
  if test ${oldsha1s[$old]} != "match"; then
    olddesc=`git show ${oldsha1s[$old]} | head -5 | tail -1 | cut -b5-`
    echo "${oldsha1s[$old]} $olddesc"
    the_missing=`echo "$the_missing ${oldsha1s[$old]}"`
  fi
done

echo "The missing: " $the_missing
echo "Compare script generated at: $script"
#git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 |paste -s -d ' '

6. gitlog.find

#!/bin/bash
#
# Find the upstream equivalent patch
#
# gitlog.find
#
cwd=$PWD
param1=$1
ubranch=$2
patches=0
script=/tmp/compare_upstream.sh
echo "#!/bin/bash" > $script
/usr/bin/chmod 755 $script
echo "# Generated by gitbranchcmp3.sh" >> $script
echo "# Run this script to compare the mismatched patches" >> $script
echo " " >> $script
echo "function compare_them()" >> $script
echo "{" >> $script
echo "  cwd=$PWD" >> $script
echo "  git show --pretty=email --patch-with-stat \$2 > /tmp/gronk2" >> $script
echo "  cd ~/linux.git/fs/gfs2" >> $script
echo "  git show --pretty=email --patch-with-stat \$1 > /tmp/gronk1" >> $script
echo "  cd $cwd" >> $script
echo "  kompare /tmp/gronk1 /tmp/gronk2" >> $script
echo "}" >> $script
echo " " >> $script

#echo "Gathering upstream patch info. Please wait."
branch=`git branch -a | grep "*" | cut -d ' ' -f2`
tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

cd ~/linux.git
if test "X${ubranch}" = "X"; then
  ubranch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi
utracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`
#
# gather a list of gfs2 patches from master just in case we can't find it
#
#git log --abbrev-commit --pretty=format:" %h %<|(32)%an %s" master |grep -i -e "gfs2" -e "dlm" > /tmp/gronk
git log --reverse --abbrev-commit --pretty=format:"ms %h %<|(32)%an %s" master fs/gfs2/ > /tmp/gronk.gfs2
# ms = in Linus's master
git log --reverse --abbrev-commit --pretty=format:"ms %h %<|(32)%an %s" master fs/dlm/ > /tmp/gronk.dlm

cd $cwd
LIST=`git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 | paste -s -d ' '`
for i in $LIST; do patches=$(echo $patches + 1 | bc); done
/usr/bin/echo "-----------------------[" $branch "-" $patches "patches ]-----------------------"
patches=$(echo $patches - 1 | bc);
for i in $LIST; do
  if [ $patches -eq 1 ]; then
    cnt=" ^"
  elif [ $patches -eq 0 ]; then
    cnt=" H"
  else
    if [ $patches -lt 10 ]; then
      cnt=" $patches"
    else
      cnt="$patches"
    fi
  fi
  /usr/bin/git show --abbrev-commit -s --pretty=format:"$cnt %h %<|(32)%an %s" $i
  desc=`/usr/bin/git show --abbrev-commit -s --pretty=format:"%s" $i`
  cd ~/linux.git
  cmp=1
  up_eq=`git log --reverse --abbrev-commit --pretty=format:"lo %h %<|(32)%an %s" $utracking..$ubranch | grep "$desc"`
  # lo = in local for-next
  if test "X$up_eq" = "X"; then
    up_eq=`git log --reverse --abbrev-commit --pretty=format:"fn %h %<|(32)%an %s" master..$utracking | grep "$desc"`
    # fn = in for-next for next merge window
    if test "X$up_eq" = "X"; then
      up_eq=`grep "$desc" /tmp/gronk.gfs2`
      if test "X$up_eq" = "X"; then
        up_eq=`grep "$desc" /tmp/gronk.dlm`
        if test "X$up_eq" = "X"; then
          up_eq="   Not found upstream"
          cmp=0
        fi
      fi
    fi
  fi
  echo "$up_eq"
  if [ $cmp -eq 1 ]; then
    UP_SHA1=`echo $up_eq | cut -d ' ' -f2`
    echo "compare_them $UP_SHA1 $i" >> $script
  fi
  cd $cwd
  patches=$(echo $patches - 1 | bc)
done
echo "Compare script generated: $script"

[Jul 12, 2020] How to add a Help facility to your Bash program - Opensource.com

Jul 12, 2020 | opensource.com

In the third article in this series, learn about using functions as you create a simple Help facility for your Bash script.

20 Dec 2019 | David Both (Correspondent)

In the first article in this series, you created a very small, one-line Bash script and explored the reasons for creating shell scripts and why they are the most efficient option for the system administrator, rather than compiled programs. In the second article , you began the task of creating a fairly simple template that you can use as a starting point for other Bash programs, then explored ways to test it.

This third of the four articles in this series explains how to create and use a simple Help function. While creating your Help facility, you will also learn about using functions and how to handle command-line options such as -h .

Why Help?

Even fairly simple Bash programs should have some sort of Help facility, even if it is fairly rudimentary. Many of the Bash shell programs I write are used so infrequently that I forget the exact syntax of the command I need. Others are so complex that I need to review the options and arguments even when I use them frequently.

Having a built-in Help function allows you to view those things without having to inspect the code itself. A good and complete Help facility is also a part of program documentation.

About functions

Shell functions are lists of Bash program statements that are stored in the shell's environment and can be executed, like any other command, by typing their name at the command line. Shell functions may also be known as procedures or subroutines, depending upon which other programming language you are using.

Functions are called in scripts or from the command-line interface (CLI) by using their names, just as you would for any other command. In a CLI program or a script, the commands in the function execute when they are called, then the program flow sequence returns to the calling entity, and the next series of program statements in that entity executes.

The syntax of a function is:

FunctionName(){program statements}

Explore this by creating a simple function at the CLI. (The function is stored in the shell environment for the shell instance in which it is created.) You are going to create a function called hw , which stands for "hello world." Enter the following code at the CLI and press Enter . Then enter hw as you would any other shell command:

[student@testvm1 ~]$ hw(){ echo "Hi there kiddo"; }
[student@testvm1 ~]$ hw
Hi there kiddo
[student@testvm1 ~]$

OK, so I am a little tired of the standard "Hello world" starter. Now, list all of the currently defined functions. There are a lot of them, so I am showing just the new hw function. When it is called from the command line or within a program, a function performs its programmed task and then exits and returns control to the calling entity, the command line, or the next Bash program statement in a script after the calling statement:

[student@testvm1 ~]$ declare -f | less
<snip>
hw ()
{
    echo "Hi there kiddo"
}
<snip>

Remove that function because you do not need it anymore. You can do that with the unset command:

[student@testvm1 ~]$ unset -f hw ; hw
bash: hw: command not found
[student@testvm1 ~]$

Creating the Help function

Open the hello program in an editor and add the Help function below to the hello program code after the copyright statement but before the echo "Hello world!" statement. This Help function will display a short description of the program, a syntax diagram, and short descriptions of the available options. Add a call to the Help function to test it and some comment lines that provide a visual demarcation between the functions and the main portion of the program:

################################################################################
# Help #
################################################################################
Help ()
{
# Display Help
echo "Add description of the script functions here."
echo
echo "Syntax: scriptTemplate [-g|h|v|V]"
echo "options:"
echo "g Print the GPL license notification."
echo "h Print this Help."
echo "v Verbose mode."
echo "V Print software version and exit."
echo
}

################################################################################
################################################################################
# Main program #
################################################################################
################################################################################

Help
echo "Hello world!"

The options described in this Help function are typical for the programs I write, although none are in the code yet. Run the program to test it:

[student@testvm1 ~]$ ./hello
Add description of the script functions here.

Syntax: scriptTemplate [ -g | h | v | V ]
options:
g Print the GPL license notification.
h Print this Help.
v Verbose mode.
V Print software version and exit.

Hello world!
[student@testvm1 ~]$

Because you have not added any logic to display Help only when you need it, the program will always display the Help. Since the function is working correctly, read on to add some logic to display the Help only when the -h option is used when you invoke the program at the command line.

Handling options

A Bash script's ability to handle command-line options such as -h gives some powerful capabilities to direct the program and modify what it does. In the case of the -h option, you want the program to print the Help text to the terminal session and then quit without running the rest of the program. The ability to process options entered at the command line can be added to the Bash script using the while command (see How to program with Bash: Loops to learn more about while ) in conjunction with the getopts and case commands.

The getopts command reads any and all options specified at the command line and creates a list of those options. In the code below, the while command loops through the list of options by setting the variable $option for each. The case statement is used to evaluate each option in turn and execute the statements in the corresponding stanza. The while statement will continue to evaluate the list of options until they have all been processed or it encounters an exit statement, which terminates the program.

Be sure to delete the Help function call just before the echo "Hello world!" statement so that the main body of the program now looks like this:

################################################################################
################################################################################
# Main program #
################################################################################
################################################################################
################################################################################
# Process the input options. Add options as needed. #
################################################################################
# Get the options
while getopts ":h" option; do
case $option in
h ) # display Help
Help
exit ;;
esac
done

echo "Hello world!"

Notice the double semicolon at the end of the exit statement in the case option for -h . This is required for each option added to this case statement to delineate the end of each option.

Testing

Testing is now a little more complex. You need to test your program with a number of different options -- and no options -- to see how it responds. First, test with no options to ensure that it prints "Hello world!" as it should:

[student@testvm1 ~]$ ./hello
Hello world!

That works, so now test the logic that displays the Help text:

[student@testvm1 ~]$ ./hello -h
Add description of the script functions here.

Syntax: scriptTemplate [ -g | h | t | v | V ]
options:
g Print the GPL license notification.
h Print this Help.
v Verbose mode.
V Print software version and exit.

That works as expected, so try some testing to see what happens when you enter some unexpected options:

[student@testvm1 ~]$ ./hello -x
Hello world!
[student@testvm1 ~]$ ./hello -q
Hello world!
[student@testvm1 ~]$ ./hello -lkjsahdf
Add description of the script functions here.

Syntax: scriptTemplate [ -g | h | t | v | V ]
options:
g Print the GPL license notification.
h Print this Help.
v Verbose mode.
V Print software version and exit.

[student@testvm1 ~]$

The program simply ignores any options for which there is no specific match, without generating any errors. But notice the last entry (with -lkjsahdf for options): because there is an h in the list of options, the program recognizes it and prints the Help text. This testing shows that the program cannot yet detect incorrect input and terminate when any is found.

You can add another case stanza to the case statement to match any option that doesn't have an explicit match. This general case will match anything you have not provided a specific match for. The case statement now looks like this, with the catch-all match of \? as the last case. Any additional specific cases must precede this final one:

while getopts ":h" option; do
case $option in
h ) # display Help
Help
exit ;;
\? ) # incorrect option
echo "Error: Invalid option"
exit ;;
esac
done

Test the program again using the same options as before and see how it works now.

Where you are

You have accomplished a good amount in this article by adding the capability to process command-line options and a Help procedure. Your Bash script now looks like this:

#!/usr/bin/bash
################################################################################
# scriptTemplate #
# #
# Use this template as the beginning of a new program. Place a short #
# description of the script here. #
# #
# Change History #
# 11/11/2019 David Both Original code. This is a template for creating #
# new Bash shell scripts. #
# Add new history entries as needed. #
# #
# #
################################################################################
################################################################################
################################################################################
# #
# Copyright (C) 2007, 2019 David Both #
# LinuxGeek46@both.org #
# #
# This program is free software; you can redistribute it and/or modify #
# it under the terms of the GNU General Public License as published by #
# the Free Software Foundation; either version 2 of the License, or #
# (at your option) any later version. #
# #
# This program is distributed in the hope that it will be useful, #
# but WITHOUT ANY WARRANTY; without even the implied warranty of #
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the #
# GNU General Public License for more details. #
# #
# You should have received a copy of the GNU General Public License #
# along with this program; if not, write to the Free Software #
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA #
# #
################################################################################
################################################################################
################################################################################

################################################################################
# Help #
################################################################################
Help ()
{
# Display Help
echo "Add description of the script functions here."
echo
echo "Syntax: scriptTemplate [-g|h|t|v|V]"
echo "options:"
echo "g Print the GPL license notification."
echo "h Print this Help."
echo "v Verbose mode."
echo "V Print software version and exit."
echo
}

################################################################################
################################################################################
# Main program #
################################################################################
################################################################################
################################################################################
# Process the input options. Add options as needed. #
################################################################################
# Get the options
while getopts ":h" option; do
case $option in
h ) # display Help
Help
exit ;;
\? ) # incorrect option
echo "Error: Invalid option"
exit ;;
esac
done

echo "Hello world!"

Be sure to test this version of the program very thoroughly. Use random inputs and see what happens. You should also try testing valid and invalid options without using the dash ( - ) in front.

Next time

In this article, you added a Help function as well as the ability to process command-line options to display it selectively. The program is getting a little more complex, so testing is becoming more important and requires more test paths in order to be complete.

The next article will look at initializing variables and doing a bit of sanity checking to ensure that the program will run under the correct set of conditions.

[Jul 12, 2020] Navigating the Bash shell with pushd and popd - Opensource.com

Notable quotes:
"... directory stack ..."
Jul 12, 2020 | opensource.com

Pushd and popd are the fastest navigational commands you've never heard of.

07 Aug 2019 | Seth Kenlon (Red Hat)

The pushd and popd commands are built-in features of the Bash shell to help you "bookmark" directories for quick navigation between locations on your hard drive. You might already feel that the terminal is an impossibly fast way to navigate your computer; in just a few key presses, you can go anywhere on your hard drive, attached storage, or network share. But that speed can break down when you find yourself going back and forth between directories, or when you get "lost" within your filesystem. Those are precisely the problems pushd and popd can help you solve.

pushd

At its most basic, pushd is a lot like cd . It takes you from one directory to another. Assume you have a directory called one , which contains a subdirectory called two , which contains a subdirectory called three , and so on. If your current working directory is one , then you can move to two or three or anywhere with the cd command:

$ pwd
one
$ cd two/three
$ pwd
three

You can do the same with pushd :

$ pwd
one
$ pushd two/three
~/one/two/three ~/one
$ pwd
three

The end result of pushd is the same as cd , but there's an additional intermediate result: pushd echos your destination directory and your point of origin. This is your directory stack , and it is what makes pushd unique.

Stacks

A stack, in computer terminology, refers to a collection of elements. In the context of this command, the elements are directories you have recently visited by using the pushd command. You can think of it as a history or a breadcrumb trail.

You can move all over your filesystem with pushd ; each time, your previous and new locations are added to the stack:

$ pushd four
~/one/two/three/four ~/one/two/three ~/one
$ pushd five
~/one/two/three/four/five ~/one/two/three/four ~/one/two/three ~/one

Navigating the stack

Once you've built up a stack, you can use it as a collection of bookmarks or fast-travel waypoints. For instance, assume that during a session you're doing a lot of work within the ~/one/two/three/four/five directory structure of this example. You know you've been to one recently, but you can't remember where it's located in your pushd stack. You can view your stack with the +0 (that's a plus sign followed by a zero) argument, which tells pushd not to change to any directory in your stack, but also prompts pushd to echo your current stack:

$ pushd +0
~/one/two/three/four ~/one/two/three ~/one ~/one/two/three/four/five

Alternatively, you can view the stack with the dirs command, and you can see the index number for each directory by using the -v option:

$ dirs -v
0  ~/one/two/three/four
1  ~/one/two/three
2  ~/one
3  ~/one/two/three/four/five

The first entry in your stack is your current location. You can confirm that with pwd as usual:

$ pwd
~/one/two/three/four

Starting at 0 (your current location and the first entry of your stack), the second element in your stack is ~/one , which is your desired destination. You can move forward in your stack using the +2 option:

$ pushd +2
~/one ~/one/two/three/four/five ~/one/two/three/four ~/one/two/three
$ pwd
~/one

This changes your working directory to ~/one and also has shifted the stack so that your new location is at the front.

You can also move backward in your stack. For instance, to quickly get to ~/one/two/three given the example output, you can move back by one, keeping in mind that pushd starts with 0:

$ pushd -0
~/one/two/three ~/one ~/one/two/three/four/five ~/one/two/three/four

Adding to the stack

You can continue to navigate your stack in this way, and it will remain a static listing of your recently visited directories. If you want to add a directory, just provide the directory's path. If a directory is new to the stack, it's added to the list just as you'd expect:

$ pushd /tmp
/tmp ~/one/two/three ~/one ~/one/two/three/four/five ~/one/two/three/four

But if it already exists in the stack, it's added a second time:

$ pushd ~/one
~/one /tmp ~/one/two/three ~/one ~/one/two/three/four/five ~/one/two/three/four

While the stack is often used as a list of directories you want quick access to, it is really a true history of where you've been. If you don't want a directory added redundantly to the stack, you must use the +N and -N notation.
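
In other words, rotating to an entry that is already on the stack avoids pushing a duplicate. A minimal sketch with a small, illustrative stack (not the exact one built above):

$ dirs -v
0  ~/one
1  /tmp
2  ~/one/two/three
$ pushd +2        # rotate to the existing entry instead of pushing it again
~/one/two/three ~/one /tmp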

Removing directories from the stack

Your stack is, obviously, not immutable. You can add to it with pushd or remove items from it with popd .

For instance, assume you have just used pushd to add ~/one to your stack, making ~/one your current working directory. To remove the first (or "zeroeth," if you prefer) element:

$ pwd
~/one
$ popd +0
/tmp ~/one/two/three ~/one ~/one/two/three/four/five ~/one/two/three/four
$ pwd
~/one

Of course, you can remove any element, starting your count at 0:

$ pwd
~/one
$ popd +2
/tmp ~/one/two/three ~/one/two/three/four/five ~/one/two/three/four
$ pwd
~/one

You can also use popd from the back of your stack, again starting with 0. For example, to remove the final directory from your stack:

$ popd -0
/tmp ~/one/two/three ~/one/two/three/four/five

When used like this, popd does not change your working directory. It only manipulates your stack.

Navigating with popd

The default behavior of popd , given no arguments, is to remove the first (zeroeth) item from your stack and make the next item your current working directory.

This is most useful as a quick-change command, when you are, for instance, working in two different directories and just need to duck away for a moment to some other location. You don't have to think about your directory stack if you don't need an elaborate history:

$ pwd
~/one
$ pushd ~/one/two/three/four/five
$ popd
$ pwd
~/one

You're also not required to use pushd and popd in rapid succession. If you use pushd to visit a different location, then get distracted for three hours chasing down a bug or doing research, you'll find your directory stack patiently waiting (unless you've ended your terminal session):

$ pwd
~/one
$ pushd /tmp
$ cd {/etc,/var,/usr}; sleep 2001
[...]
$ popd
$ pwd
~/one

Pushd and popd in the real world

The pushd and popd commands are surprisingly useful. Once you learn them, you'll find excuses to put them to good use, and you'll get familiar with the concept of the directory stack. Getting comfortable with pushd was what helped me understand git stash , which is entirely unrelated to pushd but similar in conceptual intangibility.

Using pushd and popd in shell scripts can be tempting, but generally, it's probably best to avoid them. They aren't portable outside of Bash and Zsh, and they can be obtuse when you're re-reading a script ( pushd +3 is less clear than cd $HOME/$DIR/$TMP or similar).
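
If you need the same "go there, do something, come back" behavior inside a script, a common portable alternative is to run the directory change in a subshell, so the calling shell never moves. A minimal sketch (the $workdir variable is only a placeholder for your own path):

(
  cd "$workdir" || exit 1   # the cd only affects this subshell
  make
)
# back here, the working directory of the script is unchanged
pwd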

Aside from these warnings, if you're a regular Bash or Zsh user, then you can and should try pushd and popd.
7 Comments


matt on 07 Aug 2019

Thank you for the write up for pushd and popd. I gotta remember to use these when I'm jumping around directories a lot. I got a hung up on a pushd example because my development work using arrays differentiates between the index and the count. In my experience, a zero-based array of A, B, C; C has an index of 2 and also is the third element. C would not be considered the second element cause that would be confusing it's index and it's count.

Seth Kenlon on 07 Aug 2019

Interesting point, Matt. The difference between count and index had not occurred to me, but I'll try to internalise it. It's a great distinction, so thanks for bringing it up!

Greg Pittman on 07 Aug 2019

This looks like a recipe for confusing myself.

Seth Kenlon on 07 Aug 2019

It can be, but start out simple: use pushd to change to one directory, and then use popd to go back to the original. Sort of a single-use bookmark system.

Then, once you're comfortable with pushd and popd, branch out and delve into the stack.

A tcsh shell I used at an old job didn't have pushd and popd, so I used to have functions in my .cshrc to mimic just the back-and-forth use.

Jake on 07 Aug 2019

"dirs" can be also used to view the stack. "dirs -v" helpfully numbers each directory with its index.

Seth Kenlon on 07 Aug 2019

Thanks for that tip, Jake. I arguably should have included that in the article, but I wanted to try to stay focused on just the two {push,pop}d commands. Didn't occur to me to casually mention one use of dirs as you have here, so I've added it for posterity.

There's so much in the Bash man and info pages to talk about!

other_Stu on 11 Aug 2019

I use "pushd ." (dot for current directory) quite often. Like a working directory bookmark when you are several subdirectories deep somewhere, and need to cd to couple of other places to do some work or check something.
And you can use the cd command with your DIRSTACK as well, thanks to tilde expansion.
cd ~+3 will take you to the same directory as pushd +3 would.

[Jul 12, 2020] An introduction to parameter expansion in Bash - Opensource.com

Jul 12, 2020 | opensource.com

Get started with this quick how-to guide on expansion modifiers that transform Bash variables and other parameters into powerful tools beyond simple value stores.

13 Jun 2017 | James Pannacciulli

In Bash, entities that store values are known as parameters. Their values can be strings or arrays with regular syntax, or they can be integers or associative arrays when special attributes are set with the declare built-in. There are three types of parameters: positional parameters, special parameters, and variables.

For the sake of brevity, this article will focus on a few classes of expansion methods available for string variables, though these methods apply equally to other types of parameters.

Variable assignment and unadulterated expansion

When assigning a variable, its name must be comprised solely of alphanumeric and underscore characters, and it may not begin with a numeral. There may be no spaces around the equal sign; the name must immediately precede it and the value immediately follow:

$ variable_1="my content"

Storing a value in a variable is only useful if we recall that value later; in Bash, substituting a parameter reference with its value is called expansion. To expand a parameter, simply precede the name with the $ character, optionally enclosing the name in braces:

$ echo $variable_1 ${variable_1}
my content my content

Crucially, as shown in the above example, expansion occurs before the command is called, so the command never sees the variable name, only the text passed to it as an argument that resulted from the expansion. Furthermore, parameter expansion occurs before word splitting; if the result of expansion contains spaces, the expansion should be quoted to preserve parameter integrity, if desired:

$ printf "%s\n" ${variable_1} my content $ printf "%s\n" "${variable_1}" my content
Parameter expansion modifiers

Parameter expansion goes well beyond simple interpolation, however. Inside the braces of a parameter expansion, certain operators, along with their arguments, may be placed after the name, before the closing brace. These operators may invoke conditional, subset, substring, substitution, indirection, prefix listing, element counting, and case modification expansion methods, modifying the result of the expansion. With the exception of the reassignment operators ( = and := ), these operators only affect the expansion of the parameter without modifying the parameter's value for subsequent expansions.

About conditional, substring, and substitution parameter expansion operators
Conditional parameter expansion

Conditional parameter expansion allows branching on whether the parameter is unset, empty, or has content. Based on these conditions, the parameter can be expanded to its value, a default value, or an alternate value; throw a customizable error; or reassign the parameter to a default value. The following table shows the conditional parameter expansions -- each row shows a parameter expansion using an operator to potentially modify the expansion, with the columns showing the result of that expansion given the parameter's status as indicated in the column headers. Operators with the ':' prefix treat parameters with empty values as if they were unset.

parameter expansion    unset var   var=""      var="gnu"
${var-default}         default     --          gnu
${var:-default}        default     default     gnu
${var+alternate}       --          alternate   alternate
${var:+alternate}      --          --          alternate
${var?error}           error       --          gnu
${var:?error}          error       error       gnu

The = and := operators in the table function identically to - and :- , respectively, except that the = variants rebind the variable to the result of the expansion.
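
As a quick illustration of that difference (the variable name and the fallback value here are arbitrary):

$ unset var
$ echo "${var:-fallback}"
fallback
$ echo "var is still unset: '$var'"
var is still unset: ''
$ echo "${var:=fallback}"
fallback
$ echo "var is now assigned: '$var'"
var is now assigned: 'fallback'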

As an example, let's try opening a user's editor on a file specified by the OUT_FILE variable. If either the EDITOR environment variable or our OUT_FILE variable is not specified, we will have a problem. Using a conditional expansion, we can ensure that when the EDITOR variable is expanded, we get the specified value or at least a sane default:

$ echo ${EDITOR}
/usr/bin/vi
$ echo ${EDITOR:-$(which nano)}
/usr/bin/vi
$ unset EDITOR
$ echo ${EDITOR:-$(which nano)}
/usr/bin/nano

Building on the above, we can run the editor command and abort with a helpful error at runtime if there's no filename specified:

$ ${EDITOR:-$(which nano)} ${OUT_FILE:?Missing filename}
bash: OUT_FILE: Missing filename
Substring parameter expansion

Parameters can be expanded to just part of their contents, either by offset or by removing content matching a pattern. When specifying a substring offset, a length may optionally be specified. If running Bash version 4.2 or greater, negative numbers may be used as offsets from the end of the string. Note the parentheses used around the negative offset, which ensure that Bash does not parse the expansion as having the conditional default expansion operator from above:

$ location="CA 90095" $ echo "Zip Code: ${location:3}" Zip Code: 90095 $ echo "Zip Code: ${location:(-5)}" Zip Code: 90095 $ echo "State: ${location:0:2}" State: CA

Another way to take a substring is to remove characters from the string matching a pattern, either from the left edge with the # and ## operators or from the right edge with the % and %% operators. A useful mnemonic is that # appears left of a comment and % appears right of a number. When the operator is doubled, it matches greedily, as opposed to the single version, which removes the most minimal set of characters matching the pattern.

var="open source"
parameter expansion offset of 5
length of 4
${var:offset} source
${var:offset:length} sour
pattern of *o?
${var#pattern} en source
${var##pattern} rce
pattern of ?e*
${var%pattern} open sour
${var%pattern} o

The pattern-matching used is the same as with filename globbing: * matches zero or more of any character, ? matches exactly one of any character, [...] brackets introduce a character class match against a single character, supporting negation ( ^ ), as well as the posix character classes, e.g. [[:alnum:]] . By excising characters from our string in this manner, we can take a substring without first knowing the offset of the data we need:

$ echo $PATH
/usr/local/bin:/usr/bin:/bin
$ echo "Lowest priority in PATH: ${PATH##*:}"
Lowest priority in PATH: /bin
$ echo "Everything except lowest priority: ${PATH%:*}"
Everything except lowest priority: /usr/local/bin:/usr/bin
$ echo "Highest priority in PATH: ${PATH%%:*}"
Highest priority in PATH: /usr/local/bin
Substitution in parameter expansion

The same types of patterns are used for substitution in parameter expansion. Substitution is introduced with the / or // operators, followed by two arguments separated by another / representing the pattern and the string to substitute. The pattern matching is always greedy, so the doubled version of the operator, in this case, causes all matches of the pattern to be replaced in the variable's expansion, while the singleton version replaces only the leftmost.

var="free and open"
parameter expansion pattern of [[:space:]]
string of _
${var/pattern/string} free_and open
${var//pattern/string} free_and_open
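
The same rows, run on the command line (values chosen to mirror the table above):

$ var="free and open"
$ echo "${var/[[:space:]]/_}"
free_and open
$ echo "${var//[[:space:]]/_}"
free_and_open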

The wealth of parameter expansion modifiers transforms Bash variables and other parameters into powerful tools beyond simple value stores. At the very least, it is important to understand how parameter expansion works when reading Bash scripts, but I suspect that not unlike myself, many of you will enjoy the conciseness and expressiveness that these expansion modifiers bring to your scripts as well as your interactive sessions.

[Jul 12, 2020] A sysadmin's guide to Bash by Maxim Burgerhout

Jul 12, 2020 | opensource.com

Use aliases

... ... ...

Make your root prompt stand out

... ... ...

Control your history

You probably know that when you press the Up arrow key in Bash, you can see and reuse all (well, many) of your previous commands. That is because those commands have been saved to a file called .bash_history in your home directory. That history file comes with a bunch of settings and commands that can be very useful.

First, you can view your entire recent command history by typing history , or you can limit it to your last 30 commands by typing history 30 . But that's pretty vanilla. You have more control over what Bash saves and how it saves it.

For example, if you add the following to your .bashrc, any commands that start with a space will not be saved to the history list:

HISTCONTROL=ignorespace

This can be useful if you need to pass a password to a command in plaintext. (Yes, that is horrible, but it still happens.)

If you don't want a frequently executed command to clutter your history with duplicate entries, use:

HISTCONTROL=ignorespace:erasedups

With this, every time you use a command, all its previous occurrences are removed from the history file, and only the last invocation is saved to your history list.

A history setting I particularly like is the HISTTIMEFORMAT setting. This will prepend all entries in your history file with a timestamp. For example, I use:

HISTTIMEFORMAT="%F %T  "

When I type history 5 , I get nice, complete information, like this:

1009  2018-06-11 22:34:38 cat /etc/hosts
1010  2018-06-11 22:34:40 echo $foo
1011  2018-06-11 22:34:42 echo $bar
1012  2018-06-11 22:34:44 ssh myhost
1013  2018-06-11 22:34:55 vim .bashrc

That makes it a lot easier to browse my command history and find the one I used two days ago to set up an SSH tunnel to my home lab (which I forget again, and again, and again ).

Best Bash practices

I'll wrap this up with my top 11 list of the best (or good, at least; I don't claim omniscience) practices when writing Bash scripts.

  1. Bash scripts can become complicated and comments are cheap. If you wonder whether to add a comment, add a comment. If you return after the weekend and have to spend time figuring out what you were trying to do last Friday, you forgot to add a comment.

  2. Wrap all your variable names in curly braces, like ${myvariable} . Making this a habit makes things like ${variable}_suffix possible and improves consistency throughout your scripts.
  3. Do not use backticks when evaluating an expression; use the $() syntax instead. So use:
    for file in $(ls); do
    
    not
    for file in `ls`; do
    
    The former option is nestable, more easily readable, and keeps the general sysadmin population happy. Do not use backticks.
  4. Consistency is good. Pick one style of doing things and stick with it throughout your script. Obviously, I would prefer if people picked the $() syntax over backticks and wrapped their variables in curly braces. I would prefer it if people used two or four spaces -- not tabs -- to indent, but even if you choose to do it wrong, do it wrong consistently.
  5. Use the proper shebang for a Bash script. As I'm writing Bash scripts with the intention of only executing them with Bash, I most often use #!/usr/bin/bash as my shebang. Do not use #!/bin/sh or #!/usr/bin/sh . Your script will execute, but it'll run in compatibility mode -- potentially with lots of unintended side effects. (Unless, of course, compatibility mode is what you want.)
  6. When comparing strings, it's a good idea to quote your variables in if-statements, because if your variable is empty, Bash will throw an error for lines like these:
    if [ ${myvar} == "foo" ] ; then
      echo "bar"
    fi
    And will evaluate to false for a line like this:
    if [ "${myvar}" == "foo" ] ; then
      echo "bar"
    fi
    Also, if you are unsure about the contents of a variable (e.g., when you are parsing user input), quote your variables to prevent interpretation of some special characters and make sure the variable is considered a single word, even if it contains whitespace.
  7. This is a matter of taste, I guess, but I prefer using the double equals sign ( == ) even when comparing strings in Bash. It's a matter of consistency, and even though -- for string comparisons only -- a single equals sign will work, my mind immediately goes "single equals is an assignment operator!"
  8. Use proper exit codes. Make sure that if your script fails to do something, you present the user with a written failure message (preferably with a way to fix the problem) and send a non-zero exit code:
    # we have failed
    echo "Process has failed to complete, you need to manually restart the whatchamacallit"
    exit 1
    This makes it easier to programmatically call your script from yet another script and verify its successful completion.
  9. Use Bash's built-in mechanisms to provide sane defaults for your variables or throw errors if variables you expect to be defined are not defined:
    # this sets the value of $myvar to redhat, and prints 'redhat'
    echo ${myvar:=redhat}
    # this throws an error reading 'The variable myvar is undefined, dear reader' if $myvar is undefined
    ${myvar:?The variable myvar is undefined, dear reader}
  10. Especially if you are writing a large script, and especially if you work on that large script with others, consider using the local keyword when defining variables inside functions. The local keyword creates a local variable, that is, one that's visible only within that function. This limits the possibility of clashing variables.
  11. Every sysadmin must do it sometimes: debug something on a console, either a real one in a data center or a virtual one through a virtualization platform. If you have to debug a script that way, you will thank yourself for remembering this: Do not make the lines in your scripts too long!

    On many systems, the default width of a console is still 80 characters. If you need to debug a script on a console and that script has very long lines, you'll be a sad panda. Besides, a script with shorter lines -- the default is still 80 characters -- is a lot easier to read and understand in a normal editor, too!
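
Pulling several of these practices together, here is a minimal, illustrative sketch; the file names, paths, and messages are invented for the example and are not from the article:

#!/usr/bin/bash
# Illustrative sketch only: copy a file into a dated backup directory.

# Throw a readable error if no file was given; fall back to a default destination.
src="${1:?Usage: backup.sh <file> [destination]}"
dest="${2:-/tmp/backup}"

stamp="$(date +%F)"                 # $() instead of backticks
mkdir -p "${dest}/${stamp}"

# Quote variables so paths containing spaces survive word splitting.
if ! cp -a "${src}" "${dest}/${stamp}/"; then
    echo "Backup of ${src} failed; check that it exists and that ${dest} is writable"
    exit 1                          # non-zero exit code so callers can detect failure
fi

echo "Backed up ${src} to ${dest}/${stamp}/"
exit 0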


I truly love Bash. I can spend hours writing about it or exchanging nice tricks with fellow enthusiasts. Make sure you drop your favorites in the comments!

[Jul 12, 2020] My favorite Bash hacks

Jan 09, 2020 | opensource.com

When you work with computers all day, it's fantastic to find repeatable commands and tag them for easy use later on. They all sit there, tucked away in ~/.bashrc (or ~/.zshrc for Zsh users ), waiting to help improve your day!

In this article, I share some of my favorite of these helper commands for things I forget a lot, in hopes that they will save you, too, some heartache over time.

Say when it's over

When I'm using longer-running commands, I often multitask and then have to go back and check if the action has completed. But not anymore, with this helpful invocation of say (this is on MacOS; change for your local equivalent):

function looooooooong {
START=$(date +%s.%N)
"$@"                     # run the command exactly as given, preserving its arguments
EXIT_CODE=$?
END=$(date +%s.%N)
DIFF=$(echo "$END - $START" | bc)
RES=$(python -c "diff = $DIFF; min = int(diff / 60); print('%s min' % min)")
result="$1 completed in $RES, exit code $EXIT_CODE."
echo -e "\n⏰ $result"
( say -r 250 "$result" 2>&1 > /dev/null & )
}

This command marks the start and end time of a command, calculates the minutes it takes, and speaks the command invoked, the time taken, and the exit code. I find this super helpful when a simple console bell just won't do.

... ... ...

There are many Docker commands, but there are even more docker-compose commands. I used to forget the --rm flag, but not anymore with these useful aliases:

alias dc = "docker-compose"
alias dcr = "docker-compose run --rm"
alias dcb = "docker-compose run --rm --build" gcurl helper for Google Cloud

This one is relatively new to me, but it's heavily documented. gcurl is an alias that ensures you get all the correct flags when making local curl calls with authentication headers against Google Cloud APIs.

Git and ~/.gitignore

I work a lot in Git, so I have a special section dedicated to Git helpers.

One of my most useful helpers is one I use to clone GitHub repos. Instead of having to run:

git clone git@github.com:org/repo /Users/glasnt/git/org/repo

I set up a clone function:

clone(){
echo Cloning $1 to ~/git/$1
cd ~/git
git clone git@github.com:$1 $1
cd $1
}
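
Usage is then just (the org/repo name here is an invented example):

$ clone myorg/myrepo
Cloning myorg/myrepo to ~/git/myorg/myrepo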

... ... ...

[Jul 11, 2020] This MIT robot combats COVID-19 and may soon be in your grocery store

This is essentially a revamped robotic vacuum cleaner.
Jul 11, 2020 | finance.yahoo.com

A robot that neutralizes aerosolized forms of the coronavirus could soon be coming to a supermarket near you. MIT's Computer Science and Artificial Intelligence Laboratory team partnered with Ava Robotics to develop a device that can kill roughly 90% of COVID-19 on surfaces in a 4,000-square-foot space in 30 minutes.

"This is such an exciting idea to use the solution as a hands-free, safe way to neutralize dorms, hallways, hospitals, airports -- even airplanes," Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory at MIT, told Yahoo Finance's "The Ticker."

The key to disinfecting large spaces in a short amount of time is the UV-C light fixture designed at MIT . It uses short-wavelength ultraviolet light that eliminates microorganisms by breaking down their DNA. The UV-C light beam is attached to Ava Robotic's mobile base and can navigate a warehouse in a similar way as a self-driving car.

"The robot is controlled by some powerful algorithms that compute exactly where the robot has to go and how long it has to stay in order to neutralize the germs that exist in that particular part of the space," Rus said.

This robot can kill roughly 90% of COVID-19 on surfaces in a 4,000 square foot space in 30 minutes. (Courtesy: Alyssa Pierson, MIT CSAIL)

Currently, the robot is being tested at the Greater Boston Food Bank's shipping area and focuses on sanitizing products leaving the stockroom to reduce any potential threat of spreading the coronavirus into the community.

"Here, there was a unique opportunity to provide additional disinfecting power to their current workflow, and help reduce the risks of COVID-19 exposure," said Alyssa Pierson, CSAIL research scientist and technical lead of the UV-C lamp assembly.

But Rus explains implementing the robot in other locations does face some challenges. "The light emitted by the robot is dangerous to humans, so the robot cannot be in the same space as humans. Or, if people are around the robot, they have to wear protective gear," she added.

While Rus didn't provide a specific price tag, she said the cost of the robot is still high, which may be a hurdle for broad distribution. In the future, "Maybe you don't need to buy an entire robot set, you can book the robot for a few hours a day to take care of your space," she said.

McKenzie Stratigopoulos is a producer at Yahoo Finance. Follow her on Twitter: @mckenziestrat

[Jul 11, 2020] Own your own content Vallard's Blog

Jul 11, 2020 | benincosa.com

Posted on December 31, 2019 by Vallard

Reading this morning on Hacker News was this article on how the old Internet has died because we trusted all our content to Facebook and Google. While hyperbole abounds in the headline, and there are plenty of internet things out there that aren't owned by Google or Facebook (including this AWS free blog), it is true that much of the information and content is in the hands of a giant ad-serving service and a social echo chamber. (Well, that is probably too harsh.)

I heard this advice many years ago that you should own your own content. While there isn't much value in my trivial or obscure blog that nobody reads, it matters to me and is the reason I've run it on my own software, my own servers, for 10+ years. This blog, for example, runs on open source WordPress, a Linux server hosted by a friend, and managed by me as I login and make changes.

But of course, that is silly! Why not publish on Medium like everyone else? Or publish on someone else's service? Isn't that the point of the internet? Maybe. But in another sense, to me, the point is freedom. Freedom to express, do what I want, say what I will with no restrictions. The ability to own what I say and freedom from others monetizing me directly. There's no walled garden and anyone can access the content I write in my own little funzone.

While that may seem like ridiculousness, to me it's part of my hobby, and something I enjoy. In the next decade, whether this blog remains up or is shut down, is not dependent upon the fates of Google, Facebook, Amazon, nor Apple. It's dependent upon me, whether I want it up or not. If I change my views, I can delete it. It won't just sit on the Internet because someone else's terms of service agreement changed. I am in control, I am in charge. That to me is important and the reason I run this blog, don't use other people's services, and why I advocate for owning your own content.

[Jul 10, 2020] I/O reporting from the Linux command line by Tyler Carrigan

Jul 10, 2020 | www.redhat.com

Learn the iostat tool, its common command-line flags and options, and how to use it to better understand input/output performance in Linux.

Posted: July 9, 2020 | by Tyler Carrigan (Red Hat)

If you have followed my posts here at Enable Sysadmin, you know that I previously worked as a storage support engineer. One of my many tasks in that role was to help customers replicate backups from their production environments to dedicated backup storage arrays. Many times, customers would contact me concerned about the speed of the data transfer from production to storage.

Now, if you have ever worked in support, you know that there can be many causes for a symptom. However, the throughput of a system can have huge implications for massive data transfers. If all is well, we are talking hours, if not... I have seen a single replication job take months.

We know that Linux is loaded full of helpful tools for all manner of issues. For input/output monitoring, we use the iostat command. iostat is a part of the sysstat package and is not loaded on all distributions by default.

Installation and base run

I am using Red Hat Enterprise Linux 8 here and have included the install output below.

[ Want to try out Red Hat Enterprise Linux? Download it now for free. ]

NOTE: the command runs automatically after installation.

[root@rhel ~]# iostat
bash: iostat: command not found...
Install package 'sysstat' to provide command 'iostat'? [N/y] y
    
    
 * Waiting in queue... 
The following packages have to be installed:
lm_sensors-libs-3.4.0-21.20180522git70f7e08.el8.x86_64    Lm_sensors core libraries
sysstat-11.7.3-2.el8.x86_64    Collection of performance monitoring tools for Linux
Proceed with changes? [N/y] y
    
    
 * Waiting in queue... 
 * Waiting for authentication... 
 * Waiting in queue... 
 * Downloading packages... 
 * Requesting data... 
 * Testing changes... 
 * Installing packages... 
Linux 4.18.0-193.1.2.el8_2.x86_64 (rhel.test)     06/17/2020     _x86_64_    (4 CPU)
    
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.17    0.05    4.09    0.65    0.00   83.03
    
Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda             206.70      8014.01      1411.92    1224862     215798
sdc               0.69        20.39         0.00       3116          0
sdb               0.69        20.39         0.00       3116          0
dm-0            215.54      7917.78      1449.15    1210154     221488
dm-1              0.64        14.52         0.00       2220          0

If you run the base command without options, iostat displays CPU usage information. It also displays I/O stats for each partition on the system. The output includes totals, as well as per second values for both read and write operations. Also, note that the tps field is the total number of Transfers per second issued to a specific device.

The practical application is this: if you know what hardware is used, then you know what parameters it should be operating within. Once you combine this knowledge with the output of iostat , you can make changes to your system accordingly.

Interval runs

It can be useful in troubleshooting or data gathering phases to have a report run at a given interval. To do this, run the command with the interval (in seconds) at the end:

[root@rhel ~]# iostat -m 10
Linux 4.18.0-193.1.2.el8_2.x86_64 (rhel.test)     06/17/2020     _x86_64_    (4 CPU)
    
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.94    0.05    0.35    0.04    0.00   98.62
    
Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda              12.18         0.44         0.12       1212        323
sdc               0.04         0.00         0.00          3          0
sdb               0.04         0.00         0.00          3          0
dm-0             12.79         0.43         0.12       1197        329
dm-1              0.04         0.00         0.00          2          0
    
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.24    0.00    0.15    0.00    0.00   99.61
    
Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda               0.00         0.00         0.00          0          0
sdc               0.00         0.00         0.00          0          0
sdb               0.00         0.00         0.00          0          0
dm-0              0.00         0.00         0.00          0          0
dm-1              0.00         0.00         0.00          0          0
    
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.20    0.00    0.18    0.00    0.00   99.62
    
Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda               0.50         0.00         0.00          0          0
sdc               0.00         0.00         0.00          0          0
sdb               0.00         0.00         0.00          0          0
dm-0              0.50         0.00         0.00          0          0
dm-1              0.00         0.00         0.00          0          0

The above output is from a 30-second run.

You must use Ctrl + C to exit the run.
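
If you know in advance how long you want to sample, you can instead pass a count after the interval and iostat will stop on its own; for example, three reports at 10-second intervals:

[root@rhel ~]# iostat -m 10 3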

Easy reading

To clean up the output and make it easier to digest, use the following options:

-m changes the output to megabytes, which is a bit easier to read and is usually better understood by customers or managers.

[root@rhel ~]# iostat -m
Linux 4.18.0-193.1.2.el8_2.x86_64 (rhel.test)     06/17/2020     _x86_64_    (4 CPU)
    
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.51    0.09    0.55    0.07    0.00   97.77
    
Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda              22.23         0.81         0.21       1211        322
sdc               0.07         0.00         0.00          3          0
sdb               0.07         0.00         0.00          3          0
dm-0             23.34         0.80         0.22       1197        328
dm-1              0.07         0.00         0.00          2          0

-p allows you to specify a particular device to focus in on. You can combine this option with -m for a nice and tidy look at a particularly concerning device and its partitions.

[root@rhel ~]# iostat -m -p sda
Linux 4.18.0-193.1.2.el8_2.x86_64 (rhel.test)     06/17/2020     _x86_64_    (4 CPU)
    
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.19    0.07    0.45    0.06    0.00   98.24
    
Device             tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda              17.27         0.63         0.17       1211        322
sda2             16.83         0.62         0.17       1202        320
sda1              0.10         0.00         0.00          7          2
Advanced stats

If the default values just aren't getting you the information you need, you can use the -x flag to view extended statistics:

[root@rhel ~]# iostat -m -p sda -x 
Linux 4.18.0-193.1.2.el8_2.x86_64 (rhel.test)     06/17/2020     _x86_64_    (4 CPU)
    
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.06    0.06    0.40    0.05    0.00   98.43
    
Device            r/s     w/s     rMB/s     wMB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
sda             12.20    2.83      0.54      0.14     0.02     0.92   0.16  24.64    0.55    0.50   0.00    45.58    52.37   0.46   0.69
sda2            12.10    2.54      0.54      0.14     0.02     0.92   0.16  26.64    0.55    0.47   0.00    45.60    57.88   0.47   0.68
sda1             0.08    0.01      0.00      0.00     0.00     0.00   0.00  23.53    0.44    1.00   0.00    43.00   161.08   0.57   0.00

Some of the options to pay attention to here are:

r_await / w_await -- the average time, in milliseconds, for read and write requests to be served, including time spent waiting in the queue.
aqu-sz -- the average length of the request queue issued to the device.
%util -- the percentage of elapsed time the device was busy servicing requests; values approaching 100% suggest the device is saturated.

There are other values present, but these are the ones to look out for.

Shutting down

This article covers just about everything you need to get started with iostat . If you have other questions or need further explanations of options, be sure to check out the man page or your preferred search engine. For other Linux tips and tricks, keep an eye on Enable Sysadmin!

[Jul 10, 2020] Sonoma Hotel Employs Robot For Contactless Room Service

Jul 10, 2020 | www.zerohedge.com

During the pandemic, readers may recall several of our pieces describing what life would be like in a post corona world.

From restaurants to flying to gambling to hotels to gyms to interacting with people to even housing trends - we highlighted how social distancing would transform the economy.

As the transformation becomes more evident by the week, we want to focus on automation and artificial intelligence - and how these two things are allowing hotels, well at least one in California, to accommodate patrons with contactless room service.

Hotel Trio in Healdsburg, California, surrounded by wineries and restaurants in the Healdsburg/Sonoma County region, recently hired a new worker named "Rosé the Robot" that delivers food, water, wine, beer, and other necessities, reported Sonoma Magazine.

"As Rosé approaches a room with a delivery, she calls the phone to let the guest know she's outside. A tablet-sized screen on Rosé's head greets the guest as they open the door, and confirms the order. Next, she opens a lid on top of her head and reveals a storage compartment containing the ordered items. Rosé then communicates a handful of questions surrounding customer satisfaction via her screen. She bids farewell, turns around and as she heads back toward her docking station near the front desk, she emits chirps that sound like a mix between R2D2 and a little bird," said Sonoma Magazine.

Henry Harteveldt, a travel industry analyst at Atmospheric Research Group in San Francisco, said robots would be integrated into the hotel experience.

"This is a part of travel that will see major growth in the years ahead," Harteveldt said.

Rosé is manufactured by Savioke, a San Jose-based company that has dozens of robots in hotels nationwide.

The tradeoff of a contactless environment where automation and artificial intelligence replace humans to mitigate the spread of a virus is permanent job loss .

[Jul 09, 2020] Bash Shortcuts Gem by Ian Miell

Jul 09, 2020 | zwischenzugs.com

TL;DR

These commands can tell you what key bindings you have in your bash shell by default.

bind -P | grep 'can be'
stty -a | grep ' = ..;'
Background

I'd always wondered what key strokes did what in bash – I'd picked up some well-known ones (CTRL-r, CTRL-v, CTRL-d etc) from bugging people when I saw them being used, but always wondered whether there was a list of these I could easily get and comprehend. I found some, but always forgot where it was when I needed them, and couldn't remember many of them anyway.

Then, while debugging a problem with tab completion in 'here' documents, I stumbled across bind.

bind and stty

'bind' is a bash builtin, which means it's not a program like awk or grep, but is picked up and handled by the bash program itself.

It manages the various key bindings in the bash shell, covering everything from autocomplete to transposing two characters on the command line. You can read all about it in the bash man page (in the builtins section, near the end).

Bind is not responsible for all the key bindings in your shell – running stty will show the ones that apply to the terminal:

stty -a | grep ' = ..;'

These take precedence and can be confusing if you've tried to bind the same thing in your shell! Further confusion is caused by the fact that '^D' means 'CTRL and d pressed together', whereas in bind output it would be 'C-d'.

edit: am indebted to joepvd from hackernews for this beauty

    $ stty -a | awk 'BEGIN{RS="[;\n]+ ?"}; /= ..$/'
    intr = ^C
    quit = ^\
    erase = ^?
    kill = ^U
    eof = ^D
    swtch = ^Z
    susp = ^Z
    rprnt = ^R
    werase = ^W
    lnext = ^V
    flush = ^O
Breaking Down the Command
bind -P | grep can

Can be considered (almost) equivalent to a more instructive command:

bind -l | sed 's/.*/bind -q &/' | /bin/bash 2>&1 | grep -v warning: | grep can

'bind -l' lists all the available keystroke functions. For example, 'complete' is the auto-complete function normally triggered by hitting 'tab' twice. The output of this is passed to a sed command which passes each function name to 'bind -q', which queries the bindings.

sed 's/.*/bind -q &/'

The output of this is passed for running into /bin/bash.

/bin/bash 2>&1 | grep -v warning: | grep 'can be'

Note that this invocation of bash means that locally-set bindings will revert to the default bash ones for the output.

The '2>&1' puts the error output (the warnings) to the same output channel, filtering out warnings with a 'grep -v' and then filtering on output that describes how to trigger the function.

In the output of bind -q, 'C-' means 'the ctrl key and'. So 'C-c' is the normal CTRL-c. Similarly, '\e' means 'escape', so '\e\e' means 'escape pressed twice':

$ bind -q complete
complete can be invoked via "C-i", "\e\e".

and is also bound to 'C-i' (though on my machine I appear to need to press it twice – not sure why).

Add to bashrc

I added this alias as 'binds' in my bashrc so I could easily get hold of this list in the future.

alias binds="bind -P | grep 'can be'"

Now whenever I forget a binding, I type 'binds', and have a read :)

The Zinger

Browsing through the bash manual, I noticed that an option to bind enables binding a key sequence directly to a shell command:

-x keyseq:shell-command

So now all I need to remember is one shortcut to get my list (CTRL-x, then CTRL-o):

bind -x '"C-xC-o":bind -P | grep can'

Of course, you can bind to a single key if you want, and any command you want. You could also use this for practical jokes on your colleagues...

Now I'm going to sort through my history to see what I type most often :)

This post is based on material from Docker in Practice , available on Manning's Early Access Program. Get 39% off with the code: 39miell

[Jul 09, 2020] My Favourite Secret Weapon strace

Jul 09, 2020 | zwischenzugs.com

Why strace ?

I'm often asked in my technical troubleshooting job to solve problems that development teams can't solve. Usually these do not involve knowledge of API calls or syntax, rather some kind of insight into what the right tool to use is, and why and how to use it. Probably because they're not taught in college, developers are often unaware that these tools exist, which is a shame, as playing with them can give a much deeper understanding of what's going on and ultimately lead to better code.

My favourite secret weapon in this path to understanding is strace.

strace (or truss, its equivalent on Solaris, or dtruss on the Mac) is a tool that tells you which operating system (OS) calls your program is making.

An OS call (or just "system call") is your program asking the OS to provide some service for it. Since this covers a lot of the things that cause problems not directly to do with the domain of your application development (I/O, finding files, permissions etc) its use has a very high hit rate in resolving problems out of developers' normal problem space.

Usage Patterns

strace is useful in all sorts of contexts. Here's a couple of examples garnered from my experience.

My Netcat Server Won't Start!

Imagine you're trying to start an executable, but it's failing silently (no log file, no output at all). You don't have the source, and even if you did, the source code is neither readily available, nor ready to compile, nor readily comprehensible.

Simply running through strace will likely give you clues as to what's gone on.

$  nc -l localhost 80
nc: Permission denied

Let's say someone's trying to run this and doesn't understand why it's not working (let's assume manuals are unavailable).

Simply put strace at the front of your command. Note that the following output has been heavily edited for space reasons (deep breath):

 $ strace nc -l localhost 80
 execve("/bin/nc", ["nc", "-l", "localhost", "80"], [/* 54 vars */]) = 0
 brk(0)                                  = 0x1e7a000
 access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
 mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f751c9c0000
 access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
 open("/usr/local/lib/tls/x86_64/libglib-2.0.so.0", O_RDONLY) = -1 ENOENT (No such file or directory)
 stat("/usr/local/lib/tls/x86_64", 0x7fff5686c240) = -1 ENOENT (No such file or directory)
 [...]
 open("libglib-2.0.so.0", O_RDONLY)      = -1 ENOENT (No such file or directory)
 open("/etc/ld.so.cache", O_RDONLY)      = 3
 fstat(3, {st_mode=S_IFREG|0644, st_size=179820, ...}) = 0
 mmap(NULL, 179820, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f751c994000
 close(3)                                = 0
 access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
 open("/lib/x86_64-linux-gnu/libglib-2.0.so.0", O_RDONLY) = 3
 read(3, "\177ELF\2\1\1\3>\1\320k\1"..., 832) = 832
 fstat(3, {st_mode=S_IFREG|0644, st_size=975080, ...}) = 0
 mmap(NULL, 3072520, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f751c4b3000
 mprotect(0x7f751c5a0000, 2093056, PROT_NONE) = 0
 mmap(0x7f751c79f000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xec000) = 0x7f751c79f000
 mmap(0x7f751c7a1000, 520, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f751c7a1000
 close(3)                                = 0
 open("/usr/local/lib/libc.so.6", O_RDONLY) = -1 ENOENT (No such file or directory)
[...]
 mmap(NULL, 179820, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f751c994000
 close(3)                                = 0
 access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
 open("/lib/x86_64-linux-gnu/libnss_files.so.2", O_RDONLY) = 3
 read(3, "\177ELF\2\1\1\3>\1\20\""..., 832) = 832
 fstat(3, {st_mode=S_IFREG|0644, st_size=51728, ...}) = 0
 mmap(NULL, 2148104, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f751b8b0000
 mprotect(0x7f751b8bc000, 2093056, PROT_NONE) = 0
 mmap(0x7f751babb000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xb000) = 0x7f751babb000
 close(3)                                = 0
 mprotect(0x7f751babb000, 4096, PROT_READ) = 0
 munmap(0x7f751c994000, 179820)          = 0
 open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 3
 fcntl(3, F_GETFD)                       = 0x1 (flags FD_CLOEXEC)
 fstat(3, {st_mode=S_IFREG|0644, st_size=315, ...}) = 0
 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f751c9bf000
 read(3, "127.0.0.1\tlocalhost\n127.0.1.1\tal"..., 4096) = 315
 read(3, "", 4096)                       = 0
 close(3)                                = 0
 munmap(0x7f751c9bf000, 4096)            = 0
 open("/etc/gai.conf", O_RDONLY)         = 3
 fstat(3, {st_mode=S_IFREG|0644, st_size=3343, ...}) = 0
 fstat(3, {st_mode=S_IFREG|0644, st_size=3343, ...}) = 0
 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f751c9bf000
 read(3, "# Configuration for getaddrinfo("..., 4096) = 3343
 read(3, "", 4096)                       = 0
 close(3)                                = 0
 munmap(0x7f751c9bf000, 4096)            = 0
 futex(0x7f751c4af460, FUTEX_WAKE_PRIVATE, 2147483647) = 0
 socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 3
 connect(3, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
 getsockname(3, {sa_family=AF_INET, sin_port=htons(58567), sin_addr=inet_addr("127.0.0.1")}, [16]) = 0
 close(3)                                = 0
 socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 3
 connect(3, {sa_family=AF_INET6, sin6_port=htons(80), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = 0
 getsockname(3, {sa_family=AF_INET6, sin6_port=htons(42803), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, [28]) = 0
 close(3)                                = 0
 socket(PF_INET6, SOCK_STREAM, IPPROTO_TCP) = 3
 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
 bind(3, {sa_family=AF_INET6, sin6_port=htons(80), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 EACCES (Permission denied)
 close(3)                                = 0
 socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 3
 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
 bind(3, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EACCES (Permission denied)
 close(3)                                = 0
 write(2, "nc: ", 4nc: )                     = 4
 write(2, "Permission denied\n", 18Permission denied
 )     = 18
 exit_group(1)                           = ?

To most people that see this flying up their terminal this initially looks like gobbledygook, but it's really quite easy to parse when a few things are explained.

For each line, there are three things to look at: the name of the system call, its arguments in parentheses, and its return value after the = sign. For example:

open("/etc/gai.conf", O_RDONLY)         = 3

Therefore for this particular line, the system call is open , the arguments are the string /etc/gai.conf and the constant O_RDONLY , and the return value was 3 .

How to make sense of this?

Some of these system calls can be guessed or enough can be inferred from context. Most readers will figure out that the above line is the attempt to open a file with read-only permission.

In the case of the above failure, we can see that before the program calls exit_group, there are a couple of calls to bind that return "Permission denied":

 bind(3, {sa_family=AF_INET6, sin6_port=htons(80), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 EACCES (Permission denied)
 close(3)                                = 0
 socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 3
 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
 bind(3, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EACCES (Permission denied)
 close(3)                                = 0
 write(2, "nc: ", 4nc: )                     = 4
 write(2, "Permission denied\n", 18Permission denied
 )     = 18
 exit_group(1)                           = ?

We might therefore want to understand what "bind" is and why it might be failing.

You need to get a copy of the system call's documentation. On Ubuntu and related distributions of Linux, the documentation is in the manpages-dev package, and can be invoked by, e.g., man 2 bind (I just used strace to determine which file man 2 bind opened and then did a dpkg -S to determine from which package it came!). You can also look it up online if you have access, but if you can auto-install via a package manager you're more likely to get docs that match your installation.

Right there in my man 2 bind page it says:

ERRORS
EACCES The address is protected, and the user is not the superuser.

So there is the answer – we're trying to bind to a port that can only be bound to if you are the super-user.

My Library Is Not Loading!

Imagine a situation where developer A's perl script is working fine, but developer B's identical one is not (again, the output has been edited).
In this case, we strace the output on developer B's computer to see how it's working:

$ strace perl a.pl
execve("/usr/bin/perl", ["perl", "a.pl"], [/* 57 vars */]) = 0
brk(0)                                  = 0xa8f000
[...]fcntl(3, F_SETFD, FD_CLOEXEC)           = 0
fstat(3, {st_mode=S_IFREG|0664, st_size=14, ...}) = 0
rt_sigaction(SIGCHLD, NULL, {SIG_DFL, [], 0}, 8) = 0
brk(0xad1000)                           = 0xad1000
read(3, "use blahlib;\n\n", 4096)       = 14
stat("/space/myperllib/blahlib.pmc", 0x7fffbaf7f3d0) = -1 ENOENT (No such file or directory)
stat("/space/myperllib/blahlib.pm", {st_mode=S_IFREG|0644, st_size=7692, ...}) = 0
open("/space/myperllib/blahlib.pm", O_RDONLY) = 4
ioctl(4, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7fffbaf7f090) = -1 ENOTTY (Inappropriate ioctl for device)
[...]mmap(0x7f4c45ea8000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 5, 0x4000) = 0x7f4c45ea8000
close(5)                                = 0
mprotect(0x7f4c45ea8000, 4096, PROT_READ) = 0
brk(0xb55000)                           = 0xb55000
read(4, "swrite($_[0], $_[1], $_[2], $_[3"..., 4096) = 3596
brk(0xb77000)                           = 0xb77000
read(4, "", 4096)                       = 0
close(4)                                = 0
read(3, "", 4096)                       = 0
close(3)                                = 0
exit_group(0)                           = ?

We observe that the file is found in what looks like an unusual place.

open("/space/myperllib/blahlib.pm", O_RDONLY) = 4

Inspecting the environment, we see that:

$ env | grep myperl
PERL5LIB=/space/myperllib

So the solution is to set the same env variable before running:

export PERL5LIB=/space/myperllib
Get to know the internals bit by bit

If you do this a lot, or idly run strace on various commands and peruse the output, you can learn all sorts of things about the internals of your OS. If you're like me, this is a great way to learn how things work. For example, just now I've had a look at the file /etc/gai.conf , which I'd never come across before writing this.

Once your interest has been piqued, I recommend getting a copy of "Advanced Programming in the Unix Environment" by Stevens & Rago, and reading it cover to cover. Not all of it will go in, but as you use strace more and more, and (hopefully) browse C code more and more understanding will grow.

Gotchas

If you're running a program that calls other programs, it's important to run with the -f flag, which "follows" child processes and straces them. -ff creates a separate file with the pid suffixed to the name.

If you're on solaris, this program doesn't exist – you need to use truss instead.

Many production environments will not have this program installed for security reasons. strace doesn't have many library dependencies (on my machine it has the same dependencies as 'echo'), so if you have permission, (or are feeling sneaky) you can just copy the executable up.

Other useful tidbits

You can attach to running processes (can be handy if your program appears to hang or the issue is not readily reproducible) with -p .

If you're looking at performance issues, then the time flags ( -t , -tt , -ttt , and -T ) can help significantly.
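
For example, to attach to an already-running process, follow any children it spawns, and log timestamped calls with per-call durations to a file (the PID and the output filename below are placeholders):

$ strace -f -tt -T -p 12345 -o /tmp/strace.out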

vasudevram February 11, 2018 at 5:29 pm

Interesting post. One point: The errors start earlier than what you said. There is a call to access() near the top of the strace output, which fails:

access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)

vasudevram February 11, 2018 at 5:29 pm

I guess that could trigger the other errors.

Benji Wiebe February 11, 2018 at 7:30 pm

A failed access or open system call is not usually an error in the context of launching a program. Generally it is merely checking if a config file exists.

vasudevram February 11, 2018 at 8:24 pm

>A failed access or open system call is not usually an error in the context of launching a program.

Yes, good point, that could be so, if the programmer meant to ignore the error, and if it was not an issue to do so.

>Generally it is merely checking if a config file exists.

The file name being access'ed is "/etc/ld.so.nohwcap" – not sure if it is a config file or not.

[Jul 08, 2020] Exit Codes With Special Meanings

Jul 08, 2020 | www.tldp.org

Appendix E. Exit Codes With Special Meanings
Table E-1. Reserved Exit Codes

Exit Code Number | Meaning | Example | Comments
1      | Catchall for general errors | let "var1 = 1/0" | Miscellaneous errors, such as "divide by zero" and other impermissible operations
2      | Misuse of shell builtins (according to Bash documentation) | empty_function() {} | Missing keyword or command, or permission problem (and diff return code on a failed binary file comparison)
126    | Command invoked cannot execute | /dev/null | Permission problem or command is not an executable
127    | "command not found" | illegal_command | Possible problem with $PATH or a typo
128    | Invalid argument to exit | exit 3.14159 | exit takes only integer args in the range 0 - 255 (see first footnote)
128+n  | Fatal error signal "n" | kill -9 $PPID of script | $? returns 137 (128 + 9)
130    | Script terminated by Control-C | Ctl-C | Control-C is fatal error signal 2 (130 = 128 + 2, see above)
255*   | Exit status out of range | exit -1 | exit takes only integer args in the range 0 - 255

According to the above table, exit codes 1 - 2, 126 - 165, and 255 [1] have special meanings, and should therefore be avoided for user-specified exit parameters. Ending a script with exit 127 would certainly cause confusion when troubleshooting (is the error code a "command not found" or a user-defined one?). However, many scripts use an exit 1 as a general bailout-upon-error. Since exit code 1 signifies so many possible errors, it is not particularly useful in debugging.

There has been an attempt to systematize exit status numbers (see /usr/include/sysexits.h ), but this is intended for C and C++ programmers. A similar standard for scripting might be appropriate. The author of this document proposes restricting user-defined exit codes to the range 64 - 113 (in addition to 0 , for success), to conform with the C/C++ standard. This would allot 50 valid codes, and make troubleshooting scripts more straightforward. [2] All user-defined exit codes in the accompanying examples to this document conform to this standard, except where overriding circumstances exist, as in Example 9-2 .

Issuing a $? from the command-line after a shell script exits gives results consistent with the table above only from the Bash or sh prompt. Running the C-shell or tcsh may give different values in some cases.
Notes
[1] Out of range exit values can result in unexpected exit codes. An exit value greater than 255 returns an exit code modulo 256 . For example, exit 3809 gives an exit code of 225 (3809 % 256 = 225).
[2] An update of /usr/include/sysexits.h allocates previously unused exit codes from 64 - 78 . It may be anticipated that the range of unallotted exit codes will be further restricted in the future. The author of this document will not do fixups on the scripting examples to conform to the changing standard. This should not cause any problems, since there is no overlap or conflict in usage of exit codes between compiled C/C++ binaries and shell scripts.

[Jul 08, 2020] Exit Codes

From bash manual: The exit status of an executed command is the value returned by the waitpid system call or equivalent function. Exit statuses fall between 0 and 255, though, as explained below, the shell may use values above 125 specially. Exit statuses from shell builtins and compound commands are also limited to this range. Under certain circumstances, the shell will use special values to indicate specific failure modes.
For the shell’s purposes, a command which exits with a zero exit status has succeeded. A non-zero exit status indicates failure. This seemingly counter-intuitive scheme is used so there is one well-defined way to indicate success and a variety of ways to indicate various failure modes. When a command terminates on a fatal signal whose number is N, Bash uses the value 128+N as the exit status.
If a command is not found, the child process created to execute it returns a status of 127. If a command is found but is not executable, the return status is 126.
If a command fails because of an error during expansion or redirection, the exit status is greater than zero.
The exit status is used by the Bash conditional commands (see Conditional Constructs) and some of the list constructs (see Lists).
All of the Bash builtins return an exit status of zero if they succeed and a non-zero status on failure, so they may be used by the conditional and list constructs. All builtins return an exit status of 2 to indicate incorrect usage, generally invalid options or missing arguments.
Jul 08, 2020 | zwischenzugs.com

Not everyone knows that every time you run a shell command in bash, an 'exit code' is returned to bash.

Generally, if a command 'succeeds' you get an error code of 0 . If it doesn't succeed, you get a non-zero code.

1 is a 'general error', and others can give you more information (e.g., which signal killed the process). 255 is the upper limit ("exit status out of range").

grep joeuser /etc/passwd # in case of success returns 0, otherwise 1

or

grep not_there /dev/null
echo $?

$? is a special bash variable that's set to the exit code of each command after it runs.

Grep uses exit codes to indicate whether it matched or not. I have to look up every time which way round it goes: does finding a match or not return 0 ?
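
For the record, grep's man page spells it out: 0 means a line was selected, 1 means no lines matched, and 2 means an error occurred:

$ grep root /etc/passwd > /dev/null; echo $?
0
$ grep not_there /etc/passwd; echo $?
1
$ grep root /no/such/file; echo $?
grep: /no/such/file: No such file or directory
2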

[Jul 08, 2020] Returning Values from Bash Functions by Mitch Frazier

Sep 11, 2009 | www.linuxjournal.com

Bash functions, unlike functions in most programming languages, do not allow you to return a value to the caller. When a bash function ends, its return value is its status: zero for success, non-zero for failure. To return values, you can set a global variable with the result, or use command substitution, or you can pass in the name of a variable to use as the result variable. The examples below describe these different mechanisms.

Although bash has a return statement, the only thing you can specify with it is the function's status, which is a numeric value like the value specified in an exit statement. The status value is stored in the $? variable. If a function does not contain a return statement, its status is set based on the status of the last statement executed in the function. To actually return arbitrary values to the caller you must use other mechanisms.

The simplest way to return a value from a bash function is to just set a global variable to the result. Since all variables in bash are global by default this is easy:

function myfunc()
{
    myresult='some value'
}

myfunc
echo $myresult

The code above sets the global variable myresult to the function result. Reasonably simple, but as we all know, using global variables, particularly in large programs, can lead to difficult to find bugs.

A better approach is to use local variables in your functions. The problem then becomes how do you get the result to the caller. One mechanism is to use command substitution:

function myfunc()
{
    local  myresult='some value'
    echo "$myresult"
}

result=$(myfunc)   # or result=`myfunc`
echo $result

Here the result is output to the stdout and the caller uses command substitution to capture the value in a variable. The variable can then be used as needed.

The other way to return a value is to write your function so that it accepts a variable name as part of its command line and then set that variable to the result of the function:

function myfunc()
{
    local  __resultvar=$1
    local  myresult='some value'
    eval $__resultvar="'$myresult'"
}

myfunc result
echo $result

Since we have the name of the variable to set stored in a variable, we can't set the variable directly, we have to use eval to actually do the setting. The eval statement basically tells bash to interpret the line twice, the first interpretation above results in the string result='some value' which is then interpreted once more and ends up setting the caller's variable.

When you store the name of the variable passed on the command line, make sure you store it in a local variable with a name that won't be (unlikely to be) used by the caller (which is why I used __resultvar rather than just resultvar ). If you don't, and the caller happens to choose the same name for their result variable as you use for storing the name, the result variable will not get set. For example, the following does not work:

function myfunc()
{
    local  result=$1
    local  myresult='some value'
    eval $result="'$myresult'"
}

myfunc result
echo $result

The reason it doesn't work is because when eval does the second interpretation and evaluates result='some value' , result is now a local variable in the function, and so it gets set rather than setting the caller's result variable.

For more flexibility, you may want to write your functions so that they combine both result variables and command substitution:

function myfunc()
{
    local  __resultvar=$1
    local  myresult='some value'
    if [[ "$__resultvar" ]]; then
        eval $__resultvar="'$myresult'"
    else
        echo "$myresult"
    fi
}

myfunc result
echo $result
result2=$(myfunc)
echo $result2

Here, if no variable name is passed to the function, the value is output to the standard output.
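
On Bash 4.3 and later, a nameref offers yet another way to hand a result back without eval; this is a sketch of the same idea under that assumption, not something from the article:

function myfunc()
{
    local -n __resultref=$1    # __resultref becomes a reference to the caller's variable
    local myresult='some value'
    __resultref="$myresult"    # assigning through the nameref sets the caller's variable
}

myfunc result
echo $result

The same naming caveat applies: pick a nameref name the caller is unlikely to use for their own result variable.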

Mitch Frazier is an embedded systems programmer at Emerson Electric Co. Mitch has been a contributor to and a friend of Linux Journal since the early 2000s.


David Krmpotic6 years ago • edited ,

This is the best way: http://stackoverflow.com/a/... return by reference:

function pass_back_a_string() {
eval "$1='foo bar rab oof'"
}

return_var=''
pass_back_a_string return_var
echo $return_var

lxw David Krmpotic6 years ago ,

I agree. After reading this passage, the same idea with yours occurred to me.

phil • 6 years ago ,

Since this page is a top hit on google:

The only real issue I see with returning via echo is that forking the process means no longer allowing it access to set 'global' variables. They are still global in the sense that you can retrieve them and set them within the new forked process, but as soon as that process is done, you will not see any of those changes.

e.g.
#!/bin/bash

myGlobal="very global"

call1() {
myGlobal="not so global"
echo "${myGlobal}"
}

tmp=$(call1) # keep in mind '$()' starts a new process

echo "${tmp}" # prints "not so global"
echo "${myGlobal}" # prints "very global"

lxw • 6 years ago ,

Hello everyone,

In the 3rd method, I don't think the local variable __resultvar is necessary to use. Any problems with the following code?

function myfunc()
{
local myresult='some value'
eval "$1"="'$myresult'"
}

myfunc result
echo $result

code_monk6 years ago • edited ,

I would caution against returning integers with "return $int". My code was working fine until it came across a -2 (negative two) and treated it as if it were 254, which tells me that bash function return codes are 8-bit unsigned ints that are not protected from overflow.

Emil Vikström code_monk5 years ago ,

A function behaves as any other Bash command, and indeed POSIX processes. That is, they can write to stdout, read from stdin and have a return code. The return code is, as you have already noticed, a value between 0 and 255. By convention 0 means success while any other return code means failure.

This is also why Bash "if" statements treat 0 as success and non-zero as failure (most other programming languages do the opposite).
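
A quick sketch (not from the original thread) that makes the wrap-around visible without relying on passing a negative argument to return:

function myfunc()
{
    return $(( -2 & 0xFF ))   # exit statuses are 8-bit unsigned; -2 masked to 8 bits is 254
}

myfunc
echo $?   # prints 254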

[Jul 07, 2020] The Missing Readline Primer by Ian Miell

Highly recommended!
This is from the book Learn Bash the Hard Way, available for $6.99.
Jul 07, 2020 | zwischenzugs.com

The Missing Readline Primer (April 23, 2019)

Readline is one of those technologies that is so commonly used many users don't realise it's there.

I went looking for a good primer on it so I could understand it better, but failed to find one. This is an attempt to write a primer that may help users get to grips with it, based on what I've managed to glean as I've tried to research and experiment with it over the years.

Bash Without Readline

First you're going to see what bash looks like without readline.

In your 'normal' bash shell, hit the TAB key twice. You should see something like this:

    Display all 2335 possibilities? (y or n)

That's because bash normally has an 'autocomplete' function that allows you to see what commands are available to you if you tap tab twice.

Hit n to get out of that autocomplete.

Another useful function that's commonly used is that if you hit the up arrow key a few times, then the previously-run commands should be brought back to the command line.

Now type:

$ bash --noediting

The --noediting flag starts up bash without the readline library enabled.

If you hit TAB twice now you will see something different: the shell no longer 'sees' your tab and just sends a tab direct to the screen, moving your cursor along. Autocomplete has gone.

Autocomplete is just one of the things that the readline library gives you in the terminal. You might want to try hitting the up or down arrows as you did above to see that that no longer works as well.

Hit return to get a fresh command line, and exit your non-readline-enabled bash shell:

$ exit
Other Shortcuts

There are a great many shortcuts like autocomplete available to you if readline is enabled. I'll quickly outline four of the most commonly-used of these before explaining how you can find out more.

$ echo 'some command'

There should not be many surprises there. Now if you hit the 'up' arrow, you will see you can get the last command back on your line. If you like, you can re-run the command, but there are other things you can do with readline before you hit return.

If you hold down the ctrl key and then hit a at the same time your cursor will return to the start of the line. Another way of representing this 'multi-key' way of inputting is to write it like this: \C-a . This is one conventional way to represent this kind of input. The \C represents the control key, and the -a represents that the a key is depressed at the same time.

Now if you hit \C-e ( ctrl and e ) then your cursor has moved to the end of the line. I use these two dozens of times a day.

Another frequently useful one is \C-l , which clears the screen, but leaves your command line intact.

The last one I'll show you allows you to search your history to find matching commands while you type. Hit \C-r , and then type ec . You should see the echo command you just ran like this:

    (reverse-i-search)`ec': echo echo

Then do it again, but keep hitting \C-r over and over. You should see all the commands that have `ec` in them that you've input before (if you've only got one echo command in your history then you will only see one). As you see them you are placed at that point in your history and you can move up and down from there or just hit return to re-run if you want.

There are many more shortcuts that readline gives you. Next I'll show you how to view them.

Using `bind` to Show Readline Shortcuts

If you type:

$ bind -p

You will see a list of bindings that readline is capable of. There's a lot of them!

Have a read through if you're interested, but don't worry about understanding them all yet.

If you type:

$ bind -p | grep C-a

you'll pick out the 'beginning-of-line' binding you used before, and see the \C-a notation I showed you before.

As an exercise at this point, you might want to look for the \C-e and \C-r bindings we used previously.

If you want to look through the entirety of the bind -p output, then you will want to know that \M refers to the Meta key (which you might also know as the Alt key), and \e refers to the Esc key on your keyboard. The 'escape' key bindings are different in that you don't hit it and another key at the same time, rather you hit it, and then hit another key afterwards. So, for example, typing the Esc key, and then the ? key also tries to auto-complete the command you are typing. This is documented as:

    "\e?": possible-completions

in the bind -p output.

Readline and Terminal Options

If you've looked over the possibilities that readline offers you, you might have seen the \C-r binding we looked at earlier:

    "\C-r": reverse-search-history

You might also have seen that there is another binding that allows you to search forward through your history too:

    "\C-s": forward-search-history

What often happens to me is that I hit \C-r over and over again, and then go too fast through the history and fly past the command I was looking for. In these cases I might try to hit \C-s to search forward and get to the one I missed.

Watch out though! Hitting \C-s to search forward through the history might well not work for you.

Why is this, if the binding is there and readline is switched on?

It's because something picked up the \C-s before it got to the readline library: the terminal settings.

The terminal program you are running in may have standard settings that do other things on hitting some of these shortcuts before readline gets to see it.

If you type:

$ stty -e

you should get output similar to this:

speed 9600 baud; 47 rows; 202 columns;
lflags: icanon isig iexten echo echoe -echok echoke -echonl echoctl -echoprt -altwerase -noflsh -tostop -flusho pendin -nokerninfo -extproc
iflags: -istrip icrnl -inlcr -igncr ixon -ixoff ixany imaxbel -iutf8 -ignbrk brkint -inpck -ignpar -parmrk
oflags: opost onlcr -oxtabs -onocr -onlret
cflags: cread cs8 -parenb -parodd hupcl -clocal -cstopb -crtscts -dsrflow -dtrflow -mdmbuf
discard dsusp   eof     eol     eol2    erase   intr    kill    lnext
^O      ^Y      ^D      <undef> <undef> ^?      ^C      ^U      ^V
min     quit    reprint start   status  stop    susp    time    werase
1       ^\      ^R      ^Q      ^T      ^S      ^Z      0       ^W

You can see on the last four lines ( discard dsusp [...] ) there is a table of key bindings that your terminal will pick up before readline sees them. The ^ character (known as the 'caret') here represents the ctrl key that we previously represented with a \C .

If you think this is confusing, I won't disagree. Unfortunately, over the history of Unix and Linux, documenters did not stick to one way of describing these key combinations.

If you encounter a problem where the terminal options seem to catch a shortcut key binding before it gets to readline, then you can use the stty program to unset that binding. In this case, we want to unset the 'stop' binding.

If you are in the same situation, type:

$ stty stop undef

Now, if you re-run stty -e , the last two lines might look like this:

[...]
min     quit    reprint start   status  stop    susp    time    werase
1       ^\      ^R      ^Q      ^T      <undef> ^Z      0       ^W

where the stop entry now has <undef> underneath it.

Strangely, for me C-r is also bound to 'reprint' above ( ^R ).

But (on my terminals at least) that gets to readline without issue as I search up the history. Why this is the case I haven't been able to figure out. I suspect that reprint is ignored by modern terminals that don't need to 'reprint' the current line.

While we are looking at this table:

discard dsusp   eof     eol     eol2    erase   intr    kill    lnext
^O      ^Y      ^D      <undef> <undef> ^?      ^C      ^U      ^V
min     quit    reprint start   status  stop    susp    time    werase
1       ^\      ^R      ^Q      ^T      <undef> ^Z      0       ^W

it's worth noting a few other key bindings that are used regularly.

First, one you may well already be familiar with is \C-c , which interrupts a program, terminating it:

$ sleep 99
[[Hit \C-c]]
^C
$

Similarly, \C-z suspends a program, allowing you to 'foreground' it again and continue with the fg builtin.

$ sleep 10
[[ Hit \C-z]]
^Z
[1]+  Stopped                 sleep 10
$ fg
sleep 10

\C-d sends an 'end of file' character. It's often used to indicate to a program that input is over. If you type it on a bash shell, the bash shell you are in will close.

Finally, \C-w deletes the word before the cursor.

These are the most commonly-used shortcuts that are picked up by the terminal before they get to the readline library.

Daz April 29, 2019 at 11:15 pm

Hi Ian,

What OS are you running? stty -e gives the following on CentOS 6.x and Ubuntu 18.04.2:

stty -e
stty: invalid argument '-e'
Try 'stty --help' for more information.

Leon May 14, 2019 at 5:12 am

`stty -a` works for me (Ubuntu 14)

yachris May 16, 2019 at 4:40 pm

You might want to check out the 'rlwrap' program. It allows you to have readline behavior on programs that don't natively support readline, but which have a 'type in a command' type interface. For instance, we use Oracle here (alas :-) ) and the 'sqlplus' program, that lets you type SQL commands to an Oracle instance does not have anything like readline built into it, so you can't go back to edit previous commands. But running 'rlwrap sqlplus' gives me readline behavior in sqlplus! It's fantastic to have.

AriSweedler May 17, 2019 at 4:50 am

I was told to use this in a class, and I didn't understand what I did. One rabbit hole later, I was shocked and amazed at how advanced the readline library is. One thing I'd like to add is that you can write a '~/.inputrc' file and have those readline commands sourced at startup!

I do not know exactly when or how the inputrc is read.

Most of what I learned about inputrc stuff is from https://www.topbug.net/blog/2017/07/31/inputrc-for-humans/ .

Here is my inputrc, if anyone wants: https://github.com/AriSweedler/dotfiles/blob/master/.inputrc .
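
For reference, a minimal ~/.inputrc in the spirit of the comment above might look like this; each directive is a standard readline setting, but the particular selection is just an illustration:

# ~/.inputrc -- read by readline at startup (bash also honors the INPUTRC variable)
set show-all-if-ambiguous on        # list completions after a single TAB
set completion-ignore-case on       # case-insensitive completion
"\e[A": history-search-backward     # Up arrow searches history for lines starting with what you typed
"\e[B": history-search-forward      # Down arrow does the same, searching forward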

[Jul 07, 2020] More stupid Bash tricks- Variables, find, file descriptors, and remote operations - Enable Sysadmin by Valentin Bajrami

The first part is at Stupid Bash tricks- History, reusing arguments, files and directories, functions, and more - Enable Sysadmin
Jul 02, 2020 | www.redhat.com
These tips and tricks will make your Linux command line experience easier and more efficient.


This blog post is the second of two covering some practical tips and tricks to get the most out of the Bash shell. In part one , I covered history, last argument, working with files and directories, reading files, and Bash functions. In this segment, I cover shell variables, find, file descriptors, and remote operations.

Use shell variables

Bash sets a number of shell variables when it is invoked. Why would I run hostname when I can use $HOSTNAME, or whoami when I can use $USER? Bash variables are very fast and do not require launching external programs.

These are a few frequently-used variables:

$PATH
$HOME
$USER
$HOSTNAME
$PS1
..
$PS4

Use the echo command to expand variables. For example, the $PATH shell variable can be expanded by running:

$> echo $PATH

[ Download now: A sysadmin's guide to Bash scripting . ]

Use the find command

The find command is probably one of the most used tools within the Linux operating system. It is extremely useful in interactive shells. It is also used in scripts. With find I can list files older or newer than a specific date, delete them based on that date, change permissions of files or directories, and so on.

Let's get more familiar with this command.

To list files older than 30 days, I simply run:

$> find /tmp -type f -mtime +30

To delete files older than 30 days, run:

$> find /tmp -type f -mtime +30 -exec rm -rf {} \;

or

$> find /tmp -type f -mtime +30 -exec rm -rf {} +

While the above commands will delete files older than 30 days, note that as written the \; form forks rm once for every file found, while the + form batches many files per rm invocation. The search can also be written using xargs :

$> find /tmp -name '*.tmp' -exec printf '%s\0' {} \; | xargs -0 rm

I can use find to list sha256sum files only by running:

$> find . -type f -exec sha256sum {} +

And now to detect duplicate .jpg files by checksum (sort -u keeps one line per unique checksum):

$> find . -type f -name '*.jpg' -exec sha256sum {} + | sort -uk1,1
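
The command above only lists one entry per checksum; to actually see which paths are the duplicates, one possible follow-up (a sketch that assumes filenames without embedded whitespace, since awk splits on spaces) is:

$> find . -type f -name '*.jpg' -exec sha256sum {} + | sort -k1,1 | awk 'seen[$1]++ {print $2}'
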
Reference file descriptors

In the Bash shell, file descriptors (FDs) are important in managing the input and output of commands. Many people have issues understanding file descriptors correctly. Each process has three default file descriptors, namely:

Code Meaning Location Description
0 Standard input /dev/stdin Keyboard, file, or some stream
1 Standard output /dev/stdout Monitor, terminal, display
2 Standard error /dev/stderr Non-zero exit codes are usually >FD2, display

Now that you know what the default FDs do, let's see them in action. I start by creating a directory named foo , which contains file1 .

$> ls foo/ bar/
ls: cannot access 'bar/': No such file or directory
foo/:
file1

The output No such file or directory goes to Standard Error (stderr) and is also displayed on the screen. I will run the same command, but this time use 2> to omit stderr:

$> ls foo/ bar/ 2>/dev/null
foo/:
file1

It is possible to send the output of foo to Standard Output (stdout) and to a file simultaneously, and ignore stderr. For example:

$> { ls foo bar | tee -a ls_out_file ;} 2>/dev/null
foo:
file1

Then:

$> cat ls_out_file
foo:
file1

The following command sends stdout to a file and stderr to /dev/null so that the error won't display on the screen:

$> ls foo/ bar/ >to_stdout 2>/dev/null
$> cat to_stdout
foo/:
file1

The following command sends stdout and stderr to the same file:

$> ls foo/ bar/ >mixed_output 2>&1
$> cat mixed_output
ls: cannot access 'bar/': No such file or directory
foo/:
file1

This is what happened in the last example, where stdout and stderr were redirected to the same file:

    ls foo/ bar/ >mixed_output 2>&1
             |          |
             |          Redirect stderr to where stdout is sent
             |                                                        
             stdout is sent to mixed_output

Another short trick (> Bash 4.4) to send both stdout and stderr to the same file uses the ampersand sign. For example:

$> ls foo/ bar/ &>mixed_output

Here is a more complex redirection:

exec 3>&1 >write_to_file; echo "Hello World"; exec 1>&3 3>&-

This is what occurs: exec 3>&1 first duplicates the current stdout onto file descriptor 3 (saving it), then >write_to_file points stdout at the file, so the echo output lands in write_to_file. Finally, exec 1>&3 restores stdout from the saved descriptor, and 3>&- closes descriptor 3.

Often it is handy to group commands, and then send the Standard Output to a single file. For example:

$> { ls non_existing_dir; non_existing_command; echo "Hello world"; } 2> to_stderr
Hello world

As you can see, only "Hello world" is printed on the screen, but the output of the failed commands is written to the to_stderr file.

Execute remote operations

I use Telnet, netcat, Nmap, and other tools to test whether a remote service is up and whether I can connect to it. These tools are handy, but they aren't installed by default on all systems.

Fortunately, there is a simple way to test a connection without using external tools. To see if a remote server is running a web, database, SSH, or any other service, run:

$> timeout 3 bash -c '</dev/tcp/remote_server/remote_port' || echo "Failed to connect"

For example, to see if serverA is running the MariaDB service:

$> timeout 3 bash -c '</dev/tcp/serverA/3306' || echo "Failed to connect"

If the connection fails, the Failed to connect message is displayed on your screen.

Assume serverA is behind a firewall/NAT. I want to see if the firewall is configured to allow a database connection to serverA , but I haven't installed a database server yet. To emulate a database port (or any other port), I can use the following:

[serverA ~]# nc -l 3306

On clientA , run:

[clientA ~]# timeout 3 bash -c '</dev/tcp/serverA/3306' || echo "Failed"

While I am discussing remote connections, what about running commands on a remote server over SSH? I can use the following command:

$> ssh remotehost <<EOF  # Press the Enter key here
> ls /etc
EOF

This command runs ls /etc on the remote host.

I can also execute a local script on the remote host without having to copy the script over to the remote server. One way is to enter:

$> ssh remote_host 'bash -s' < local_script
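
If the local script expects positional arguments, they can be appended after -s; this is a sketch, with local_script, arg1, and arg2 as placeholders:

$> ssh remote_host 'bash -s' arg1 arg2 < local_script   # inside the script, $1 is arg1 and $2 is arg2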

Another example is to pass environment variables locally to the remote server and terminate the session after execution.

$> exec ssh remote_host ARG1=FOO ARG2=BAR 'bash -s' <<'EOF'
> printf %s\\n "$ARG1" "$ARG2"
> EOF
Password:
FOO
BAR
Connection to remote_host closed.

There are many other complex actions I can perform on the remote host.

Wrap up

There is certainly more to Bash than I was able to cover in this two-part blog post. I am sharing what I know and what I deal with daily. The idea is to familiarize you with a few techniques that could make your work less error-prone and more fun.

[ Want to test your sysadmin skills? Take a skills assessment today. ]

Valentin Bajrami

Valentin is a system engineer with more than six years of experience in networking, storage, high-performing clusters, and automation. He is involved in different open source projects like bash, Fedora, Ceph, FreeBSD and is a member of Red Hat Accelerators.

[Jul 06, 2020] BASH Shell Redirect stderr To stdout ( redirect stderr to a File ) by Vivek Gite

Jun 06, 2020 | www.cyberciti.biz

... ... ...

Redirecting the standard error stream to a file

The following will redirect program error message to a file called error.log:
$ program-name 2> error.log
$ command1 2> error.log

For example, use the grep command for a recursive search in the $HOME directory and redirect all errors (stderr) to a file named grep-errors.txt as follows:
$ grep -R 'MASTER' $HOME 2> /tmp/grep-errors.txt
$ cat /tmp/grep-errors.txt

Sample outputs:

grep: /home/vivek/.config/google-chrome/SingletonSocket: No such device or address
grep: /home/vivek/.config/google-chrome/SingletonCookie: No such file or directory
grep: /home/vivek/.config/google-chrome/SingletonLock: No such file or directory
grep: /home/vivek/.byobu/.ssh-agent: No such device or address
Redirecting the standard error (stderr) and stdout to file

Use the following syntax:
$ command-name &>file
We can also use the following syntax:
$ command > file-name 2>&1
We can write both stderr and stdout to two different files too. Let us try out our previous grep command example:
$ grep -R 'MASTER' $HOME 2> /tmp/grep-errors.txt 1> /tmp/grep-outputs.txt
$ cat /tmp/grep-outputs.txt

Redirecting stderr to stdout to a file or another command

Here is another useful example where both stderr and stdout sent to the more command instead of a file:
# find /usr/home -name .profile 2>&1 | more

Redirect stderr to stdout

Use the command as follows:
$ command-name 2>&1
$ command-name > file.txt 2>&1
## bash only ##
$ command2 &> filename
$ sudo find / -type f -iname ".env" &> /tmp/search.txt

Redirections are processed from left to right. Hence, order matters. For example:
command-name 2>&1 > file.txt ## wrong ##
command-name > file.txt 2>&1 ## correct ##
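
A quick illustration of the difference, using a deliberately missing file (a sketch, not from the original tutorial):

$ ls nosuchfile 2>&1 > out.txt  ## stderr was duplicated to the terminal before stdout moved, so the error still appears on screen
$ ls nosuchfile > out.txt 2>&1  ## stdout moves to out.txt first, then stderr follows it into the file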

How to redirect stderr to stdout in Bash script

A sample shell script used to update VM when created in the AWS/Linode server:

#!/usr/bin/env bash
# Author - nixCraft under GPL v2.x+
# Debian/Ubuntu Linux script for EC2 automation on first boot
# ------------------------------------------------------------
# My log file - Save stdout to $LOGFILE
LOGFILE="/root/logs.txt"
 
# My error file - Save stderr to $ERRFILE
ERRFILE="/root/errors.txt"
 
# Start it 
printf "Starting update process ... \n" 1>"${LOGFILE}"
 
# All errors should go to error file 
apt-get -y update 2>"${ERRFILE}"
apt-get -y upgrade 2>>"${ERRFILE}"
printf "Rebooting cloudserver ... \n" 1>>"${LOGFILE}"
shutdown -r now 2>>"${ERRFILE}"

Our last example uses the exec command and FDs along with trap and custom bash functions:

#!/bin/bash
# Send both stdout/stderr to a /root/aws-ec2-debian.log file
# Works with Ubuntu Linux too.
# Use exec for FD and trap it using the trap
# See bash man page for more info
# Author:  nixCraft under GPL v2.x+
# ---------------------------------------------
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>/root/aws-ec2-debian.log 2>&1
 
# log message
log(){
        local m="$@"
        echo ""
        echo "*** ${m} ***"
        echo ""
}
 
log "$(date) @ $(hostname)"
## Install stuff ##
log "Updating up all packages"
export DEBIAN_FRONTEND=noninteractive
apt-get -y clean
apt-get -y update
apt-get -y upgrade
apt-get -y --purge autoremove
 
## Update sshd config ##
log "Configuring sshd_config"
sed -i'.BAK' -e 's/PermitRootLogin yes/PermitRootLogin no/g' -e 's/#PasswordAuthentication yes/PasswordAuthentication no/g'  /etc/ssh/sshd_config
 
## Hide process from other users ##
log "Update /proc/fstab to hide process from each other"
echo 'proc    /proc    proc    defaults,nosuid,nodev,noexec,relatime,hidepid=2     0     0' >> /etc/fstab
 
## Install LXD and stuff ##
log "Installing LXD/wireguard/vnstat and other packages on this box"
apt-get -y install lxd wireguard vnstat expect mariadb-server 
 
log "Configuring mysql with mysql_secure_installation"
SECURE_MYSQL_EXEC=$(expect -c "
set timeout 10
spawn mysql_secure_installation
expect \"Enter current password for root (enter for none):\"
send \"$MYSQL\r\"
expect \"Change the root password?\"
send \"n\r\"
expect \"Remove anonymous users?\"
send \"y\r\"
expect \"Disallow root login remotely?\"
send \"y\r\"
expect \"Remove test database and access to it?\"
send \"y\r\"
expect \"Reload privilege tables now?\"
send \"y\r\"
expect eof
")
 
# log to file #
echo "   $SECURE_MYSQL_EXEC   "
# We no longer need expect 
apt-get -y remove expect
 
# Reboot the EC2 VM
log "END: Rebooting requested @ $(date) by $(hostname)"
reboot
WANT BOTH STDERR AND STDOUT TO THE TERMINAL AND A LOG FILE TOO?

Try the tee command as follows:
command1 2>&1 | tee filename
Here is how to use it inside a shell script too:

#!/usr/bin/env bash
{
   command1
   command2 | do_something
} 2>&1 | tee /tmp/outputs.log
Conclusion

In this quick tutorial, you learned about the three file descriptors: stdin, stdout, and stderr. We can use these Bash descriptors to redirect stdout/stderr to a file or vice versa. See the bash man page for more information:

Operator Description Examples
command>filename Redirect stdout to file "filename." date > output.txt
command>>filename Redirect and append stdout to file "filename." ls -l >> dirs.txt
command 2>filename Redirect stderr to file "filename." du -ch /snaps/ 2> space.txt
command 2>>filename Redirect and append stderr to file "filename." awk '{ print $4}' input.txt 2>> data.txt
command &>filename
command >filename 2>&1
Redirect both stdout and stderr to file "filename." grep -R foo /etc/ &>out.txt
command &>>filename
command >>filename 2>&1
Redirect both stdout and stderr append to file "filename." whois domain &>>log.txt

Vivek Gite is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting. Get the latest tutorials on SysAdmin, Linux/Unix and open source topics via RSS/XML feed or weekly email newsletter.

  1. Matt Kukowski says: January 29, 2014 at 6:33 pm

    In pre-bash4 days you HAD to do it this way:

    cat file > file.txt 2>&1

    now with bash 4 and greater versions you can still do it the old way but

    cat file &> file.txt

    The above is bash4+; some OLD distros may use pre-bash4, but I think they are all long gone by now. Just something to keep in mind.

  2. iamfrankenstein says: June 12, 2014 at 8:35 pm

    I really love: " command 2>&1 | tee logfile.txt "

    because tee logs everything and prints to stdout. So you still get to see everything! You can even combine sudo to downgrade to a log user account, add a dated subject, and store it in a default log directory :)

[Jul 05, 2020] Learn Bash the Hard Way by Ian Miell [Leanpub PDF-iPad-Kindle]

Highly recommended!
Jul 05, 2020 | leanpub.com


skeptic
5.0 out of 5 stars Reviewed in the United States on July 2, 2020

A short (160 pages) book that covers some difficult aspects of bash needed to customize bash env.

Whether we want it or not, bash is the shell you face in Linux, and unfortunately, it is often misunderstood and misused. Issues related to creating your bash environment are not well addressed in existing books. This book fills the gap.

Few authors understand that bash is a complex, non-orthogonal language operating in a complex Linux environment. To make things worse, bash is an evolution of Unix shell and is a rather old language with warts and all. Using it properly as a programming language requires a serious study, not just an introduction to the basic concepts. Even issues related to customization of dotfiles are far from trivial, and you need to know quite a bit to do it properly.

At the same time, proper customization of bash environment does increase your productivity (or at least lessens the frustration of using Linux on the command line ;-)

The author covered the most important concepts related to this task, such as bash history, functions, variables, environment inheritance, etc. It is really sad to watch how the majority of Linux users do not use these opportunities and forever remain at "level zero", using default dotfiles with bare-minimum customization.

This book contains some valuable tips even for a seasoned sysadmin (for example, the use of !& in pipes), and as such, is worth at least double the suggested price. It allows you to intelligently customize your bash environment after reading just 160 pages and doing the suggested exercises.

Contents:

[Jul 04, 2020] Eleven bash Tips You Might Want to Know by Ian Miell

Highly recommended!
Notable quotes:
"... Material here based on material from my book Learn Bash the Hard Way . Free preview available here . ..."
"... natively in bash ..."
Jul 04, 2020 | zwischenzugs.com

Here are some tips that might help you be more productive with bash.

1) ^x^y^

A gem I use all the time.

Ever typed anything like this?

$ grp somestring somefile
-bash: grp: command not found

Sigh. Hit 'up', 'left' until at the 'p' and type 'e' and return.

Or do this:

$ ^rp^rep^
grep 'somestring' somefile
$

One subtlety you may want to note though is:

$ grp rp somefile
$ ^rp^rep^
$ grep rp somefile

If you wanted rep to be searched for, then you'll need to dig into the man page and use a more powerful history command:

$ grp rp somefile
$ !!:gs/rp/rep
grep rep somefile
$

... ... ...


Material here based on material from my book
Learn Bash the Hard Way .
Free preview available here .


3) shopt vs set

This one bothered me for a while.

What's the difference between set and shopt ?

We saw set before, but shopt looks very similar. Just inputting shopt shows a bunch of options:

$ shopt
cdable_vars    off
cdspell        on
checkhash      off
checkwinsize   on
cmdhist        on
compat31       off
dotglob        off

I found a set of answers here. Essentially, it looks like it's a consequence of bash (and other shells) being built on sh, with shopt added as another way to set extra shell options. But I'm still unsure; if you know the answer, let me know.
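
Whatever the history, the two are toggled differently in practice. A small side-by-side sketch (the particular options are only examples):

set -o noclobber     # 'set' options: enable with 'set -o name', disable with 'set +o name'
set +o noclobber
shopt -s dotglob     # 'shopt' options: enable with 'shopt -s name', disable with 'shopt -u name'
shopt -u dotglob
set -o | grep clobber    # query 'set' options
shopt dotglob            # query a 'shopt' option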

4) Here Docs and Here Strings

'Here docs' are files created inline in the shell.

The 'trick' is simple. Define a closing word, and the lines between that word and when it appears alone on a line become a file.

Type this:

$ cat > afile << SOMEENDSTRING
> here is a doc
> it has three lines
> SOMEENDSTRING alone on a line will save the doc
> SOMEENDSTRING
$ cat afile
here is a doc
it has three lines
SOMEENDSTRING alone on a line will save the doc

Notice that:

Lesser known is the 'here string':

$ cat > asd <<< 'This file has one line'
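
Here strings also pair nicely with read; a tiny sketch:

$ read -r first rest <<< 'This file has one line'
$ echo "$first"
This
$ echo "$rest"
file has one line
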
5) String Variable Manipulation

You may have written code like this before, where you use tools like sed to manipulate strings:

$ VAR='HEADERMy voice is my passwordFOOTER'
$ PASS="$(echo $VAR | sed 's/^HEADER(.*)FOOTER/1/')"
$ echo $PASS

But you may not be aware that this is possible natively in bash .

This means that you can dispense with lots of sed and awk shenanigans.

One way to rewrite the above is:

$ VAR='HEADERMy voice is my passwordFOOTER'
$ PASS="${VAR#HEADER}"
$ PASS="${PASS%FOOTER}"
$ echo $PASS

The second method is twice as fast as the first on my machine. And (to my surprise), it was roughly the same speed as a similar python script .

If you want to use glob patterns that are greedy (see globbing here ) then you double up:

$ VAR='HEADERMy voice is my passwordFOOTER'
$ echo ${VAR##HEADER*}
$ echo ${VAR%%*FOOTER}
6) Variable Defaults

These are very handy when you're knocking up scripts quickly.

If you have a variable that's not set, you can 'default' them by using this. Create a file called default.sh with these contents

#!/bin/bash
FIRST_ARG="${1:-no_first_arg}"
SECOND_ARG="${2:-no_second_arg}"
THIRD_ARG="${3:-no_third_arg}"
echo ${FIRST_ARG}
echo ${SECOND_ARG}
echo ${THIRD_ARG}

Now run chmod +x default.sh and run the script with ./default.sh first second .

Observe how the third argument's default has been assigned, but not the first two.

You can also assign directly with ${VAR:=defaultval} (equals sign, not dash), but note that this won't work with positional variables in scripts or functions. Try changing the above script to see how it fails.
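
A minimal sketch of := on an ordinary (non-positional) variable:

unset COUNT
echo "${COUNT:=10}"   # prints 10 and, as a side effect, assigns COUNT=10
echo "$COUNT"         # still 10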

7) Traps

The trap built-in can be used to 'catch' when a signal is sent to your script.

Here's an example I use in my own cheapci script:

function cleanup() {
    rm -rf "${BUILD_DIR}"
    rm -f "${LOCK_FILE}"
    # get rid of /tmp detritus, leaving anything accessed 2 days ago+
    find "${BUILD_DIR_BASE}"/* -type d -atime +1 | rm -rf
    echo "cleanup done"                                                                                                                          
} 
trap cleanup TERM INT QUIT

Any attempt to CTRL-C, CTRL-\, or terminate the program using the TERM signal will result in cleanup being called first.

Be aware:

  • Trap logic can get very tricky (eg handling signal race conditions)
  • The KILL signal can't be trapped in this way

But mostly I've used this for 'cleanups' like the above, which serve their purpose.

8) Shell Variables

It's well worth getting to know the standard shell variables available to you . Here are some of my favourites:

RANDOM

Don't rely on this for your cryptography stack, but you can generate random numbers eg to create temporary files in scripts:

$ echo ${RANDOM}
16313
$ # Not enough digits?
$ echo ${RANDOM}${RANDOM}
113610703
$ NEWFILE=/tmp/newfile_${RANDOM}
$ touch $NEWFILE
REPLY

No need to give a variable name for read

$ read
my input
$ echo ${REPLY}
LINENO and SECONDS

Handy for debugging

$ echo ${LINENO}
115
$ echo ${SECONDS}; sleep 1; echo ${SECONDS}; echo $LINENO
174380
174381
116

Note that there are two 'lines' above, even though you used ; to separate the commands.

TMOUT

You can timeout reads, which can be really handy in some scripts

#!/bin/bash
TMOUT=5
echo You have 5 seconds to respond...
read
echo ${REPLY:-noreply}

... ... ...

10) Associative Arrays

Talking of moving to other languages, a rule of thumb I use is that if I need arrays then I drop bash to go to python (I even created a Docker container for a tool to help with this here ).

What I didn't know until I read up on it was that you can have associative arrays in bash.

Type this out for a demo:

$ declare -A MYAA=([one]=1 [two]=2 [three]=3)
$ MYAA[one]="1"
$ MYAA[two]="2"
$ echo $MYAA
$ echo ${MYAA[one]}
$ MYAA[one]="1"
$ WANT=two
$ echo ${MYAA[$WANT]}

Note that this is only available in bashes 4.x+.

... ... ...

[Jul 02, 2020] 7 Bash history shortcuts you will actually use by Ian Miell

Highly recommended!
Notable quotes:
"... The "last argument" one: !$ ..."
"... The " n th argument" one: !:2 ..."
"... The "all the arguments": !* ..."
"... The "last but n " : !-2:$ ..."
"... The "get me the folder" one: !$:h ..."
"... I use "!*" for "all arguments". It doesn't have the flexibility of your approach but it's faster for my most common need. ..."
"... Provided that your shell is readline-enabled, I find it much easier to use the arrow keys and modifiers to navigate through history than type !:1 (or having to remeber what it means). ..."
Oct 02, 2019 | opensource.com

7 Bash history shortcuts you will actually use. Save time on the command line with these essential Bash shortcuts. 02 Oct 2019.

Most guides to Bash history shortcuts exhaustively list every single one available. The problem with that is I would use a shortcut once, then glaze over as I tried out all the possibilities. Then I'd move onto my working day and completely forget them, retaining only the well-known !! trick I learned when I first started using Bash.

So most of them were never committed to memory.

This article outlines the shortcuts I actually use every day. It is based on some of the contents of my book, Learn Bash the Hard Way (you can read a preview of it to learn more).

When people see me use these shortcuts, they often ask me, "What did you do there!?" There's minimal effort or intelligence required, but to really learn them, I recommend using one each day for a week, then moving to the next one. It's worth taking your time to get them under your fingers, as the time you save will be significant in the long run.

1. The "last argument" one: !$

If you only take one shortcut from this article, make it this one. It substitutes in the last argument of the last command into your line.

Consider this scenario:

$ mv /path/to/wrongfile /some/other/place
mv: cannot stat '/path/to/wrongfile': No such file or directory

Ach, I put the wrongfile filename in my command. I should have put rightfile instead.

You might decide to retype the last command and replace wrongfile with rightfile completely. Instead, you can type:

$ mv /path/to/rightfile !$
mv /path/to/rightfile /some/other/place

and the command will work.

There are other ways to achieve the same thing in Bash with shortcuts, but this trick of reusing the last argument of the last command is one I use the most.

2. The " n th argument" one: !:2

Ever done anything like this?

$ tar -cvf afolder afolder.tar
tar: failed to open

Like many others, I get the arguments to tar (and ln ) wrong more often than I would like to admit.


When you mix up arguments like that, you can run:

$ !:0 !:1 !:3 !:2
tar -cvf afolder.tar afolder

and your reputation will be saved.

The last command's items are zero-indexed and can be substituted in with the number after the !: .

Obviously, you can also use this to reuse specific arguments from the last command rather than all of them.

3. The "all the arguments": !*

Imagine I run a command like:

$ grep '(ping|pong)' afile

The arguments are correct; however, I want to match ping or pong in a file, but I used grep rather than egrep .

I start typing egrep , but I don't want to retype the other arguments. So I can use the !:1-$ shortcut to ask for all the arguments to the previous command from the second one (remember they're zero-indexed) to the last one (represented by the $ sign).

$ egrep !:1-$
egrep '(ping|pong)' afile
ping

You don't need to pick 1-$ ; you can pick a subset like 1-2 or 3-9 (if you had that many arguments in the previous command).

4. The "last but n " : !-2:$

The shortcuts above are great when I know immediately how to correct my last command, but often I run commands after the original one, which means that the last command is no longer the one I want to reference.

For example, using the mv example from before, if I follow up my mistake with an ls check of the folder's contents:

$ mv /path/to/wrongfile /some/other/place
mv: cannot stat '/path/to/wrongfile': No such file or directory
$ ls /path/to/
rightfile

I can no longer use the !$ shortcut.

In these cases, I can insert -n: (where n is the number of commands to go back in the history) after the ! to grab the last argument from an older command:

$ mv /path/to/rightfile !-2:$
mv /path/to/rightfile /some/other/place

Again, once you learn it, you may be surprised at how often you need it.

5. The "get me the folder" one: !$:h

This one looks less promising on the face of it, but I use it dozens of times daily.

Imagine I run a command like this:

$ tar -cvf system.tar /etc/system
tar: /etc/system: Cannot stat: No such file or directory
tar: Error exit delayed from previous errors.

The first thing I might want to do is go to the /etc folder to see what's in there and work out what I've done wrong.

I can do this at a stroke with:

$ cd !$:h
cd /etc

This one says: "Get the last argument to the last command ( /etc/system ) and take off its last filename component, leaving only the /etc ."

6. The "the current line" one: !#:1

For years, I occasionally wondered if I could reference an argument on the current line before finally looking it up and learning it. I wish I'd done so a long time ago. I most commonly use it to make backup files:

$ cp /path/to/some/file !#:1.bak
cp /path/to/some/file /path/to/some/file.bak

but once under the fingers, it can be a very quick alternative to retyping the full path.

7. The "search and replace" one: !!:gs

This one searches across the referenced command and replaces what's in the first two / characters with what's in the second two.

Say I want to tell the world that my s key does not work and outputs f instead:

$ echo my f key doef not work
my f key doef not work

Then I realize that I was just hitting the f key by accident. To replace all the f s with s es, I can type:

$ !!:gs/f/s/
echo my s key does not work
my s key does not work

It doesn't work only on single characters; I can replace words or sentences, too:

$ !!:gs/does/did/
echo my s key did not work
my s key did not work

Test them out

Just to show you how these shortcuts can be combined, can you work out what these toenail clippings will output?

$ ping !#:0:gs/i/o
$ vi /tmp/!:0.txt
$ ls !$:h
$ cd !-2:h
$ touch !$!-3:$ !! !$.txt
$ cat !:1-$

Conclusion

Bash can be an elegant source of shortcuts for the day-to-day command-line user. While there are thousands of tips and tricks to learn, these are my favorites that I frequently put to use.

If you want to dive even deeper into all that Bash can teach you, pick up my book, Learn Bash the hard way or check out my online course, Master the Bash shell .


This article was originally posted on Ian's blog, Zwischenzugs.com , and is reused with permission.

Orr, August 25, 2019 at 10:39 pm

BTW – you inspired me to try and understand how to repeat the nth command entered on command line. For example I type 'ls' and then accidentally type 'clear'. !! will retype clear again but I wanted to retype ls instead using a shortcut.
Bash doesn't accept ':' so !:2 didn't work. !-2 did however, thank you!

Dima August 26, 2019 at 7:40 am

Nice article! Just another cool and often-used command: !vi re-runs the last vi command with its arguments.

cbarrick on 03 Oct 2019

Your "current line" example is too contrived. Your example is copying to a backup like this:

$ cp /path/to/some/file !#:1.bak

But a better way to write that is with filename generation:

$ cp /path/to/some/file{,.bak}

That's not a history expansion though... I'm not sure I can come up with a good reason to use `!#:1`.

Darryl Martin August 26, 2019 at 4:41 pm

I seldom get anything out of these "bash commands you didn't know" articles, but you've got some great tips here. I'm writing several down and sticking them on my terminal for reference.

A couple additions I'm sure you know.

  1. I use "!*" for "all arguments". It doesn't have the flexibility of your approach but it's faster for my most common need.
  2. I recently started using Alt-. as a substitute for "!$" to get the last argument. It expands the argument on the line, allowing me to modify it if necessary.

Ricardo J. Barberis on 06 Oct 2019

The problem with bash's history shorcuts for me is... that I never had the need to learn them.

Provided that your shell is readline-enabled, I find it much easier to use the arrow keys and modifiers to navigate through history than type !:1 (or having to remeber what it means).

Examples:

Ctrl+R for a Reverse search
Ctrl+A to move to the begnining of the line (Home key also)
Ctrl+E to move to the End of the line (End key also)
Ctrl+K to Kill (delete) text from the cursor to the end of the line
Ctrl+U to kill text from the cursor to the beginning of the line
Alt+F to move Forward one word (Ctrl+Right arrow also)
Alt+B to move Backward one word (Ctrl+Left arrow also)
etc.

YMMV of course.

[Jul 02, 2020] Some Relatively Obscure Bash Tips zwischenzugs

Jul 02, 2020 | zwischenzugs.com

2) |&

You may already be familiar with 2>&1 , which redirects standard error to standard output, but until I stumbled on it in the manual, I had no idea that you can pipe both standard output and standard error into the next stage of the pipeline like this:

if doesnotexist |& grep 'command not found' >/dev/null
then
  echo oops
fi
3) $''

This construct allows you to specify specific bytes in scripts without fear of triggering some kind of encoding problem. Here's a command that will grep through files looking for UK currency ('£') signs in hexadecimal recursively:

grep -r $'\xc2\xa3' *

You can also use octal:

grep -r $'\302\243' *
4) HISTIGNORE

If you are concerned about security, and ever type in commands that might have sensitive data in them, then this one may be of use.

This environment variable does not put the commands specified in your history file if you type them in. The commands are separated by colons:

HISTIGNORE="ls *:man *:history:clear:AWS_KEY*"

You have to specify the whole line, so a glob character may be needed if you want to exclude commands and their arguments or flags.

5) fc

If readline key bindings aren't under your fingers, then this one may come in handy.

It calls up the last command you ran, and places it into your preferred editor (specified by the EDITOR variable). Once edited, it re-runs the command.

6) ((i++))

If you can't be bothered with faffing around with variables in bash with the $[] construct, you can use the C-style compound command.

So, instead of:

A=1
A=$[$A+1]
echo $A

you can do:

A=1
((A++))
echo $A

which, especially with more complex calculations, might be easier on the eye.
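
For comparison, a small sketch of the two arithmetic forms side by side; $(( )) is the usual modern replacement for the older $[ ] expansion:

A=1
A=$((A + 1))   # arithmetic expansion: usable anywhere a word is expected
((A += 2))     # arithmetic compound command: handy as a standalone statement
echo $A        # prints 4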

7) caller

Another builtin bash command, caller , gives context about your shell's call stack.

SHLVL is a related shell variable which gives the level of depth of the calling stack.

This can be used to create stack traces for more complex bash scripts.

Here's a die function, adapted from the bash hackers' wiki that gives a stack trace up through the calling frames:

#!/bin/bash
die() {
  local frame=0
  ((FRAMELEVEL=SHLVL - frame))
  echo -n "${FRAMELEVEL}: "
  while caller $frame; do
    ((frame++));
    ((FRAMELEVEL=SHLVL - frame))
    if [[ ${FRAMELEVEL} -gt -1 ]]
    then
      echo -n "${FRAMELEVEL}: "
    fi
  done
  echo "$*"
  exit 1
}

which outputs:

3: 17 f1 ./caller.sh
2: 18 f2 ./caller.sh
1: 19 f3 ./caller.sh
0: 20 main ./caller.sh
*** an error occured ***
8) /dev/tcp/host/port

This one can be particularly handy if you find yourself on a container running within a Kubernetes cluster service mesh without any network tools (a frustratingly common experience).

Bash provides you with some virtual files which, when referenced, can create socket connections to other servers.

This snippet, for example, makes a web request to a site and returns the output.

exec 9<>/dev/tcp/brvtsdflnxhkzcmw.neverssl.com/80
echo -e "GET /online HTTP/1.1\r\nHost: brvtsdflnxhkzcmw.neverssl.com\r\n\r\n" >&9
cat <&9

The first line opens up file descriptor 9 to the host brvtsdflnxhkzcmw.neverssl.com on port 80 for reading and writing. Line two sends the raw HTTP request to that socket connection's file descriptor. The final line retrieves the response.

Obviously, this doesn't handle SSL for you, so its use is limited now that pretty much everyone is running on https, but when running from application containers within a service mesh can still prove invaluable, as requests there are initiated using HTTP.

9) Co-processes

Since version 4 of bash it has offered the capability to run named coprocesses.

It seems to be particularly well-suited to managing the inputs and outputs to other processes in a fine-grained way. Here's an annotated and trivial example:

coproc testproc (
  i=1
  while true
  do
    echo "iteration:${i}"
    ((i++))
    read -r aline
    echo "${aline}"
  done
)

This sets up the coprocess as a subshell with the name testproc .

Within the subshell, there's a never-ending while loop that counts its own iterations with the i variable. It outputs two lines: the iteration number, and a line read in from standard input.

After creating the coprocess, bash sets up an array with that name with the file descriptor numbers for the standard input and standard output. So this:

echo "${testproc[@]}"

in my terminal outputs:

63 60

Bash also sets up a variable with the process identifier for the coprocess, which you can see by echoing it:

echo "${testproc_PID}"

You can now input data to the standard input of this coprocess at will like this:

echo input1 >&"${testproc[1]}"

In this case, the command resolves to: echo input1 >&60 , and the >&[INTEGER] construct ensures the redirection goes to the coprocess's standard input.

Now you can read the output of the coprocess's two lines in a similar way, like this:

read -r output1a <&"${testproc[0]}"
read -r output1b <&"${testproc[0]}"

You might use this to create an expect -like script if you were so inclined, but it could be generally useful if you want to manage inputs and outputs. Named pipes are another way to achieve a similar result.

Here's a complete listing for those who want to cut and paste:

#!/bin/bash
coproc testproc (
  i=1
  while true
  do
    echo "iteration:${i}"
    ((i++))
    read -r aline
    echo "${aline}"
  done
)
echo "${testproc[@]}"
echo "${testproc_PID}"
echo input1 >&"${testproc[1]}"
read -r output1a <&"${testproc[0]}"
read -r output1b <&"${testproc[0]}"
echo "${output1a}"
echo "${output1b}"
echo input2 >&"${testproc[1]}"
read -r output2a <&"${testproc[0]}"
read -r output2b <&"${testproc[0]}"
echo "${output2a}"
echo "${output2b}"

[Jul 02, 2020] Associative arrays in Bash by Seth Kenlon

Apr 02, 2020 | opensource.com

Originally from: Get started with Bash scripting for sysadmins - Opensource.com

Most shells offer the ability to create, manipulate, and query indexed arrays. In plain English, an indexed array is a list of things prefixed with a number. This list of things, along with their assigned number, is conveniently wrapped up in a single variable, which makes it easy to "carry" it around in your code.

Bash, however, includes the ability to create associative arrays and treats these arrays the same as any other array. An associative array lets you create lists of key and value pairs, instead of just numbered values.

The nice thing about associative arrays is that keys can be arbitrary:

$ declare -A userdata
$ userdata[name]=seth
$ userdata[pass]=8eab07eb620533b083f241ec4e6b9724
$ userdata[login]=`date --utc +%s`

Query any key:

$ echo "${userdata[name]}"
seth
$ echo "${userdata[login]}"
1583362192

Most of the usual array operations you'd expect from an array are available.
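
Iterating over the keys and checking for a key work much as they do for indexed arrays. A short sketch building on the userdata example above (key order is not guaranteed):

$ for key in "${!userdata[@]}"; do echo "$key -> ${userdata[$key]}"; done
$ echo "${#userdata[@]}"    # number of keys
3
$ [[ -n "${userdata[pass]+isset}" ]] && echo "pass is set"
pass is set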


[Jul 02, 2020] DevOps is a Myth Effective Software Delivery Enablement

Jul 02, 2020 | otomato.link

DevOps is a Myth

Tags : Agile Books DevOps IT management software delivery

Category : Tools (Practitioner's Reflections on The DevOps Handbook) The Holy Wars of DevOps

Yet another argument explodes online around the 'true nature of DevOps', around 'what DevOps really means' or around 'what DevOps is not'. At each conference I attend we talk about DevOps culture, DevOps mindset and DevOps ways. All confirming one single truth – DevOps is a myth.

Now don't get me wrong – in no way is this a negation of its validity or importance. As Y.N. Harari shows so eloquently in his book 'Sapiens' – myths were the forming power in the development of humankind. It is in fact our ability to collectively believe in these non-objective, imagined realities that allows us to collaborate at large scale, to coordinate our actions, to build pyramids, temples, cities and roads.

There's a Handbook!

I am writing this while finishing the exceptionally well written "DevOps Handbook" . If you really want to know what stands behind the all-too-often misinterpreted buzzword – you better read this cover-to-cover. It presents an almost-no-bullshit deep dive into why, how and what in DevOps. And it comes from the folks who invented the term and have been busy developing its main concepts over the last 7 years.


Now notice – I'm only saying you should read the "DevOps Handbook" if you want to understand what DevOps is about. After finishing it I'm pretty sure you won't have any interest in participating in petty arguments along the lines of 'is DevOps about automation or not?'. But I'm not saying you should read the handbook if you want to know how to improve and speed up your software manufacturing and delivery processes. And neither if you want to optimize your IT organization for innovation and continuous improvement.

Because the main realization that you, as a smart reader, will arrive at – is just that there is no such thing as DevOps. DevOps is a myth .

So What's The Story?

It all basically comes down to this: some IT companies achieve better results than others . Better revenues, higher customer and employee satisfaction, faster value delivery, higher quality. There's no one-size-fits-all formula, there is no magic bullet – but we can learn from these high performers and try to apply certain tools and practices in order to improve the way we work and achieve similar or better results. These tools and processes come from a myriad of management theories and practices. Moreover – they are constantly evolving, so we need to always be learning. But at least we have the promise of better life. That is if we get it all right: the people, the architecture, the processes, the mindset, the org structure, etc.

So it's not about certain tools, cause the tools will change. And it's not about certain practices – because we're creative and frameworks come and go. I don't see too many folks using Kanban boards 10 years from now. (In the same way only the laggards use Gantt charts today) And then the speakers at the next fancy conference will tell you it's mainly about culture. And you know what culture is? It's just a story, or rather a collection of stories that a group of people share. Stories that tell us something about the world and about ourselves. Stories that have only a very relative connection to the material world. Stories that can easily be proven as myths by another group of folks who believe them to be wrong.

But Isn't It True?

Anybody who's studied management theories knows how the approaches have changed since the beginning of the last century. From Taylor's scientific management and down to McGregor's X&Y theory they've all had their followers. Managers who've applied them and swore getting great results thanks to them. And yet most of these theories have been proven wrong by their successors.

In the same way we see this happening with DevOps and Agile. Agile was all the buzz since its inception in 2001. Teams were moving to Scrum, then Kanban, now SAFE and LESS. But Agile didn't deliver on its promise of better life. Or rather – it became so commonplace that it lost its edge. Without the hype, we now realize it has its downsides. And we now hope that maybe this new DevOps thing will make us happy.

You may say that the world is changing fast – that's why we now need new approaches! And I agree – the technology, the globalization, the flow of information – they all change the stories we live in. But this also means that whatever is working for someone else today won't probably work for you tomorrow – because the world will change yet again.

Which means that the DevOps Handbook – while a great overview and historical document and a source of inspiration – should not be taken as a guide to action. It's just another step towards establishing the DevOps myth.

And that takes us back to where we started – myths and stories aren't bad in themselves. They help us collaborate by providing a common semantic system and shared goals. But they only work while we believe in them and until a new myth comes around – one powerful enough to grab our attention.

Your Own DevOps Story

So if we agree that DevOps is just another myth, what are we left with? What do we at Otomato and other DevOps consultants and vendors have to sell? Well, it's the same thing we've been building even before the DevOps buzz: effective software delivery and IT management. Based on tools and processes, automation and effective communication. Relying on common sense and on being experts in whatever myth is currently believed to be true.

As I keep saying – culture is a story you tell. And we make sure to be experts in both the storytelling and the actual tooling and architecture. If you're currently looking at creating a DevOps transformation or simply want to optimize your software delivery – give us a call. We'll help to build your authentic DevOps story, to train your staff and to architect your pipeline based on practice, skills and your organization's actual needs. Not based on myths that other people tell.

[Jul 02, 2020] Import functions and variables into Bash with the source command by Seth Kenlon

Jun 12, 2020 | opensource.com
Source is like a Python import or a Java include. Learn it to expand your Bash prowess.

When you log into a Linux shell, you inherit a specific working environment. An environment , in the context of a shell, means that there are certain variables already set for you, which ensures your commands work as intended. For instance, the PATH environment variable defines where your shell looks for commands. Without it, nearly everything you try to do in Bash would fail with a command not found error. Your environment, while mostly invisible to you as you go about your everyday tasks, is vitally important.

There are many ways to affect your shell environment. You can make modifications in configuration files, such as ~/.bashrc and ~/.profile , you can run services at startup, and you can create your own custom commands or script your own Bash functions .

Add to your environment with source

Bash (along with some other shells) has a built-in command called source . And here's where it can get confusing: source performs the same function as the command . (yes, that's but a single dot), and it's not the same source as the Tcl command (which may come up on your screen if you type man source ). The built-in source command isn't in your PATH at all, in fact. It's a command that comes included as a part of Bash, and to get further information about it, you can type help source .

The . command is POSIX -compliant. The source command is not defined by POSIX but is interchangeable with the . command.

According to Bash help , the source command executes a file in your current shell. The clause "in your current shell" is significant, because it means it doesn't launch a sub-shell; therefore, whatever you execute with source happens within and affects your current environment.
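A quick way to see the "in your current shell" behavior for yourself (a minimal sketch; the file name set_var.sh is just for illustration):

$ echo 'MYVAR="hello"' > set_var.sh
$ bash set_var.sh; echo "MYVAR is: $MYVAR"     # runs in a sub-shell, so the variable is lost
MYVAR is:
$ source set_var.sh; echo "MYVAR is: $MYVAR"   # runs in the current shell, so the variable survives
MYVAR is: hello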

Before exploring how source can affect your environment, try source on a test file to ensure that it executes code as expected. First, create a simple Bash script and save it as a file called hello.sh :

#!/usr/bin/env bash
echo "hello world"

Using source , you can run this script even without setting the executable bit:

$ source hello.sh
hello world

You can also use the built-in . command for the same results:

$ . hello.sh
hello world

The source and . commands successfully execute the contents of the test file.

Set variables and import functions

You can use source to "import" a file into your shell environment, just as you might use the include keyword in C or C++ to reference a library or the import keyword in Python to bring in a module. This is one of the most common uses for source , and it's a common default inclusion in .bashrc files to source a file called .bash_aliases so that any custom aliases you define get imported into your environment when you log in.
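For example, many stock ~/.bashrc files contain a block along these lines (a sketch; the exact wording varies by distribution):

if [ -f ~/.bash_aliases ]; then
    . ~/.bash_aliases
fi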

Here's an example of importing a Bash function. First, create a function in a file called myfunctions . This prints your public IP address and your local IP address:

function myip () {
    # public IP address, as reported by an external service
    curl http://icanhazip.com

    # local addresses, with the loopback entries filtered out
    ip addr | grep inet | \
        cut -d "/" -f 1 | \
        grep -v '127\.0' | \
        grep -v '::1' | \
        awk '{$1=$1};1'
}

Import the function into your shell:

$ source myfunctions

Test your new function:

$ myip
93.184.216.34
inet 192.168.0.23
inet6 fbd4:e85f:49c: 2121 :ce12:ef79:0e77:59d1
inet 10.8.42.38

Search for source

When you use source in Bash, it searches your current directory for the file you reference. This doesn't happen in all shells, so check your documentation if you're not using Bash.

If Bash can't find the file to execute, it searches your PATH instead. Again, this isn't the default for all shells, so check your documentation if you're not using Bash.

These are both nice convenience features in Bash. This behavior is surprisingly powerful because it allows you to store common functions in a centralized location on your drive and then treat your environment like an integrated development environment (IDE). You don't have to worry about where your functions are stored, because you know they're in your local equivalent of /usr/include , so no matter where you are when you source them, Bash finds them.

For instance, you could create a directory called ~/.local/include as a storage area for common functions and then put this block of code into your .bashrc file:

for i in "$HOME"/.local/include/*; do
    source "$i"
done

This "imports" any file containing custom functions in ~/.local/include into your shell environment.

Bash is the only shell that searches both the current directory and your PATH when you use either the source or the . command.
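Here's a quick way to verify that PATH lookup for yourself (a rough sketch; demo.sh and the directory name are only illustrative):

$ mkdir -p ~/.local/include
$ echo 'echo "sourced via PATH"' > ~/.local/include/demo.sh
$ export PATH="$PATH:$HOME/.local/include"
$ cd /tmp
$ source demo.sh
sourced via PATH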

Using source for open source

Using source or . to execute files can be a convenient way to affect your environment while keeping your alterations modular. The next time you're thinking of copying and pasting big blocks of code into your .bashrc file, consider placing related functions or groups of aliases into dedicated files, and then use source to ingest them.


[Jul 01, 2020] Use curl to test an application's endpoint or connectivity to an upstream service endpoint

Notable quotes:
"... The -I option shows the header information and the -s option silences the response body. Checking the endpoint of your database from your local desktop: ..."
Jul 01, 2020 | opensource.com

curl

curl transfers a URL. Use this command to test an application's endpoint or connectivity to an upstream service endpoint. curl can be useful for determining if your application can reach another service, such as a database, or checking if your service is healthy.

As an example, imagine your application throws an HTTP 500 error indicating it can't reach a MongoDB database:

$ curl -I -s myapplication:5000
HTTP/1.0 500 INTERNAL SERVER ERROR

The -I option shows the header information and the -s option silences the response body. Checking the endpoint of your database from your local desktop:

$ curl -I -s database:27017
HTTP/1.0 200 OK

So what could be the problem? Check if your application can get to other places besides the database from the application host:

$ curl -I -s https://opensource.com
HTTP/1.1 200 OK

That seems to be okay. Now try to reach the database from the application host. Your application is using the database's hostname, so try that first:

$ curl database:27017
curl: (6) Couldn't resolve host 'database'

This indicates that your application cannot resolve the database because the URL of the database is unavailable or the host (container or VM) does not have a nameserver it can use to resolve the hostname.
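A reasonable follow-up check on the application host (not part of the original excerpt, just a sketch) is to query the resolver directly and confirm that a nameserver is configured at all:

$ getent hosts database || echo "name lookup failed"
$ cat /etc/resolv.conf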

[Jul 01, 2020] Stupid Bash tricks- History, reusing arguments, files and directories, functions, and more by Valentin Bajrami

A moderately interesting example here is changing sudo systemctl status sshd into sudo systemctl start sshd via !!:s/status/start/
It can probably be optimized further so that you do not need to type the substitution in full (the last word can simply be replaced); the author suggests trying !0 stop instead.
Jul 01, 2020 | www.redhat.com

See also Bash bang commands- A must-know trick for the Linux command line - Enable Sysadmin

Let's say I run the following command:

$> sudo systemctl status sshd

Bash tells me the sshd service is not running, so the next thing I want to do is start the service. I had checked its status with my previous command. That command was saved in history , so I can reference it. I simply run:

$> !!:s/status/start/
sudo systemctl start sshd

The above expression has the following content: !! recalls the previous command from history, and :s/status/start/ substitutes the first occurrence of status with start before the recalled command is executed.

The result is that the sshd service is started.

Next, I increase the default HISTSIZE value from 500 to 5000 by using the following command:

$> echo "HISTSIZE=5000" >> ~/.bashrc && source ~/.bashrc

What if I want to display the last three commands in my history? I enter:

$> history 3
 1002  ls
 1003  tail audit.log
 1004  history 3

I run tail on audit.log by referring to the history line number. In this case, I use line 1003:

$> !1003
tail audit.log
Reference the last argument of the previous command

When I want to list directory contents for different directories, I may change between directories quite often. There is a nice trick you can use to refer to the last argument of the previous command. For example:

$> pwd
/home/username/
$> ls some/very/long/path/to/some/directory
foo-file bar-file baz-file

In the above example, some/very/long/path/to/some/directory is the last argument of the previous command.

If I want to cd (change directory) to that location, I enter something like this:

$> cd $_

$> pwd
/home/username/some/very/long/path/to/some/directory

Now I simply use a dash character to go back to where I was:

$> cd -
$> pwd
/home/username/
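A closely related trick, not shown in the excerpt above but standard Bash history expansion, is !$ , which expands to the last argument of the previous command right on the command line:

$> ls some/very/long/path/to/some/directory
foo-file bar-file baz-file
$> cd !$
cd some/very/long/path/to/some/directory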

[Jun 28, 2020] Top 10 Resources to Learn Shell Scripting for Free by Ankush Das

Jun 28, 2020 | itsfoss.com


Top Free Resources to Learn Shell Scripting

Don't have Linux installed on your system? No worries. There are various ways of using a Linux terminal on Windows. You may also use online Linux terminals in some cases to practice shell scripting.

1. Learn Shell [Interactive web portal]

If you're looking for an interactive web portal to learn shell scripting and also try it online, Learn Shell is a great place to start.

It covers the basics and offers some advanced exercises as well. The content is usually brief and to the point – hence, I'd recommend you check it out.

Learn Shell

2. Shell Scripting Tutorial [Web portal]

Shell Scripting Tutorial is a web resource that's completely dedicated to shell scripting. You can read the resource for free or opt to purchase the PDF, book, or e-book to support it.

Of course, paying for the paperback edition or the e-book is optional. But, the resource should come in handy for free.

Shell Scripting Tutorial

3. Shell Scripting – Udemy (Free video course)

Udemy is unquestionably one of the most popular platforms for online courses. And, in addition to the paid certified courses, it also offers some free stuff that does not include certifications.

Shell Scripting is one of the most recommended free courses available on Udemy. You can enroll in it without spending anything.

Shell Scripting – Udemy

4. Bash Shell Scripting – Udemy (Free video course)

Yet another interesting free course focused on bash shell scripting on Udemy. Compared to the previous one, this resource seems to be more popular. So, you can enroll in it and see what it has to offer.

Keep in mind that the free Udemy course does not offer any certification. But it's indeed an impressive free shell scripting learning resource.

5. Bash Academy [online portal with interactive game]

As the name suggests, the bash academy is completely focused on educating the users about bash shell.

It's suitable for both beginners and experienced users, even though it does not offer a lot of content. In addition to the guide, it also used to offer an interactive practice game, which no longer works.

Hence, if this is interesting enough, you can also check out its GitHub page and fork it to improve the existing resources if you want.

Bash Academy

6. Bash Scripting LinkedIn Learning (Free video course)

LinkedIn offers a number of free courses to help you improve your skills and get ready for more job opportunities. You will also find a couple of courses focused on shell scripting to brush up some basic skills or gain some advanced knowledge in the process.

Here, I've linked a course for bash scripting; you can find other similar courses for free as well.

Bash Scripting (LinkedIn Learning)

7. Advanced Bash Scripting Guide [Free PDF book]

An impressive advanced Bash scripting guide, available as a free PDF. This PDF resource does not enforce any copyright and is completely free in the public domain.

Even though the resource is focused on providing advanced insights, it is also suitable for beginners who want to refer to it and start learning shell scripting.

Advanced Bash Scripting Guide [PDF]

8. Bash Notes for Professionals [Free PDF book]

This is a good reference guide if you are already familiar with Bash shell scripting or if you just want a quick summary.

This free downloadable book runs over 100 pages and covers a wide variety of scripting topics with the help of brief descriptions and quick examples.

Download Bash Notes for Professionals

9. Tutorialspoint [Web portal]

Tutorialspoint is quite a popular web portal for learning a variety of programming languages. I would say it is quite good for starters learning the fundamentals and the basics.

This may not be suitable as a detailed resource -- but it should be a useful one for free.

Tutorialspoint

10. City College of San Francisco Online Notes [Web portal]

This may not be the best free resource there is -- but if you're ready to explore every type of resource to learn shell scripting, why not refer to the online notes of City College of San Francisco?

I came across this with a random search on the Internet about shell scripting resources.

Again, it's important to note that the online notes could be a bit dated. But, it should be an interesting resource to explore.

City College of San Francisco Notes

Honorable mention: Linux Man Page

Not to forget, the man page for bash should also be a fantastic free resource to explore more about the commands and how they work.

Even if it's not tailored as something that lets you master shell scripting, it is still an important web resource that you can use for free. You can either choose to visit the man page online or just head to the terminal and type the following command to get help:

man bash
Wrapping Up

There are also a lot of popular paid resources just like some of the best Linux books available out there. It's easy to start learning about shell scripting using some free resources available across the web.

In addition to the ones I've mentioned, I'm sure there must be numerous other resources available online to help you learn shell scripting.

Do you like the resources mentioned above? Also, if you're aware of a fantastic free resource that I possibly missed, feel free to tell me about it in the comments below.



[Jun 28, 2020] Restaurant Of The Future - KFC Unveils Automated Store With Robots And Food Lockers

Jun 28, 2020 | www.zerohedge.com

"Restaurant Of The Future" - KFC Unveils Automated Store With Robots And Food Lockers by Tyler Durden Fri, 06/26/2020 - 22:05 Fast-food chain Kentucky Fried Chicken (KFC) has debuted the "restaurant of the future," one where automation dominates the storefront, and little to no interaction is seen between customers and employees, reported NBC News .

After the chicken is fried and sides are prepped by humans, the order is placed on a conveyor belt and travels to the front of the store. A robotic arm waits for the order to arrive, then grabs it off the conveyor belt and places it into a secured food locker.

KFC Moscow robotic-arm takes the order off the conveyor belt

Customers use their credit/debit cards and or the facial recognition system on the food locker to retrieve their order.

KFC Moscow food locker

A KFC representative told NBC News that the new store is located in Moscow and was built months before the virus outbreak. The representative said the contactless store is the future of frontend fast-food restaurants because it's more sanitary.

KFC Moscow storefront

Eliminating human cashiers and order preppers at the front of fast-food stores will be the next big trend in the industry through 2030. Making these restaurants contactless between customers and employees will lower the probability of transmitting the virus.

Automating the frontend of a fast-food restaurant will come at a tremendous cost, that is, significant job loss. Nationwide (as of 2018), there were around 3.8 million people employed at fast-food restaurants. Automation and artificial intelligence are set to displace millions of jobs in the years ahead.

As for the new automated KFC restaurant in Moscow, well, it's a glimpse of what is coming to America - this will lead to the widespread job loss that will force politicians to unveil universal basic income .

[Jun 26, 2020] Vim show line numbers by default on Linux

Notable quotes:
"... Apart from regular absolute line numbers, Vim supports relative and hybrid line numbers too to help navigate around text files. The 'relativenumber' vim option displays the line number relative to the line with the cursor in front of each line. Relative line numbers help you use the count you can precede some vertical motion commands with, without having to calculate it yourself. ..."
"... We can enable both absolute and relative line numbers at the same time to get "Hybrid" line numbers. ..."
Feb 29, 2020 | www.cyberciti.biz

How do I show line numbers in Vim by default on Linux? Vim (Vi IMproved) is not just a free text editor; it is the number one editor for Linux sysadmin and software development work.

By default, Vim doesn't show line numbers on Linux and Unix-like systems; however, we can turn it on using the following instructions. My experience shows that line numbers are useful for debugging shell scripts, program code, and configuration files. Let us see how to display the line number in vim permanently.

Vim show line numbers by default

Turn on absolute line numbering by default in vim:

  1. Open vim configuration file ~/.vimrc by typing the following command:
    vim ~/.vimrc
  2. Append set number
  3. Press the Esc key
  4. To save the config file, type :w and hit Enter key
  5. You can temporarily disable the absolute line numbers within a vim session, type:
    :set nonumber
  6. Want to re-enable the absolute line numbers within a vim session? Try:
    :set number
  7. We can see vim line numbers on the left side.
Relative line numbers

Apart from regular absolute line numbers, Vim supports relative and hybrid line numbers too to help navigate around text files. The 'relativenumber' vim option displays the line number relative to the line with the cursor in front of each line. Relative line numbers help you use the count you can precede some vertical motion commands with, without having to calculate it yourself. Once again edit the ~/.vimrc, run:
vim ~/.vimrc
Finally, turn relative line numbers on:
set relativenumber
Save and close the file in vim text editor.
VIM relative line numbers

How to show "Hybrid" line numbers in Vim by default

What happens when you put the following two config directives in ~/.vimrc ?
set number
set relativenumber

That is right. We can enable both absolute and relative line numbers at the same time to get "Hybrid" line numbers.
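If you prefer to do it from the shell, a one-liner like the following appends both directives to ~/.vimrc (a small sketch; it assumes the options are not already set elsewhere in the file):

printf 'set number\nset relativenumber\n' >> ~/.vimrc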

Conclusion

Today we learned about permanent line number settings for the vim text editor. By adding the "set number" config directive in the Vim configuration file named ~/.vimrc, we forced vim to show line numbers each time vim starts. See the vim documentation for more info.

[Jun 26, 2020] Taking a deeper dive into Linux chroot jails by Glen Newell

Notable quotes:
"... New to Linux containers? Download the Containers Primer and learn the basics. ..."
Mar 02, 2020 | www.redhat.com

Dive deeper into the chroot command and learn how to isolate specific services and specific users.


In part one, How to setup Linux chroot jails, I covered the chroot command and you learned to use the chroot wrapper in sshd to isolate the sftpusers group. When you edit sshd_config to invoke the chroot wrapper and give it matching characteristics, sshd executes certain commands within the chroot jail or wrapper. You saw how this technique could potentially be useful to implement contained, rather than secure, access for remote users.

Expanded example

I'll start by expanding on what I did before, partly as a review. Start by setting up a custom directory for remote users. I'll use the sftpusers group again.

Start by creating the custom directory that you want to use, and setting the ownership:

# mkdir -p /sftpusers/chroot
# chown root:root /sftpusers/chroot

This time, make root the owner, rather than the sftpusers group. This way, when you add users, they don't start out with permission to see the whole directory.

Next, create the user you want to restrict (you need to do this for each user in this case), add the new user to the sftpusers group, and deny a login shell because these are sftp users:

# useradd sanjay -g sftpusers -s /sbin/nologin
# passwd sanjay

Then, create the directory for sanjay and set the ownership and permissions:

# mkdir /sftpusers/chroot/sanjay
# chown sanjay:sftpusers /sftpusers/chroot/sanjay
# chmod 700 /sftpusers/chroot/sanjay

Next, edit the sshd_config file. First, comment out the existing subsystem invocation and add the internal one:

#Subsystem sftp /usr/libexec/openssh/sftp-server
Subsystem sftp internal-sftp

Then add our match case entry:

Match Group sftpusers
ChrootDirectory /sftpusers/chroot/
ForceCommand internal-sftp
X11Forwarding no
AllowTCPForwarding no

Note that you're back to specifying a directory, but this time, you have already set the ownership to prevent sanjay from seeing anyone else's stuff. That trailing / is also important.
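Before restarting, it can be worth validating the modified configuration. This check is not part of the original walkthrough, but sshd ships with a test mode:

# sshd -t

It prints nothing when sshd_config parses cleanly and reports the offending line otherwise.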

Then, restart sshd and test:

[skipworthy@milo ~]$ sftp sanjay@showme
sanjay@showme's password:
Connected to sanjay@showme.
sftp> ls
sanjay
sftp> pwd
Remote working directory: /
sftp> cd ..
sftp> ls
sanjay
sftp> touch test
Invalid command.

So. Sanjay can only see his own folder and needs to cd into it to do anything useful.

Isolating a service or specific user

Now, what if you want to provide a usable shell environment for a remote user, or create a chroot jail environment for a specific service? To do this, create the jailed directory and the root filesystem, and then create links to the tools and libraries that you need. Doing all of this is a bit involved, but Red Hat provides a script and basic instructions that make the process easier.
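To get a sense of what the manual route involves before turning to the script, a rough sketch (illustrative only; paths depend on your distribution) is to copy a binary plus every library that ldd reports for it into the jail:

# mkdir -p /chroot/bin
# cp /bin/bash /chroot/bin/
# for lib in $(ldd /bin/bash | grep -o '/lib[^ ]*'); do cp --parents "$lib" /chroot/; done

The yum-based approach below does the equivalent work for whole packages, dependencies included.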

Note: I've tested the following in Red Hat Enterprise Linux 7 and 8, though my understanding is that this capability was available in Red Hat Enterprise Linux 6. I have no reason to think that this script would not work in Fedora, CentOS or any other Red Hat distro, but your mileage (as always) may vary.

First, make your chroot directory:

# mkdir /chroot

Then run the script from yum that installs the necessary bits:

# yum --releasever=/ --installroot=/chroot install iputils vim python

The --releasever=/ flag passes the current local release info to initialize a repo in the new --installroot , which defines where the new install location is. In theory, you could make a chroot jail that was based on any version of the yum or dnf repos (the script will, however, still start with the current system repos).

With this tool, you install basic networking utilities (iputils) along with the VIM editor and Python. You could add other things initially if you want to, including whatever service you want to run inside this jail. This is also one of the cool things about yum and dependencies. As part of the dependency resolution, yum makes the necessary additions to the filesystem tree along with the libraries. It does, however, leave out a couple of things that you need to add next. I'll get to that in a moment.

By now, the packages and the dependencies have been installed, and a new GPG key was created for this new repository in relation to this new root filesystem. Next, mount your ephemeral filesystems:

# mount -t proc proc /chroot/proc/
# mount -t sysfs sys /chroot/sys/

And set up your dev bindings:

# mount -o bind /dev/pts /chroot/dev/pts

Note that these mounts will not survive a reboot this way, but this setup will let you test and play with a chroot jail environment.
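If you later want the mounts to come back after a reboot, entries along these lines in the host's /etc/fstab would do it (a sketch based on the directory layout used above; not part of the original article):

proc       /chroot/proc      proc    defaults   0 0
sysfs      /chroot/sys       sysfs   defaults   0 0
/dev/pts   /chroot/dev/pts   none    bind       0 0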

Now, test to check that everything is working as you expect:

# chroot /chroot
bash-4.2# ls
bin dev home lib64 mnt proc run srv tmp var boot etc lib media opt root sbin sys usr

You can see that the filesystem and libraries were successfully added:

bash-4.2# pwd
/
bash-4.2# cd ..

From here, you see the correct root and can't navigate up:

bash-4.2# exit
exit
#

Now you've exited the chroot wrapper, which is expected because you entered it from a local login shell as root. Normally, a remote user should not be able to do this, as you saw in the sftp example:

[skipworthy@milo ~]$ ssh root@showme
root@showme's password:
[root@showme1 ~]# chroot /chroot
bash-4.2#

Note that these directories were all created by root, so that's who owns them. Now, add this chroot to the sshd_config , because this time you will match just this user:

Match User leo
ChrootDirectory /chroot

Then, restart sshd .

You also need to copy the /etc/passwd and /etc/group files from the host system to the /chroot directory:

[root@showme1 ~]# cp -vf /etc/{passwd,group} /chroot/etc/

Note: If you skip the step above, you can log in, but the result will be unreliable and you'll be prone to errors related to conflicting logins.

Now for the test:

[skipworthy@milo ~]$ ssh leo@showme
leo@showme's password:
Last login: Thu Jan 30 19:35:36 2020 from 192.168.0.20
-bash-4.2$ ls
-bash-4.2$ pwd
/home/leo

It looks good. Now, can you find something useful to do? Let's have some fun:

[root@showme1 ~]# yum --releasever=/ --installroot=/chroot install httpd

You could drop the releasever=/ , but I like to leave that in because it leaves fewer chances for unexpected results.

[root@showme1 ~]# chroot /chroot
bash-4.2# ls /etc/httpd
conf conf.d conf.modules.d logs modules run
bash-4.2# python
Python 2.7.5 (default, Aug 7 2019, 00:51:29)

So, httpd is there if you want it, but just to demonstrate you can use a quick one-liner from Python, which you also installed:

bash-4.2# python -m SimpleHTTPServer 8000
Serving HTTP on 0.0.0.0 port 8000 ...

And now you have a simple webserver running in a chroot jail. In theory, you can run any number of services from inside the chroot jail and keep them 'contained' and away from other services, allowing you to expose only a part of a larger resource environment without compromising your user's experience.

New to Linux containers? Download the Containers Primer and learn the basics.

[Jun 10, 2020] A Beginners Guide to Snaps in Linux - Part 1 by Aaron Kili

Jun 05, 2020 | www.tecmint.com
In the past few years, the Linux community has been blessed with some remarkable advancements in the area of package management on Linux systems, especially when it comes to universal or cross-distribution software packaging and distribution. One such advancement is the Snap package format developed by Canonical, the makers of the popular Ubuntu Linux.

What are Snap Packages?

Snaps are cross-distribution, dependency-free, and easy to install applications packaged with all their dependencies to run on all major Linux distributions. From a single build, a snap (application) will run on all supported Linux distributions on desktop, in the cloud, and IoT. Supported distributions include Ubuntu, Debian, Fedora, Arch Linux, Manjaro, and CentOS/RHEL.

Snaps are secure – they are confined and sandboxed so that they do not compromise the entire system. They run under different confinement levels (which is the degree of isolation from the base system and each other). More notably, every snap has an interface carefully selected by the snap's creator, based on the snap's requirements, to provide access to specific system resources outside of their confinement such as network access, desktop access, and more.

Another important concept in the snap ecosystem is Channels . A channel determines which release of a snap is installed and tracked for updates and it consists of and is subdivided by, tracks, risk-levels, and branches.

The main components of the snap package management system are: snapd (the background service that manages and maintains your snaps), snap (both the application package format and the command-line tool), snapcraft (the command and framework for building your own snaps), and the snap store (where snaps are published and installed from).

Besides, snaps also update automatically. You can configure when and how updates occur. By default, the snapd daemon checks for updates up to four times a day: each update check is called a refresh . You can also manually initiate a refresh.
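For example, the refresh schedule can be inspected and constrained with standard snap commands (shown here as a sketch, not taken from the original article; the time window is arbitrary):

$ snap refresh --time
$ sudo snap set system refresh.timer=fri,23:00-01:00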

How to Install Snapd in Linux

As described above, the snapd daemon is the background service that manages and maintains your snap environment on a Linux system, by implementing the confinement policies and controlling the interfaces that allow snaps to access specific system resources. It also provides the snap command and serves many other purposes.

To install the snapd package on your system, run the appropriate command for your Linux distribution.

------------ [On Debian and Ubuntu] ------------ 
$ sudo apt update 
$ sudo apt install snapd

------------ [On Fedora Linux] ------------
# dnf install snapd                     

------------ [On CentOS and RHEL] ------------
# yum install epel-release 
# yum install snapd             

------------ [On openSUSE - replace openSUSE_Leap_15.0 with the version] ------------
$ sudo zypper addrepo --refresh https://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_15.0 snappy
$ sudo zypper --gpg-auto-import-keys refresh
$ sudo zypper dup --from snappy
$ sudo zypper install snapd

------------ [On Manjaro Linux] ------------
# pacman -S snapd

------------ [On Arch Linux] ------------
# git clone https://aur.archlinux.org/snapd.git
# cd snapd
# makepkg -si

After installing snapd on your system, enable the systemd unit that manages the main snap communication socket, using the systemctl commands as follows.

On Ubuntu and its derivatives, this should be triggered automatically by the package installer.

$ sudo systemctl enable --now snapd.socket

Note that you can't run the snap command if the snapd.socket is not running. Run the following commands to check if it is active and is enabled to automatically start at system boot.

$ sudo systemctl is-active snapd.socket
$ sudo systemctl status snapd.socket
$ sudo systemctl is-enabled snapd.socket

Check Snapd Service Status

Next, enable classic snap support by creating a symbolic link between /var/lib/snapd/snap and /snap as follows.

$ sudo ln -s /var/lib/snapd/snap /snap

To check the version of snapd and snap command-line tool installed on your system, run the following command.

$ snap version

Check Snapd and Snap Version

How to Install Snaps in Linux

The snap command allows you to install, configure, refresh and remove snaps, and interact with the larger snap ecosystem.

Before installing a snap , you can check if it exists in the snap store. For example, if the application belongs in the category of " chat servers " or " media players ", you can run these commands to search for it, which will query the store for available packages in the stable channel.

$ snap find "chat servers"
$ snap find "media players"

Find Applications in Snap Store

To show detailed information about a snap , for example, rocketchat-server , you can specify its name or path. Note that names are looked for both in the snap store and in the installed snaps.

$ snap info rocketchat-server

Get Info About Application in Snap

To install a snap on your system, for example, rocketchat-server , run the following command. If no options are provided, a snap is installed tracking the " stable " channel, with strict security confinement.

$ sudo snap install rocketchat-server

Install Application from Snap Store

You can opt to install from a different channel: edge , beta , or candidate , for one reason or the other, using the --edge , --beta , or --candidate options respectively. Or use the --channel option and specify the channel you wish to install from.

$ sudo snap install --edge rocketchat-server        
$ sudo snap install --beta rocketchat-server
$ sudo snap install --candidate rocketchat-server
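If the snap is already installed, you can also switch the channel it tracks at refresh time (standard snap usage, sketched here with the same example package):

$ sudo snap refresh --channel=beta rocketchat-server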
Manage Snaps in Linux

In this section, we will learn how to manage snaps on a Linux system.

Viewing Installed Snaps

To display a summary of snaps installed on your system, use the following command.

$ snap list

List Installed Snaps

To list the current revision of a snap being used, specify its name. You can also list all its available revisions by adding the --all option.

$ snap list mailspring
OR
$ snap list --all mailspring

List All Installation Versions of Snap

Updating and Reverting Snaps

You can update a specified snap, or all snaps in the system if none are specified as follows. The refresh command checks the channel being tracked by the snap and it downloads and installs a newer version of the snap if it is available.

$ sudo snap refresh mailspring
OR
$ sudo snap refresh             #update all snaps on the local system

Refresh a Snap

After updating an app to a new version, you can revert to a previously used version using the revert command. Note that the data associated with the software will also be reverted.

$ sudo snap revert mailspring

Revert a Snap to Older Version

Now when you check all revisions of mailspring, the latest revision is disabled and a previously used revision is now active.

$ snap list --all mailspring

Check Revision of Snap

Disabling/Enabling and Removing Snaps

You can disable a snap if you do not want to use it. When disabled, a snap's binaries and services will no longer be available; however, all the data will still be there.

$ sudo snap disable mailspring

If you need to use the snap again, you can enable it back.

$ sudo snap enable mailspring

To completely remove a snap from your system, use the remove command. By default, all of a snap's revisions are removed.

$ sudo snap remove mailspring

To remove a specific revision, use the --revision option as follows.

$ sudo snap remove  --revision=482 mailspring

It is key to note that when you remove a snap, its data (such as internal user, system, and configuration data) is saved by snapd (version 2.39 and higher) as a snapshot and stored on the system for 31 days. If you reinstall the snap within those 31 days, you can restore the data.
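
Those snapshots can be listed and restored with snapd's snapshot sub-commands. A brief sketch (the snapshot set ID shown here is only an example; check the output of snap saved for the real one):

$ snap saved
$ sudo snap restore 1 mailspring
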

Conclusion

Snaps are becoming more popular within the Linux community as they provide an easy way to install software on any Linux distribution. In this guide, we have shown how to install and work with snaps in Linux. We covered how to install snapd, install snaps, view installed snaps, update and revert snaps, and disable/enable and remove snaps.

You can ask questions or reach us via the feedback form below. In the next part of this guide, we will cover managing snaps (commands, aliases, services, and snapshots) in Linux.

[May 31, 2020] Eye-catching advances in some AI fields are not real Science AAAS

May 31, 2020 | www.sciencemag.org

Just_Super/iStock.com
Eye-catching advances in some AI fields are not real

By Matthew Hutson May. 27, 2020 , 12:05 PM

Artificial intelligence (AI) just seems to get smarter and smarter. Each iPhone learns your face, voice, and habits better than the last, and the threats AI poses to privacy and jobs continue to grow. The surge reflects faster chips, more data, and better algorithms. But some of the improvement comes from tweaks rather than the core innovations their inventors claim -- and some of the gains may not exist at all, says Davis Blalock, a computer science graduate student at the Massachusetts Institute of Technology (MIT). Blalock and his colleagues compared dozens of approaches to improving neural networks -- software architectures that loosely mimic the brain. "Fifty papers in," he says, "it became clear that it wasn't obvious what the state of the art even was."

The researchers evaluated 81 pruning algorithms, programs that make neural networks more efficient by trimming unneeded connections. All claimed superiority in slightly different ways. But they were rarely compared properly -- and when the researchers tried to evaluate them side by side, there was no clear evidence of performance improvements over a 10-year period. The result, presented in March at the Machine Learning and Systems conference, surprised Blalock's Ph.D. adviser, MIT computer scientist John Guttag, who says the uneven comparisons themselves may explain the stagnation. "It's the old saw, right?" Guttag said. "If you can't measure something, it's hard to make it better."

Researchers are waking up to the signs of shaky progress across many subfields of AI. A 2019 meta-analysis of information retrieval algorithms used in search engines concluded the "high-water mark was actually set in 2009." Another study in 2019 reproduced seven neural network recommendation systems, of the kind used by media streaming services. It found that six failed to outperform much simpler, nonneural algorithms developed years before, when the earlier techniques were fine-tuned, revealing "phantom progress" in the field. In another paper posted on arXiv in March, Kevin Musgrave, a computer scientist at Cornell University, took a look at loss functions, the part of an algorithm that mathematically specifies its objective. Musgrave compared a dozen of them on equal footing, in a task involving image retrieval, and found that, contrary to their developers' claims, accuracy had not improved since 2006. "There's always been these waves of hype," Musgrave says.

Gains in machine-learning algorithms can come from fundamental changes in their architecture, loss function, or optimization strategy -- how they use feedback to improve. But subtle tweaks to any of these can also boost performance, says Zico Kolter, a computer scientist at Carnegie Mellon University who studies image-recognition models trained to be immune to " adversarial attacks " by a hacker. An early adversarial training method known as projected gradient descent (PGD), in which a model is simply trained on both real and deceptive examples, seemed to have been surpassed by more complex methods. But in a February arXiv paper , Kolter and his colleagues found that all of the methods performed about the same when a simple trick was used to enhance them.

Old dogs, new tricks

After modest tweaks, old image-retrieval algorithms perform as well as new ones, suggesting little actual innovation.

[Chart: accuracy scores (0-100) for Contrastive (2006), ProxyNCA (2017), and SoftTriple (2019), showing original versus tweaked performance. (GRAPHIC) X. Liu/Science; (DATA) Musgrave et al., arXiv:2003.08505]

"That was very surprising, that this hadn't been discovered before," says Leslie Rice, Kolter's Ph.D. student. Kolter says his findings suggest innovations such as PGD are hard to come by, and are rarely improved in a substantial way. "It's pretty clear that PGD is actually just the right algorithm," he says. "It's the obvious thing, and people want to find overly complex solutions."

Other major algorithmic advances also seem to have stood the test of time. A big breakthrough came in 1997 with an architecture called long short-term memory (LSTM), used in language translation. When properly trained, LSTMs matched the performance of supposedly more advanced architectures developed 2 decades later. Another machine-learning breakthrough came in 2014 with generative adversarial networks (GANs), which pair networks in a create-and-critique cycle to sharpen their ability to produce images, for example. A 2018 paper reported that with enough computation, the original GAN method matches the abilities of methods from later years.

Kolter says researchers are more motivated to produce a new algorithm and tweak it until it's state-of-the-art than to tune an existing one. The latter can appear less novel, he notes, making it "much harder to get a paper from."

Guttag says there's also a disincentive for inventors of an algorithm to thoroughly compare its performance with others -- only to find that their breakthrough is not what they thought it was. "There's a risk to comparing too carefully." It's also hard work: AI researchers use different data sets, tuning methods, performance metrics, and baselines. "It's just not really feasible to do all the apples-to-apples comparisons."

Some of the overstated performance claims can be chalked up to the explosive growth of the field, where papers outnumber experienced reviewers. "A lot of this seems to be growing pains ," Blalock says. He urges reviewers to insist on better comparisons to benchmarks and says better tools will help. Earlier this year, Blalock's co-author, MIT researcher Jose Gonzalez Ortiz, released software called ShrinkBench that makes it easier to compare pruning algorithms.

Researchers point out that even if new methods aren't fundamentally better than old ones, the tweaks they implement can be applied to their forebears. And every once in a while, a new algorithm will be an actual breakthrough. "It's almost like a venture capital portfolio," Blalock says, "where some of the businesses are not really working, but some are working spectacularly well."

[May 21, 2020] Watchman - A File and Directory Watching Tool for Changes

May 21, 2020 | www.tecmint.com

Watchman – A File and Directory Watching Tool for Changes

by Aaron Kili | Published: March 14, 2019 | Last Updated: April 7, 2020

Watchman is an open-source and cross-platform file watching service that watches files and records changes or performs actions when they change. It is developed by Facebook and runs on Linux, OS X, FreeBSD, and Solaris. It runs in a client-server model and employs the inotify facility of the Linux kernel to provide more powerful notifications.

In this article, we will explain how to install and use watchman to watch (monitor) files and record when they change in Linux. We will also briefly demonstrate how to watch a directory and invoke a script when it changes.

Installing Watchman File Watching Service in Linux

We will install the watchman service from source, so first install these required dependencies: libssl-dev, autoconf, automake, libtool, setuptools, python-devel, and libfolly, using the following command for your Linux distribution.

----------- On Debian/Ubuntu ----------- 
$ sudo apt install autoconf automake build-essential python-setuptools python-dev libssl-dev libtool 

----------- On RHEL/CentOS -----------
# yum install autoconf automake python-setuptools python-devel openssl-devel libssl-devel libtool 
# yum groupinstall 'Development Tools' 

----------- On Fedora -----------
$ sudo dnf install autoconf automake python-setuptools openssl-devel libssl-devel libtool 
$ sudo dnf groupinstall 'Development Tools'

Once the required dependencies are installed, you can start building watchman by downloading its GitHub repository, moving into the local repository, then configuring, building, and installing it using the following commands.

$ git clone https://github.com/facebook/watchman.git
$ cd watchman
$ git checkout v4.9.0  
$ ./autogen.sh
$ ./configure
$ make
$ sudo make install
Watching Files and Directories with Watchman in Linux

Watchman can be configured in two ways: (1) via the command line while the daemon is running in the background, or (2) via a configuration file written in JSON format.
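
As an illustration of the second approach, a minimal .watchmanconfig placed in the root of the watched tree might look like this (the ignored directories are only examples, not part of the original article):

{
  "ignore_dirs": [".git", "node_modules"]
}
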

To watch a directory (e.g ~/bin ) for changes, run the following command.

$ watchman watch ~/bin/

Watch a Directory in Linux

The previous command writes a configuration file called state under /usr/local/var/run/watchman/<username>-state/, in JSON format, as well as a log file called log in the same location.

You can view the two files using the cat command as shown.

$ cat /usr/local/var/run/watchman/aaronkilik-state/state
$ cat /usr/local/var/run/watchman/aaronkilik-state/log

You can also define what action to trigger when a directory being watched changes. For example, in the following command, 'test-trigger' is the name of the trigger and ~/bin/pav.sh is the script that will be invoked when changes are detected in the directory being monitored.

For test purposes, the pav.sh script simply creates a file with a timestamp (i.e., file.$time.txt) within the same directory where the script is stored.

#!/bin/bash
# create an empty file named with the current timestamp
time=`date +%Y-%m-%d.%H:%M:%S`
touch file.$time.txt

Save the file and make the script executable as shown.

$ chmod +x ~/bin/pav.sh

To launch the trigger, run the following command.

$ watchman -- trigger ~/bin 'test-trigger' -- ~/bin/pav.sh

Create a Trigger on Directory

When you execute watchman to keep an eye on a directory, it is added to the watch list; to view the list, run the following command.

$ watchman watch-list

View Watch List

To view the trigger list for a root , run the following command (replace ~/bin with the root name).

$ watchman trigger-list ~/bin

Show Trigger List for a Root

Based on the above configuration, each time the ~/bin directory changes, a file such as file.2019-03-13.23:14:17.txt is created inside it, and you can view these files using the ls command.

$ ls
Test Watchman Configuration
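
Once the trigger or the watch is no longer needed, it can be removed with watchman's trigger-del and watch-del sub-commands. A short sketch using the same root and trigger name as above:

$ watchman trigger-del ~/bin test-trigger
$ watchman watch-del ~/bin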

Uninstalling Watchman Service in Linux

If you want to uninstall watchman , move into the source directory and run the following commands:

$ sudo make uninstall
$ cd '/usr/local/bin' && rm -f watchman 
$ cd '/usr/local/share/doc/watchman-4.9.0' && rm -f README.markdown

For more information, visit the Watchman Github repository: https://github.com/facebook/watchman .

You might also like to read these following related articles.

  1. Swatchdog – Simple Log File Watcher in Real-Time in Linux
  2. 4 Ways to Watch or Monitor Log Files in Real Time
  3. fswatch – Monitors Files and Directory Changes in Linux
  4. Pyintify – Monitor Filesystem Changes in Real Time in Linux
  5. Inav – Watch Apache Logs in Real Time in Linux

Watchman is an open source file watching service that watches files and records, or triggers actions, when they change. Use the feedback form below to ask questions or share your thoughts with us.


[May 20, 2020] The mktemp Command Tutorial With Examples For Beginners

May 20, 2020 | www.ostechnix.com

Mktemp is part of the GNU coreutils package, so you don't need to install it. Let's look at some practical examples now.

To create a new temporary file, simply run:

$ mktemp

You will see an output like below:

/tmp/tmp.U0C3cgGFpk

How To Create temporary file using mktemp command in Linux

As you see in the output, a new temporary file with the random name "tmp.U0C3cgGFpk" is created in the /tmp directory. This file is just an empty file.

You can also create a temporary file with a specified suffix. The following command will create a temporary file with ".txt" extension:

$ mktemp --suffix ".txt"
/tmp/tmp.sux7uKNgIA.txt

How about a temporary directory? Yes, that is also possible! To create a temporary directory, use the -d option.

$ mktemp -d

This will create a random empty directory in /tmp folder.

Sample output:

/tmp/tmp.PE7tDnm4uN

Create temporary directory using mktemp command in Linux

All files will be created with u+rw permissions, and directories with u+rwx, minus umask restrictions. In other words, the resulting file will have read and write permissions for the current user, but no permissions for the group or others. And the resulting directory will have read, write, and execute permissions for the current user, but no permissions for the group or others.

You can verify the file permissions using "ls" command:

$ ls -al /tmp/tmp.U0C3cgGFpk
-rw------- 1 sk sk 0 May 14 13:20 /tmp/tmp.U0C3cgGFpk

Verify the directory permissions using "ls" command:

$ ls -ld /tmp/tmp.PE7tDnm4uN
drwx------ 2 sk sk 4096 May 14 13:25 /tmp/tmp.PE7tDnm4uN

Check file and directory permissions in Linux



Create temporary files or directories with custom names using mktemp command

As I already said, all files and directories are created with random names. We can also create a temporary file or directory with a custom name. To do so, simply add at least 3 consecutive 'X's at the end of the file name like below.

$ mktemp ostechnixXXX
ostechnixq70

Similarly, to create directory, just run:

$ mktemp -d ostechnixXXX
ostechnixcBO

Please note that if you choose a custom name, the files/directories will be created in the current working directory, not in /tmp. In this case, you need to clean them up manually.

Also, as you may have noticed, the X's in the file name are replaced with random characters. You can, however, add any suffix of your choice.

For instance, I want to add "blog" at the end of the filename. Hence, my command would be:

$ mktemp ostechnixXXX --suffix=blog
ostechnixZuZblog

Now we do have the suffix "blog" at the end of the filename.

If you don't want to create any file or directory, you can simply perform a dry run like below.

$ mktemp -u
/tmp/tmp.oK4N4U6rDG

For help, run:

$ mktemp --help
Why do we actually need mktemp?

You might wonder why we need "mktemp" when we can easily create empty files using the "touch filename" command. The mktemp command is mainly used for creating temporary files/directories with random names, so we don't need to bother figuring out the names ourselves. Since mktemp randomizes the names, there won't be any name collisions. Also, mktemp creates files safely with permission 600 (rw) and directories with permission 700 (rwx), so other users can't access them. For more details, check the man pages.

$ man mktemp
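
As a closing illustration, here is a minimal sketch of how mktemp is typically used inside a shell script, with a trap to remove the temporary file on exit (the variable name and the sample data are only illustrative):

#!/bin/bash
# create a private temporary file; abort if mktemp fails
tmpfile=$(mktemp) || exit 1
# remove the file automatically when the script exits
trap 'rm -f "$tmpfile"' EXIT
echo "working data" > "$tmpfile"
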

[May 08, 2020] Avaaz and We came, we saw, he died (cackle)... Assad must go... Promoting chaos....Cui bono?

Notable quotes:
"... Avaaz supported the establishment of a no-fly zone over Libya, which led to the military intervention in the country in 2011. It was criticized for its pro-intervention stance in the media and blogs. [17] ..."
"... Avaaz supported the civil uprising preceding the Syrian Civil War . This included sending $1.5 million of Internet communications equipment to protesters, and training activists. Later it used smuggling routes to send over $2 million of medical equipment into rebel-held areas of Syria. It also smuggled 34 international journalists into Syria. [10] [18] ..."
"... Yes, pilgrims, my professional deformation leads me to find pattern where there may be none. ..."
"... It would be logical for there to exist connective tissue that relates the Sorosistas, The Clintonistas, the media freaks, Tom Perez' DNC, ..."
"... And then, there is Neil Ferguson the British epidemiologist who sold #10 on the idea of a national lock-down that looks to destroy the UK economy and political system. Antonia Staats his married mistress is a major figure in AVAAZ. He broke curfew twice to get a little bit of that. Coincidence? ..."
"... Even a small amount of google searching suggests that Avaaz is simply another Zionist-funded pro-Israel controlled opposition cutout type of organization. Funded by Zionist George Soros. Main honcho Ricken Patel is associated with Zionist lobby group J Street. ..."
"... Per the commentary above, supported the regime change operation in Syria (a longstanding Zionist goal, refer to the Clean Break plan.) ..."
"... What pillow talk went on between AVAAZ agent Antonia Staats and her Imperial College of London paramour Neil Ferguson right before he briefed Trump/Pence on their corona "we are all gonna die" projections. ..."
May 08, 2020 | turcopolier.typepad.com

"Avaaz claims to unite practical idealists from around the world. [8] Director Ricken Patel said in 2011, "We have no ideology per se. Our mission is to close the gap between the world we have and the world most people everywhere want. Idealists of the world unite!" [12] In practice , Avaaz often supports causes considered progressive, such as calling for global action on climate change , challenging Monsanto, and building greater global support for refugees. [13] [14] [15]

During the 2009 Iranian presidential election protests , Avaaz set up Internet proxy servers to allow protesters to upload videos onto public websites. [16]

Avaaz supported the establishment of a no-fly zone over Libya, which led to the military intervention in the country in 2011. It was criticized for its pro-intervention stance in the media and blogs. [17]

Avaaz supported the civil uprising preceding the Syrian Civil War . This included sending $1.5 million of Internet communications equipment to protesters, and training activists. Later it used smuggling routes to send over $2 million of medical equipment into rebel-held areas of Syria. It also smuggled 34 international journalists into Syria. [10] [18] Avaaz coordinated the evacuation of wounded British photographer Paul Conroy from Homs . Thirteen Syrian activists died during the evacuation operation. [10] [19] Some senior members of other non-governmental organizations working in the Middle East have criticized Avaaz for taking sides in a civil war. [16] As of November 2016, Avaaz continues campaigning for no-fly zones over Syria in general and specifically Aleppo . (Gen. Dunford, Chairman of the Joint Chiefs of Staff of the United States, has said that establishing a no-fly zone means going to war against Syria and Russia. [20] ) It has received criticism from parts of the political blogosphere and has a single digit percentage of its users opposing the petitions, with a number of users ultimately leaving the network. The Avaaz team responded to this criticism by issuing two statements defending their decision to campaign. wiki

----------------

Yes, pilgrims, my professional deformation leads me to find pattern where there may be none. BUT, OTOH, there may BE a pattern. It would be logical for there to exist connective tissue that relates the Sorosistas, The Clintonistas, the media freaks, Tom Perez' DNC, etc., etc., ad nauseam. ...

And then, there is Neil Ferguson the British epidemiologist who sold #10 on the idea of a national lock-down that looks to destroy the UK economy and political system. Antonia Staats his married mistress is a major figure in AVAAZ. He broke curfew twice to get a little bit of that. Coincidence? pl

https://en.wikipedia.org/wiki/Avaaz


Outrage Beyond , 07 May 2020 at 06:41 PM

Even a small amount of google searching suggests that Avaaz is simply another Zionist-funded pro-Israel controlled opposition cutout type of organization. Funded by Zionist George Soros. Main honcho Ricken Patel is associated with Zionist lobby group J Street.

Per the commentary above, supported the regime change operation in Syria (a longstanding Zionist goal, refer to the Clean Break plan.)

Bottom line: not a leftist organization. Faux leftist, controlled opposition, Zionist. Neocons are probably delighted with Avaaz.

Deap , 07 May 2020 at 06:46 PM
It was a ground hog day nightmare when I read the AVAAZ website and found all the "progressive" chestnuts, alive, well and kicking into high gear. This AVAAZ agenda fuels the politics in my state, California, so I know each element well plus how each of of them has failed us so badly. They all teeter on OPM, which the state wide corona shut down has decimated.

What pillow talk went on between AVAAZ agent Antonia Staats and her Imperial College of London paramour Neil Ferguson right before he briefed Trump/Pence on their corona "we are all gonna die" projections.

It all happened so fast - from runs on toilet paper in Australia reported on March 2 to global shutdown on March 16 due to this Imperial College model in just two weeks. Who and what communication network was behind this radical global shift that generated virtually no push back? The message quickly became one case of corona and we are all gonna die. How did that find such a willing audience?

I keep hearing that same echo in my nightmares, never let a crisis go to waste - now with this very distinct German accent on the face of a red-lipped blonde. Too weird to see this AVAAZ "global" network is so darn interested in over-turning a US Supreme Court Citizens United ruling - the old Hilary Clinton rallying cry. What is with that - they care in Malaysia?

Thank you for sunshining this very curious operation and its all too familiar cast of known characters lurking in its history, shadows, funding and leadership circle. Injecting them with Lysol is the better plan.

It is one thing to sic Barr-Durham on US government operations, but who can even explore let alone touch the world of global NGO's.

It does explain where a lot of the Bernie Sanders fervor comes from and how it sustains this energy despite defeat in the US election polls. The AVAAZ agenda winning the hearts and minds of many young people around the world. It will be their world to inherit, if they go down this path; not ours. God speed to all of them. Namaste. Dahl and naan for everyone.

Deap , 07 May 2020 at 07:04 PM
A little internet search also questions if AVAAZ is an intelligence community funded operation, linking key Obama administration players.

Good indoor fun during our national lockdowns - track AVAAZ in all its permutations and recurrent players. Samantha Powers and her hundreds of FISA unmasking requests comes to mind as well as her role in the AVAAZ games played in Syria.

Some AVAAZ fodder from a random internet search: Tinfoil hat fun times - keep digging.

......."Curiously, however, the absence of routine information on the Avaaz website -- board of directors, contact information, etc. -- raises the possibility that the organization is one of innumerable such groups created around the world by intelligence organizations with secret funding to advance hidden agendas.

This was the gist of a 2012 column by Global Research columnist Susanne Posel, headlined Avaaz: The Lobbyist that Masquerades as Online Activism. She alleged that Avaaz purports to be a global avenue for dissent, but channels reform energies on the most sensitive issues into such pro-U.S. positions as support for Israel and the Free Syrian Army......."

turcopolier , 07 May 2020 at 07:11 PM
AVAAZ

It is interesting that AVAAZ stopped accepting foundation and corporate money years ago. So, where do they get their money?

Harlan Easley , 07 May 2020 at 08:06 PM
Looking at him and her. She is out of his league. He is beta soy boy material.

You're probably right.

Fred , 07 May 2020 at 08:16 PM
Deap,

"Who and what communication network ..." ... " but who can even explore let alone touch the world of global NGO's."

Have you noticed how fast Project Veritas gets shut down, how Twitter, FB, etc silence any effective opposition to the message of the left?

"It is one thing to sic Barr-Durham on US government operations,..."
Perhaps now that FlynnFlu is evaporating in the disinfecting sunlight some sunshine should be applied to the H1B visa holders at the aformentioned social media companies and add in Google, Bing, Oath etc. and see how many Communist operatives are there, in addition to "essential employee" non-citizen lefty's pushing the anti-American propaganda. A dinner invitation to Jeff Bezos and his paramore might provide some interesting conversation on just who at Amazon might be involved in the same type of anti-western operations; compare their corporate response to distribution operations in the US vs. France as an example.
https://twitter.com/JamesOKeefeIII/status/1143127502895898625
Furthermore, observe the Google leadership team discussion of the 2016 elections.
https://www.breitbart.com/tech/2018/09/12/leaked-video-google-leaderships-dismayed-reaction-to-trump-election/
Minute 12:30 CFO Ruth Porat
Minute 27:00 Q&A Sergey Brin response on matching donations to employee causes.
Make sure to watch minute 52 on H1B visa holders. With 30,000,000 unemployed Americans just how many of those visas does Google need now? (I don't recall any organization telling China they need open borders immigration since thier hispanic/african/caucasian population percentages are effectively zero, so we might wonder who has been behind that message for the past few decades and why it is only directed at Western democracies).
And the inevitable campaign against "low information" voters and "fake news". I wonder what their take on Russian election interference is now? (Russia cyber trolling! minute 54:44.)

56:20 The inevitable arc of "progress". Make sure you join the fight for Hilary's values. That's the actual corporate leadership message. See the final round of applause at 1:01. Our new overlords know best. Too bad they don't own a mirror, or an ability to reflect on why someone can see the same data and come to a different conclusion of than these experts.

That's just a scratch on the surface. How much money flowed through the Clinton Global Initiative, which NGOs got some cleansed proceeds, which elections were influenced, professors and research sponsored, local communities "organzied". There's plenty to look at and "Isreal, Soros, Zionists" are the least of it.

J , 07 May 2020 at 09:48 PM
State sponsorship?
james , 07 May 2020 at 11:04 PM
avaaz always struck me like some intel agency psyc op... maybe israel like the poster outrage beyond implies.. either way - one could read stay away based on everything about them..
eakens , 08 May 2020 at 01:26 AM
Avaaz means change in Farsi. Interesting.
LondonBob , 08 May 2020 at 03:31 AM
A friend of a friend is a research scientist at Imperial in biology, he is as lefty as they get and I think would be happy to falsify his research to serve his political goals. Besides Imperial is a hard science uni, UCL is top in the University of London for medicine.

Soros and his organisations should be made persona non grata, as the Russians and Hungarians have. Extraordinary his influence in the EU, he has picked up where the Soviet Union left off, funding every organisation that demoralises society, from gay rights to immigration promotion to ethnic lobbies, even in Eastern European countries where there are no minorities.

CK , 08 May 2020 at 08:34 AM
An unusual thing happens once; it could be happenstance.
The thing happens again; it is Reconnaissance.
The thing happens yet again; it is war.
turcopolier , 08 May 2020 at 08:59 AM
J

That is for us to learn.

A. Pols , 08 May 2020 at 09:17 AM
We came, we saw, he died (cackle)... Assad must go...
Promoting chaos....Cui bono?
BABAK MAKKINEJAD , 08 May 2020 at 09:33 AM
eaken

Avaaz means "song" in Persian.

Diana Croissant , 08 May 2020 at 09:35 AM
The one woman standing up to a pompous judge who has called her "selfish" for wanting to earn the money it takes to feed her child is the heroine of this week's news.

Hers is the story of our Democratic Republic, born in the Age of Reason. Voltaire's Candide comes to the best conclusion for the way our elected representatives should make decisions: what works best to help INDIVIDUALS tend their own gardens is the form of government we should pursue.

It's true that young people have hearts and good intentions, but older people in most cases have brains and understand human nature better.

This older person--even when she was young--always distrusted a popular uprising or growing movement.

And if Obama and Hillary are for it, I know I am against it. (That's a more specific life lesson I've learned.)

[May 08, 2020] Configuring Unbound as a simple forwarding DNS server Enable Sysadmin

May 08, 2020 | www.redhat.com

In part 1 of this article, I introduced you to Unbound , a great name resolution option for home labs and small network environments. We looked at what Unbound is, and we discussed how to install it. In this section, we'll work on the basic configuration of Unbound.

Basic configuration

First find and uncomment these two entries in unbound.conf :

interface: 0.0.0.0
interface: ::0

Here, the 0.0.0.0 and ::0 entries indicate that we'll be accepting DNS queries on all interfaces. If you have more than one interface in your server and need to manage where DNS is available, you would put the address of that interface here.

Next, we may want to control who is allowed to use our DNS server. We're going to limit access to the local subnets we're using. It's a good basic practice to be specific when we can:

access-control: 127.0.0.0/8 allow  # (allow queries from the local host)
access-control: 192.168.0.0/24 allow
access-control: 192.168.1.0/24 allow

We also want to add an exception for local, unsecured domains that aren't using DNSSEC validation:

domain-insecure: "forest.local"

Now I'm going to add my local authoritative BIND server as a stub-zone:

stub-zone:
        name: "forest"
        stub-addr: 192.168.0.220
        stub-first: yes

If you want or need to use your Unbound server as an authoritative server, you can add a set of local-zone entries that look like this:

local-zone:  "forest.local." static

local-data: "jupiter.forest"         IN       A        192.168.0.200
local-data: "callisto.forest"        IN       A        192.168.0.222

These can be any type of record you need locally but note again that since these are all in the main configuration file, you might want to configure them as stub zones if you need authoritative records for more than a few hosts (see above).

If you were going to use this Unbound server as an authoritative DNS server, you would also want to make sure you have a root hints file, which is the zone file for the root DNS servers.

Get the file from InterNIC. It is easiest to download it directly to where you want it. My preference is usually to go ahead and put it where the other Unbound-related files are, in /etc/unbound:

wget https://www.internic.net/domain/named.root -O /etc/unbound/root.hints

Then add an entry to your unbound.conf file to let Unbound know where the hints file goes:

# file to read root hints from.
        root-hints: "/etc/unbound/root.hints"

Finally, we want to add at least one entry that tells Unbound where to forward requests to for recursion. Note that we could forward specific domains to specific DNS servers. In this example, I'm just going to forward everything out to a couple of DNS servers on the Internet:

forward-zone:
        name: "."
        forward-addr: 1.1.1.1
        forward-addr: 8.8.8.8
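
For instance, to forward just one domain to a particular resolver while everything else still goes to the servers above, you could add another forward-zone block. The domain and address below are only illustrative:

forward-zone:
        name: "example.org."
        forward-addr: 9.9.9.9
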

Now, as a sanity check, we want to run the unbound-checkconf command, which checks the syntax of our configuration file. We then resolve any errors we find.

[root@callisto ~]# unbound-checkconf
/etc/unbound/unbound_server.key: No such file or directory
[1584658345] unbound-checkconf[7553:0] fatal error: server-key-file: "/etc/unbound/unbound_server.key" does not exist

This error indicates that a key file which is generated at startup does not exist yet, so let's start Unbound and see what happens:

[root@callisto ~]# systemctl start unbound

With no fatal errors found, we can go ahead and make it start by default at server startup:

[root@callisto ~]# systemctl enable unbound
Created symlink from /etc/systemd/system/multi-user.target.wants/unbound.service to /usr/lib/systemd/system/unbound.service.

And you should be all set. Next, let's apply some of our DNS troubleshooting skills to see if it's working correctly.

First, we need to set our DNS resolver to use the new server:

[root@showme1 ~]# nmcli con mod ext ipv4.dns 192.168.0.222
[root@showme1 ~]# systemctl restart NetworkManager
[root@showme1 ~]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 192.168.0.222
[root@showme1 ~]#

Let's run dig and see who we can see:

[root@showme1 ~]# dig

; <<>> DiG 9.11.4-P2-RedHat-9.11.4-9.P2.el7 <<>>
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 36486
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 13, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;.                              IN      NS

;; ANSWER SECTION:
.                       508693  IN      NS      i.root-servers.net.
<snip>

Excellent! We are getting a response from the new server, and it's recursing us to the root domains. We don't see any errors so far. Now to check on a local host:

;; ANSWER SECTION:
jupiter.forest.         5190    IN      A       192.168.0.200

Great! We are getting the A record from the authoritative server back, and the IP address is correct. What about external domains?

;; ANSWER SECTION:
redhat.com.             3600    IN      A       209.132.183.105

Perfect! If we rerun it, will we get it from the cache?

;; ANSWER SECTION:
redhat.com.             3531    IN      A       209.132.183.105

;; Query time: 0 msec
;; SERVER: 192.168.0.222#53(192.168.0.222)

Note the query time of 0 msec; this indicates that the answer lives on the caching server, so it wasn't necessary to go ask elsewhere. This is the main benefit of a local caching server, as we discussed earlier.

Wrapping up

While we did not discuss some of the more advanced features that are available in Unbound, one thing that deserves mention is DNSSEC. DNSSEC is becoming a standard for DNS servers, as it provides an additional layer of protection for DNS transactions. DNSSEC establishes a trust relationship that helps prevent things like spoofing and injection attacks. It's worth looking into if you are running a DNS server that faces the public, even though it's beyond the scope of this article.
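
As a rough sanity check of DNSSEC validation (assuming validation is enabled on your resolver; the zone queried here is just an example), you can query a signed zone and look for the ad (authenticated data) flag in the response header:

$ dig +dnssec . soa @192.168.0.222
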

[ Getting started with networking? Check out the Linux networking cheat sheet . ]

[May 06, 2020] How to Synchronize Directories Using Lsyncd on Ubuntu 20.04

May 06, 2020 | www.howtoforge.com

Configure Lsyncd to Synchronize Local Directories

In this section, we will configure Lsyncd to synchronize the /etc/ directory to the /mnt/ directory on the local system.

First, create a directory for Lsyncd with the following command:

mkdir /etc/lsyncd

Next, create a new Lsyncd configuration file and define the source and destination directory that you want to sync.

nano /etc/lsyncd/lsyncd.conf.lua

Add the following lines:

settings {
        logfile = "/var/log/lsyncd/lsyncd.log",
        statusFile = "/var/log/lsyncd/lsyncd.status",
   statusInterval = 20,
   nodaemon   = false
}

sync {
        default.rsync,
        source = "/etc/",
        target = "/mnt"
}

Save and close the file when you are finished.
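
Note that the settings above write Lsyncd's log and status files under /var/log/lsyncd, so make sure that directory exists before starting the service (a small precaution matching the paths used in this example):

mkdir -p /var/log/lsyncd

Then start the Lsyncd service and enable it to start automatically at boot: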

systemctl start lsyncd
systemctl enable lsyncd

You can also check the status of the Lsyncd service with the following command:

systemctl status lsyncd

You should see the following output:

● lsyncd.service - LSB: lsyncd daemon init script
     Loaded: loaded (/etc/init.d/lsyncd; generated)
     Active: active (running) since Fri 2020-05-01 03:31:20 UTC; 9s ago
       Docs: man:systemd-sysv-generator(8)
    Process: 36946 ExecStart=/etc/init.d/lsyncd start (code=exited, status=0/SUCCESS)
      Tasks: 2 (limit: 4620)
     Memory: 12.5M
     CGroup: /system.slice/lsyncd.service
             ├─36921 /usr/bin/lsyncd -pidfile /var/run/lsyncd.pid /etc/lsyncd/lsyncd.conf.lua
             └─36952 /usr/bin/lsyncd -pidfile /var/run/lsyncd.pid /etc/lsyncd/lsyncd.conf.lua

May 01 03:31:20 ubuntu20 systemd[1]: lsyncd.service: Succeeded.
May 01 03:31:20 ubuntu20 systemd[1]: Stopped LSB: lsyncd daemon init script.
May 01 03:31:20 ubuntu20 systemd[1]: Starting LSB: lsyncd daemon init script...
May 01 03:31:20 ubuntu20 lsyncd[36946]:  * Starting synchronization daemon lsyncd
May 01 03:31:20 ubuntu20 lsyncd[36951]: 03:31:20 Normal: --- Startup, daemonizing ---
May 01 03:31:20 ubuntu20 lsyncd[36946]:    ...done.
May 01 03:31:20 ubuntu20 systemd[1]: Started LSB: lsyncd daemon init script.

You can check the Lsyncd log file for more details as shown below:

tail -f /var/log/lsyncd/lsyncd.log

You should see the following output:

/lsyncd/lsyncd.conf.lua
Fri May  1 03:30:57 2020 Normal: Finished a list after exitcode: 0
Fri May  1 03:31:20 2020 Normal: --- Startup, daemonizing ---
Fri May  1 03:31:20 2020 Normal: recursive startup rsync: /etc/ -> /mnt/
Fri May  1 03:31:20 2020 Normal: Startup of /etc/ -> /mnt/ finished.

You can also check the syncing status with the following command:

tail -f /var/log/lsyncd/lsyncd.status

You should be able to see the changes in the /mnt directory with the following command:

ls /mnt/

You should see that all the files and directories from the /etc directory are added to the /mnt directory:

acpi                    dconf           hosts            logrotate.conf       newt                     rc2.d          subuid-
adduser.conf            debconf.conf    hosts.allow      logrotate.d          nginx                    rc3.d          sudoers
alternatives            debian_version  hosts.deny       lsb-release          nsswitch.conf            rc4.d          sudoers.d
apache2                 default         init             lsyncd               ntp.conf                 rc5.d          sysctl.conf
apparmor                deluser.conf    init.d           ltrace.conf          openal                   rc6.d          sysctl.d
apparmor.d              depmod.d        initramfs-tools  lvm                  opt                      rcS.d          systemd
apport                  dhcp            inputrc          machine-id           os-release               resolv.conf    terminfo
apt                     dnsmasq.d       insserv.conf.d   magic                overlayroot.conf         rmt            timezone
at.deny                 docker          iproute2         magic.mime           PackageKit               rpc            tmpfiles.d
bash.bashrc             dpkg            iscsi            mailcap              pam.conf                 rsyslog.conf   ubuntu-advantage
bash_completion         e2scrub.conf    issue            mailcap.order        pam.d                    rsyslog.d      ucf.conf
bash_completion.d       environment     issue.net        manpath.config       passwd                   screenrc       udev
bindresvport.blacklist  ethertypes      kernel           mdadm                passwd-                  securetty      ufw
binfmt.d                fonts           kernel-img.conf  mime.types           perl                     security       update-manager
byobu                   fstab           landscape        mke2fs.conf          php                      selinux        update-motd.d
ca-certificates         fuse.conf       ldap             modprobe.d           pki                      sensors3.conf  update-notifier
ca-certificates.conf    fwupd           ld.so.cache      modules              pm                       sensors.d      vdpau_wrapper.cfg
calendar                gai.conf        ld.so.conf       modules-load.d       polkit-1                 services       vim
console-setup           groff           ld.so.conf.d     mtab                 pollinate                shadow         vmware-tools
cron.d                  group           legal            multipath            popularity-contest.conf  shadow-        vtrgb
cron.daily              group-          letsencrypt      multipath.conf       profile                  shells         vulkan
cron.hourly             grub.d          libaudit.conf    mysql                profile.d                skel           wgetrc
cron.monthly            gshadow         libnl-3          nanorc               protocols                sos.conf       X11
crontab                 gshadow-        locale.alias     netplan              pulse                    ssh            xattr.conf
cron.weekly             gss             locale.gen       network              python3                  ssl            xdg
cryptsetup-initramfs    hdparm.conf     localtime        networkd-dispatcher  python3.8                subgid         zsh_command_not_found
crypttab                host.conf       logcheck         NetworkManager       rc0.d                    subgid-
dbus-1                  hostname        login.defs       networks             rc1.d                    subuid

Configure Lsyncd to Synchronize Remote Directories

In this section, we will configure Lsyncd to synchronize the /etc/ directory on the local system to the /opt/ directory on the remote system.

Before starting, you will need to set up SSH key-based authentication between the local system and the remote server so that the local system can connect to the remote server without a password.

On the local system, run the following command to generate a public and private key:

ssh-keygen -t rsa

You should see the following output:

Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:c7fhjjhAamFjlk6OkKPhsphMnTZQFutWbr5FnQKSJjE root@ubuntu20
The key's randomart image is:
+---[RSA 3072]----+
| E ..            |
|  ooo            |
| oo= +           |
|=.+ % o . .      |
|o+o@.B oSo. o    |
|ooo=B o .o o o   |
|=o.... o    o    |
|+.    o .. o     |
|     .  ... .    |
+----[SHA256]-----+

The above command will generate a private and public key inside ~/.ssh directory.

Next, you will need to copy the public key to the remote server. You can copy it with the following command:

ssh-copy-id root@remote-server-ip

You will be asked to provide the password of the remote root user as shown below:

root@45.58.38.21's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@45.58.38.21'"
and check to make sure that only the key(s) you wanted were added.

Once the user is authenticated, the public key will be appended to the remote user's authorized_keys file and the connection will be closed.

Now, you should be able to log in to the remote server without entering a password.

To test it just try to login to your remote server via SSH:

ssh root@remote-server-ip

If everything went well, you will be logged in immediately.

Next, you will need to edit the Lsyncd configuration file and switch it to the rsyncssh mode, defining the remote host and target directory:

nano /etc/lsyncd/lsyncd.conf.lua

Change the file as shown below:

settings {
        logfile = "/var/log/lsyncd/lsyncd.log",
        statusFile = "/var/log/lsyncd/lsyncd.status",
   statusInterval = 20,
   nodaemon   = false
}

sync {
        default.rsyncssh,
        source = "/etc/",
        host = "remote-server-ip",
        targetdir = "/opt"
}
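
If the remote server listens on a non-standard SSH port, the same sync block can also carry an ssh table. A minimal sketch (the port number is only an example, not part of the original guide):

sync {
        default.rsyncssh,
        source = "/etc/",
        host = "remote-server-ip",
        targetdir = "/opt",
        ssh = {
                port = 2222
        }
}
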

Save and close the file when you are finished. Then, restart the Lsyncd service to start the sync.

systemctl restart lsyncd

You can check the status of synchronization with the following command:

tail -f /var/log/lsyncd/lsyncd.log

You should see the following output:

Fri May  1 04:32:05 2020 Normal: --- Startup, daemonizing ---
Fri May  1 04:32:05 2020 Normal: recursive startup rsync: /etc/ -> 45.58.38.21:/opt/
Fri May  1 04:32:06 2020 Normal: Startup of "/etc/" finished: 0

You should be able to see the changes in the /opt directory on the remote server with the following command:

ls /opt

You should see that all the files and directories from the /etc directory are added to the remote server's /opt directory:

acpi                    dconf           hosts            logrotate.conf       newt                     rc2.d          subuid-
adduser.conf            debconf.conf    hosts.allow      logrotate.d          nginx                    rc3.d          sudoers
alternatives            debian_version  hosts.deny       lsb-release          nsswitch.conf            rc4.d          sudoers.d
apache2                 default         init             lsyncd               ntp.conf                 rc5.d          sysctl.conf
apparmor                deluser.conf    init.d           ltrace.conf          openal                   rc6.d          sysctl.d
apparmor.d              depmod.d        initramfs-tools  lvm                  opt                      rcS.d          systemd
apport                  dhcp            inputrc          machine-id           os-release               resolv.conf    terminfo
apt                     dnsmasq.d       insserv.conf.d   magic                overlayroot.conf         rmt            timezone
at.deny                 docker          iproute2         magic.mime           PackageKit               rpc            tmpfiles.d
bash.bashrc             dpkg            iscsi            mailcap              pam.conf                 rsyslog.conf   ubuntu-advantage
bash_completion         e2scrub.conf    issue            mailcap.order        pam.d                    rsyslog.d      ucf.conf
bash_completion.d       environment     issue.net        manpath.config       passwd                   screenrc       udev
bindresvport.blacklist  ethertypes      kernel           mdadm                passwd-                  securetty      ufw
binfmt.d                fonts           kernel-img.conf  mime.types           perl                     security       update-manager
byobu                   fstab           landscape        mke2fs.conf          php                      selinux        update-motd.d
ca-certificates         fuse.conf       ldap             modprobe.d           pki                      sensors3.conf  update-notifier
ca-certificates.conf    fwupd           ld.so.cache      modules              pm                       sensors.d      vdpau_wrapper.cfg
calendar                gai.conf        ld.so.conf       modules-load.d       polkit-1                 services       vim
console-setup           groff           ld.so.conf.d     mtab                 pollinate                shadow         vmware-tools
cron.d                  group           legal            multipath            popularity-contest.conf  shadow-        vtrgb
cron.daily              group-          letsencrypt      multipath.conf       profile                  shells         vulkan
cron.hourly             grub.d          libaudit.conf    mysql                profile.d                skel           wgetrc
cron.monthly            gshadow         libnl-3          nanorc               protocols                sos.conf       X11
crontab                 gshadow-        locale.alias     netplan              pulse                    ssh            xattr.conf
cron.weekly             gss             locale.gen       network              python3                  ssl            xdg
cryptsetup-initramfs    hdparm.conf     localtime        networkd-dispatcher  python3.8                subgid         zsh_command_not_found
crypttab                host.conf       logcheck         NetworkManager       rc0.d                    subgid-
dbus-1                  hostname        login.defs       networks             rc1.d                    subuid

Conclusion

In the above guide, we learned how to install and configure Lsyncd for local synchronization and remote synchronization. You can now use Lsyncd in a production environment for backup purposes. Feel free to ask me if you have any questions.

[May 06, 2020] Lsyncd - Live Syncing (Mirror) Daemon

May 06, 2020 | axkibe.github.io

Lsyncd - Live Syncing (Mirror) Daemon Description

Lsyncd uses a filesystem event interface (inotify or fsevents) to watch for changes to local files and directories. Lsyncd collates these events for several seconds and then spawns one or more processes to synchronize the changes to a remote filesystem. The default synchronization method is rsync. Thus, Lsyncd is a light-weight live mirror solution. Lsyncd is comparatively easy to install and does not require new filesystems or block devices. Lsyncd does not hamper local filesystem performance.

As an alternative to rsync, Lsyncd can also push changes via rsync+ssh. Rsync+ssh allows for much more efficient synchronization when a file or directory is renamed or moved to a new location in the local tree. (In contrast, plain rsync performs a move by deleting the old file and then retransmitting the whole file.)

Fine-grained customization can be achieved through the config file. Custom action configs can even be written from scratch in cascading layers ranging from shell scripts to code written in the Lua language . Thus, simple, powerful and flexible configurations are possible.

Lsyncd 2.2.1 requires rsync >= 3.1 on all source and target machines.

License: GPLv2 or any later GPL version.

When to use

Lsyncd is designed to synchronize a slowly changing local directory tree to a remote mirror. Lsyncd is especially useful to sync data from a secure area to a not-so-secure area.

Other synchronization tools

DRBD operates on block device level. This makes it useful for synchronizing systems that are under heavy load. Lsyncd on the other hand does not require you to change block devices and/or mount points, allows you to change uid/gid of the transferred files, separates the receiver through the one-way nature of rsync. DRBD is likely the better option if you are syncing databases.

GlusterFS and BindFS use a FUSE-Filesystem to interject kernel/userspace filesystem events.

Mirror is an asynchronous synchronisation tool that makes use of inotify notifications much like Lsyncd. The main differences are: it is developed specifically for master-master use, thus running as a daemon on both systems, it uses its own transport layer instead of rsync, and it is written in Java instead of Lsyncd's C core with Lua scripting.

Lsyncd usage examples
lsyncd -rsync /home remotehost.org::share/

This watches and rsyncs the local directory /home with all sub-directories and transfers them to 'remotehost' using the rsync-share 'share'.

lsyncd -rsyncssh /home remotehost.org backup-home/

This will also rsync/watch '/home', but it uses a ssh connection to make moves local on the remotehost instead of re-transmitting the moved file over the wire.

Disclaimer

Besides the usual disclaimer in the license, we want to specifically emphasize that neither the authors, nor any organization associated with the authors, can or will be held responsible for data-loss caused by possible malfunctions of Lsyncd.

[May 06, 2020] Creating and managing partitions in Linux with parted Enable Sysadmin by Tyler Carrigan

Apr 30, 2020 | www.redhat.com


Listing partitions with parted

The first thing that you want to do anytime that you need to make changes to your disk is to find out what partitions you already have. Displaying existing partitions allows you to make informed decisions moving forward and helps you nail down the partition names you will need for future commands. Run the parted command to start parted in interactive mode and list partitions. It will default to your first listed drive. You will then use the print command to display disk information.

[root@rhel ~]# parted /dev/sdc
    GNU Parted 3.2
    Using /dev/sdc
    Welcome to GNU Parted! Type 'help' to view a list of commands.
    (parted) print                                                            
    Error: /dev/sdc: unrecognised disk label
    Model: ATA VBOX HARDDISK (scsi)                                           
    Disk /dev/sdc: 1074MB
    Sector size (logical/physical): 512B/512B
    Partition Table: unknown
    Disk Flags:
    (parted)

Creating new partitions with parted

Now that you can see what partitions are active on the system, you are going to add a new partition to /dev/sdc. You can see in the output above that there is no partition table for this disk, so add one by using the mklabel command. Then use mkpart to add the new partition. You are creating a new primary partition with the ext4 filesystem type. For demonstration purposes, I chose to create a 50 MB partition.

(parted) mklabel msdos                                                    
    (parted) mkpart                                                           
    Partition type?  primary/extended? primary                                
    File system type?  [ext2]? ext4                                           
    Start? 1                                                                  
    End? 50                                                                   
    (parted)                                                                  
    (parted) print                                                            
    Model: ATA VBOX HARDDISK (scsi)
    Disk /dev/sdc: 1074MB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags:
    
    Number  Start   End     Size    Type     File system  Flags
     1      1049kB  50.3MB  49.3MB  primary  ext4         lba

Modifying existing partitions with parted

Now that you have created the new partition at 50 MB, you can resize it to 100 MB, and then shrink it back to the original 50 MB. First, note the partition number. You can find this information by using the print command. You are then going to use the resizepart command to make the modifications.

(parted) resizepart                                                       
    Partition number? 1                                                       
    End?  [50.3MB]? 100                                                       
        
    (parted) print                                                            
    Model: ATA VBOX HARDDISK (scsi)
    Disk /dev/sdc: 1074MB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags:
    
    Number  Start   End    Size    Type     File system  Flags
     1      1049kB  100MB  99.0MB  primary

You can see in the above output that I resized partition number one from 50 MB to 100 MB. You can then verify the changes with the print command. You can now resize it back down to 50 MB. Keep in mind that shrinking a partition can cause data loss.

    (parted) resizepart                                                       
    Partition number? 1                                                       
    End?  [100MB]? 50                                                         
    Warning: Shrinking a partition can cause data loss, are you sure you want to
    continue?
    Yes/No? yes                                                               
    
    (parted) print
    Model: ATA VBOX HARDDISK (scsi)
    Disk /dev/sdc: 1074MB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags:
    
    Number  Start   End     Size    Type     File system  Flags
     1      1049kB  50.0MB  49.0MB  primary

Removing partitions with parted

Now, let's look at how to remove the partition you created at /dev/sdc1 by using the rm command inside of the parted suite. Again, you will need the partition number, which is found in the print output.

NOTE: Be sure that you have all of the information correct here; there are no safeguards or "are you sure?" prompts. When you run the rm command, it will delete the partition number you give it.

    (parted) rm 1                                                             
    (parted) print                                                            
    Model: ATA VBOX HARDDISK (scsi)
    Disk /dev/sdc: 1074MB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags:
    
    Number  Start  End  Size  Type  File system  Flags
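For repeatable setups, the same steps can be scripted non-interactively with parted -s. Note that parted only records a file system type in the partition table entry; the filesystem itself still has to be created with mkfs. A rough sketch against the same throwaway disk (device name and sizes are examples):

    # WARNING: this destroys any data on /dev/sdc
    parted -s /dev/sdc mklabel msdos
    parted -s /dev/sdc mkpart primary ext4 1MiB 50MiB   # creates the partition entry only
    mkfs.ext4 /dev/sdc1                                  # actually builds the filesystem
    parted -s /dev/sdc print                             # verify the layout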

[May 06, 2020] How To Change Default Sudo Log File In Linux

May 04, 2020 | www.ostechnix.com
The sudo logs are kept in "/var/log/secure" file in RPM-based systems such as CentOS and Fedora.

To set a dedicated sudo log file in CentOS 8, edit "/etc/sudoers" file using command:

$ sudo visudo

This command will open the /etc/sudoers file in the Vi editor. Press "i" to enter insert mode and add the following line at the end:

[...]
Defaults syslog=local1

Press ESC and type :wq to save and close.

Next, edit "/etc/rsyslog.conf" file:

$ sudo nano /etc/rsyslog.conf

Add/modify the following lines (line number 46 and 47):

[...]
*.info;mail.none;authpriv.none;cron.none;local1.none   /var/log/messages
local1.*                /var/log/sudo.log
[...]

Change Sudo Log File Location In CentOS

Press CTRL+X followed by Y to save and close the file.

Restart rsyslog for the changes to take effect.

$ sudo systemctl restart rsyslog

From now on, all sudo attempts will be logged in /var/log/sudo.log file.

$ sudo cat /var/log/sudo.log

Sample output:

May 3 17:13:26 centos8 sudo[20191]: ostechnix : TTY=pts/0 ; PWD=/home/ostechnix ; USER=root ; COMMAND=/bin/systemctl restart rsyslog
May 3 17:13:35 centos8 sudo[20202]: ostechnix : TTY=pts/0 ; PWD=/home/ostechnix ; USER=root ; COMMAND=/bin/systemctl status rsyslog
May 3 17:13:51 centos8 sudo[20206]: ostechnix : TTY=pts/0 ; PWD=/home/ostechnix ; USER=root ; COMMAND=/bin/yum update

View sudo log files in CentOS
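As an aside, sudo itself can write a dedicated log file without involving rsyslog at all; if that is all you need, a single Defaults line in sudoers (again added via visudo; the path is just an example) is enough:

Defaults logfile="/var/log/sudo.log"

With this directive, sudo appends to the file directly, so the rsyslog configuration above becomes optional.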

[Apr 28, 2020] The Meditations, by a Roman emperor who died in a plague named after him, has much to say about how to face fear, pain, anxiety and loss by Donald Robertson

Notable quotes:
"... First of all, because Stoics believe that our true good resides in our own character and actions, they would frequently remind themselves to distinguish between what's "up to us" and what isn't. Modern Stoics tend to call this "the dichotomy of control" and many people find this distinction alone helpful in alleviating stress. What happens to me is never directly under my control, never completely ..."
"... Marcus likes to ask himself, "What virtue has nature given me to deal with this situation?" That naturally leads to the question: "How do other people cope with similar challenges?" Stoics reflect on character strengths such as wisdom, patience and self-discipline, which potentially make them more resilient in the face of adversity. They try to exemplify these virtues and bring them to bear on the challenges they face in daily life, during a crisis like the pandemic. They learn from how other people cope. Even historical figures or fictional characters can serve as role models. ..."
"... fear does us more harm than the things of which we're afraid. ..."
"... Finally, during a pandemic, you may have to confront the risk, the possibility, of your own death. Since the day you were born, that's always been on the cards. Most of us find it easier to bury our heads in the sand. Avoidance is the No1 most popular coping strategy in the world. We live in denial of the self-evident fact that we all die eventually. ..."
"... "All that comes to pass", he tells himself, even illness and death, should be as "familiar as the rose in spring and the fruit in autumn". Marcus Aurelius, through decades of training in Stoicism, in other words, had taught himself to face death with the steady calm of someone who has done so countless times already in the past. ..."
Apr 25, 2020 | www.theguardian.com
The Roman emperor Marcus Aurelius Antoninus was the last famous Stoic philosopher of antiquity. During the last 14 years of his life he faced one of the worst plagues in European history. The Antonine Plague, named after him, was probably caused by a strain of the smallpox virus. It's estimated to have killed up to 5 million people, possibly including Marcus himself.

ss="rich-link tone-feature--item rich-link--pillar-arts">

="rich-link__link u-faux-block-link__overlay" aria-label="'What it means to be an American': Abraham Lincoln and a nation divided" href="https://www.theguardian.com/books/2020/apr/11/abraham-lincoln-verge-book-ted-widmer-interview">

From AD166 to around AD180, repeated outbreaks occurred throughout the known world. Roman historians describe the legions being devastated, and entire towns and villages being depopulated and going to ruin. Rome itself was particularly badly affected, carts leaving the city each day piled high with dead bodies.

In the middle of this plague, Marcus wrote a book, known as The Meditations, which records the moral and psychological advice he gave himself at this time. He frequently applies Stoic philosophy to the challenges of coping with pain, illness, anxiety and loss. It's no stretch of the imagination to view The Meditations as a manual for developing precisely the mental resilience skills required to cope with a pandemic.

First of all, because Stoics believe that our true good resides in our own character and actions, they would frequently remind themselves to distinguish between what's "up to us" and what isn't. Modern Stoics tend to call this "the dichotomy of control" and many people find this distinction alone helpful in alleviating stress. What happens to me is never directly under my control, never completely up to me, but my own thoughts and actions are – at least the voluntary ones. The pandemic isn't really under my control but the way I behave in response to it is.

Much, if not all, of our thinking is also up to us. Hence, "It's not events that upset us but rather our opinions about them." More specifically, our judgment that something is really bad, awful or even catastrophic, causes our distress.

This is one of the basic psychological principles of Stoicism. It's also the basic premise of modern cognitive behavioral therapy (CBT), the leading evidence-based form of psychotherapy. The pioneers of CBT, Albert Ellis and Aaron T Beck, both describe Stoicism as the philosophical inspiration for their approach. It's not the virus that makes us afraid but rather our opinions about it. Nor is it the inconsiderate actions of others, those ignoring social distancing recommendations, that make us angry so much as our opinions about them.

Many people are struck, on reading The Meditations, by the fact that it opens with a chapter in which Marcus lists the qualities he most admires in other individuals, about 17 friends, members of his family and teachers. This is an extended example of one of the central practices of Stoicism.

Marcus likes to ask himself, "What virtue has nature given me to deal with this situation?" That naturally leads to the question: "How do other people cope with similar challenges?" Stoics reflect on character strengths such as wisdom, patience and self-discipline, which potentially make them more resilient in the face of adversity. They try to exemplify these virtues and bring them to bear on the challenges they face in daily life, during a crisis like the pandemic. They learn from how other people cope. Even historical figures or fictional characters can serve as role models.

With all of this in mind, it's easier to understand another common slogan of Stoicism: fear does us more harm than the things of which we're afraid. This applies to unhealthy emotions in general, which the Stoics term "passions" – from pathos , the source of our word "pathological". It's true, first of all, in a superficial sense. Even if you have a 99% chance, or more, of surviving the pandemic, worry and anxiety may be ruining your life and driving you crazy. In extreme cases some people may even take their own lives.

In that respect, it's easy to see how fear can do us more harm than the things of which we're afraid because it can impinge on our physical health and quality of life. However, this saying also has a deeper meaning for Stoics. The virus can only harm your body – the worst it can do is kill you. However, fear penetrates into the moral core of our being. It can destroy your humanity if you let it. For the Stoics that's a fate worse than death.

Finally, during a pandemic, you may have to confront the risk, the possibility, of your own death. Since the day you were born, that's always been on the cards. Most of us find it easier to bury our heads in the sand. Avoidance is the No1 most popular coping strategy in the world. We live in denial of the self-evident fact that we all die eventually. The Stoics believed that when we're confronted with our own mortality, and grasp its implications, that can change our perspective on life quite dramatically. Any one of us could die at any moment. Life doesn't go on forever.

We're told this was what Marcus was thinking about on his deathbed. According to one historian, his circle of friends were distraught. Marcus calmly asked why they were weeping for him when, in fact, they should accept both sickness and death as inevitable, part of nature and the common lot of mankind. He returns to this theme many times throughout The Meditations.

"All that comes to pass", he tells himself, even illness and death, should be as "familiar as the rose in spring and the fruit in autumn". Marcus Aurelius, through decades of training in Stoicism, in other words, had taught himself to face death with the steady calm of someone who has done so countless times already in the past.

Donald Robertson is a cognitive behavioural therapist and the author of several books on philosophy and psychotherapy, including Stoicism and the Art of Happiness and How to Think Like a Roman Emperor: The Stoic Philosophy of Marcus Aurelius

[Apr 21, 2020] Real sysadmins don't sudo by David Both

Apr 17, 2020 | www.redhat.com
Or do they? This opinion piece from contributor David Both takes a look at when sudo makes sense, and when it does not.

A few months ago, I read a very interesting article that contained some good information about a Linux feature that I wanted to learn more about. I won't tell you the name of the article, what it was about, or even the web site on which I read it, but the article just made me shudder.

The reason I found this article so cringe-worthy is that it prefaced every command with the sudo command. The issue I have with this is that the article is allegedly for sysadmins, and real sysadmins don't use sudo in front of every command they issue. To do so is a gross misuse of the sudo command. I have written about this type of misuse in my book, "The Linux Philosophy for SysAdmins." The following is an excerpt from Chapter 19 of that book.

In this article, we explore why and how the sudo tool is being misused and how to bypass the configuration that forces one to use sudo instead of working directly as root.

sudo or not sudo

Part of being a system administrator and using your favorite tools is to use the tools we have correctly and to have them available without any restrictions. In this case, I find that the sudo command is used in a manner for which it was never intended. I have a particular dislike for how the sudo facility is being used in some distributions, especially because it is employed to limit and restrict access by people doing the work of system administration to the tools they need to perform their duties.

"[SysAdmins] don't use sudo."
– Paul Venezia

Venezia explains in his InfoWorld article that sudo is used as a crutch for sysadmins. He does not spend a lot of time defending this position or explaining it. He just states this as a fact. And I agree with him – for sysadmins. We don't need the training wheels in order to do our jobs. In fact, they get in the way.

Some distros, such as Ubuntu, use the sudo command in a manner that is intended to make the use of commands that require elevated (root) privileges a little more difficult. In these distros, it is not possible to login directly as the root user so the sudo command is used to allow non-root users temporary access to root privileges. This is supposed to make the user a little more careful about issuing commands that need elevated privileges such as adding and deleting users, deleting files that don't belong to them, installing new software, and generally all of the tasks that are required to administer a modern Linux host. Forcing sysadmins to use the sudo command as a preface to other commands is supposed to make working with Linux safer.

Using sudo in the manner it is by these distros is, in my opinion, a horrible and ineffective attempt to provide novice sysadmins with a false sense of security. It is completely ineffective at providing any level of protection. I can issue commands that are just as incorrect or damaging using sudo as I can when not using it. The distros that use sudo to anesthetize the sense of fear that we might issue an incorrect command are doing sysadmins a great disservice. There is no limit or restriction imposed by these distros on the commands that one might use with the sudo facility. There is no attempt to actually limit the damage that might be done by actually protecting the system from the users and the possibility that they might do something harmful – nor should there be.

So let's be clear about this -- these distributions expect the user to perform all of the tasks of system administration. They lull the users -- who are really System Administrators -- into thinking that they are somehow protected from the effects of doing anything bad because they must take this restrictive extra step to enter their own password in order to run the commands.

Bypass sudo

Distributions that work like this usually lock the password for the root user (Ubuntu is one of these distros). This way no one can login as root and start working unencumbered. Let's look at how this works and then how to bypass it.

Let me stipulate the setup here so that you can reproduce it if you wish. As an example, I installed Ubuntu 16.04 LTS1 in a VM using VirtualBox. During the installation, I created a non-root user, student, with a simple password for this experiment.

Login as the user student and open a terminal session. Let's look at the entry for root in the /etc/shadow file, which is where the encrypted passwords are stored.

student@machine1:~$ cat /etc/shadow
cat: /etc/shadow: Permission denied

Permission is denied so we cannot look at the /etc/shadow file . This is common to all distributions so that non-privileged users cannot see and access the encrypted passwords. That access would make it possible to use common hacking tools to crack those passwords so it is insecure to allow that.

Now let's try to su – to root.

student@machine1:~$ su -
Password:
su: Authentication failure

This attempt to use the su command to elevate our user to root privilege fails because the root account has no password and is locked out. Let's use sudo to look at the /etc/shadow file.

student@machine1:~$ sudo cat /etc/shadow
[sudo] password for student: <enter the user password>
root:!:17595:0:99999:7:::
<snip>
student:$6$tUB/y2dt$A5ML1UEdcL4tsGMiq3KOwfMkbtk3WecMroKN/:17597:0:99999:7:::
<snip>

I have truncated the results to only show the entry for the root and student users. I have also shortened the encrypted password so that the entry will fit on a single line. The fields are separated by colons ( : ) and the second field is the password. Notice that the password field for root is a "bang," known to the rest of the world as an exclamation point ( ! ). This indicates that the account is locked and that it cannot be used.

Now, all we need to do to use the root account as proper sysadmins is to set up a password for the root account.

student@machine1:~$ sudo su -
[sudo] password for student: <Enter password for student>
root@machine1:~# passwd root
Enter new UNIX password: <Enter new root password>
Retype new UNIX password: <Re-enter new root password>
passwd: password updated successfully
root@machine1:~#

Now we can login directly on a console as root or su – directly to root instead of having to use sudo for each command. Of course, we could just use sudo su – every time we want to login as root – but why bother?

Please do not misunderstand me. Distributions like Ubuntu and their up- and down-stream relatives are perfectly fine and I have used several of them over the years. When using Ubuntu and related distros, one of the first things I do is set a root password so that I can login directly as root.

Valid uses for sudo

The sudo facility does have its uses. The real intent of sudo is to enable the root user to delegate access to one or two specific privileged commands to one or two non-root users who need them on a regular basis. The reasoning behind this is that of the lazy sysadmin: allowing the users access to a command or two that requires elevated privileges, and that they use constantly, many times per day, saves the SysAdmin a lot of requests from the users and eliminates the wait time that the users would otherwise experience. But most non-root users should never have full root access, just access to the few commands that they need.

I sometimes need non-root users to run programs that require root privileges. In cases like this, I set up one or two non-root users and authorize them to run that single command. The sudo facility also keeps a log of the user ID of each user that uses it. This might enable me to track down who made an error. That's all it does; it is not a magical protector.
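A minimal sketch of that kind of delegation, assuming a hypothetical user alice who regularly needs to restart a single service (the rule would normally live in a file created with visudo, for example /etc/sudoers.d/alice; the user name and command are examples):

# /etc/sudoers.d/alice -- sketch; user name and command path are examples
alice   ALL=(root)   /usr/bin/systemctl restart httpd

With a rule like this, alice can run only that exact command under sudo, and each use is still recorded in the sudo log.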

The sudo facility was never intended to be used as a gateway for commands issued by a sysadmin. It cannot check the validity of the command. It does not check to see if the user is doing something stupid. It does not make the system safe from users who have access to all of the commands on the system even if it is through a gateway that forces them to say "please" – That was never its intended purpose.

"Unix never says please."
– Rob Pike

This quote about Unix is just as true about Linux as it is about Unix. We sysadmins login as root when we need to do work as root and we log out of our root sessions when we are done. Some days we stay logged in as root all day long but we always work as root when we need to. We never use sudo because it forces us to type more than necessary in order to run the commands we need to do our jobs. Neither Unix nor Linux asks us if we really want to do something, that is, it does not say "Please verify that you want to do this."

Yes, I dislike the way some distros use the sudo command. Next time I will explore some valid use cases for sudo and how to configure it for these cases.


[Apr 12, 2020] logging - Change log file name of teraterm log - Stack Overflow

Apr 12, 2020 | stackoverflow.com



pmverma ,

I would like to change the default log file name of the teraterm terminal log. What I would like is to automatically create/append the log to a file named like "loggedinhost-teraterm.log".

I found following ini setting for log file. It also uses strftime to format log filename.

    ; Default Log file name. You can specify strftime format to here.
    LogDefaultName=teraterm "%d %b %Y" .log
    ; Default path to save the log file.
    LogDefaultPath=
    ; Auto start logging with default log file name.
    LogAutoStart=on

I have modified it to include date.

Is there any way to prefix the hostname in the log file name?

For example:

myserver01-teraterm.log
myserver02-teraterm.logfile
myserver03-teraterm.log

Romme ,

I had the same issue, and was able to solve my problem by adding &h like below;

; Default Log file name. You can specify strftime format to here.
LogDefaultName=teraterm &h %d %b %y.log
; Default path to save the log file.
LogDefaultPath=C:\Users\Logs
; Auto start logging with default log file name.
LogAutoStart=on

> ,

https://ttssh2.osdn.jp/manual/en/menu/setup-additional.html

"Log" tab

View log editor

Specify the editor that is used for display log file

Default log file name(strftime format)

Specify default log file name. It can include a format of strftime.

&h      Host name(or empty when not connecting)
&p      TCP port number(or empty when not connecting, not TCP connection)
&u      Logon user name
%a      Abbreviated weekday name
%A      Full weekday name
%b      Abbreviated month name
%B      Full month name
%c      Date and time representation appropriate for locale
%d      Day of month as decimal number (01 - 31)
%H      Hour in 24-hour format (00 - 23)
%I      Hour in 12-hour format (01 - 12)
%j      Day of year as decimal number (001 - 366)
%m      Month as decimal number (01 - 12)
%M      Minute as decimal number (00 -  59)
%p      Current locale's A.M./P.M. indicator for 12-hour clock
%S      Second as decimal number (00 - 59)
%U      Week of year as decimal number, with Sunday as first day of week (00 - 53)
%w      Weekday as decimal number (0 - 6; Sunday is 0)
%W      Week of year as decimal number, with Monday as first day of week (00 - 53)
%x      Date representation for current locale
%X      Time representation for current locale
%y      Year without century, as decimal number (00 - 99)
%Y      Year with century, as decimal number
%z, %Z  Either the time-zone name or time zone abbreviation, depending on registry settings;
    no characters if time zone is unknown
%%      Percent sign

example:

teraterm-&h-%Y%m%d_%H_%M_%S.log

[Apr 08, 2020] How to Use rsync and scp Commands in Reverse Mode on Linux

Highly recommended!
Apr 08, 2020 | www.2daygeek.com

by Magesh Maruthamuthu · Last Updated: April 2, 2020

Typically, you use the rsync command or scp command to copy files from one server to another.

But if you want to perform these commands in reverse mode, how do you do that?

Have you tried this? Have you had a chance to do this?

Why would you want to do that? Under what circumstances should you use it?

Scenario-1: When you copy a file from "Server-1" to "Server-2" , you must use the rsync or scp command in the standard way.

Also, you can do from "Server-2" to "Server-1" if you need to.

To do so, you must have a password for both systems.

Scenario-2: You have a jump server and only enabled the ssh key-based authentication to access other servers (you do not have the password for that).

In this case you are only allowed to access the servers from the jump server and you cannot access the jump server from other servers.

In this scenario, if you want to copy some files from other servers to the jump server, how do you do that?

Yes, you can do this using the reverse mode of the scp or rsync command.

General Syntax of the rsync and scp Command:

The following is a general syntax of the rsync and scp commands.

rsync: rsync [Options] [Source_Location] [Destination_Location]

scp: scp [Options] [Source_Location] [Destination_Location]
General syntax of the reverse rsync and scp command:

The general syntax of the reverse rsync and scp commands as follows.

rsync: rsync [Options] [Destination_Location] [Source_Location]

scp: scp [Options] [Destination_Location] [Source_Location]
1) How to Use rsync Command in Reverse Mode with Standard Port

We will copy the "2daygeek.tar.gz" file from the "Remote Server" to the "Jump Server" using the reverse rsync command with the standard port.


# rsync -avz -e ssh root@jump.2daygeek.com:/root/2daygeek.tar.gz /root/backup
The authenticity of host 'jump.2daygeek.com (jump.2daygeek.com)' can't be established.
RSA key fingerprint is 6f:ad:07:15:65:bf:54:a6:8c:5f:c4:3b:99:e5:2d:34.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'jump.2daygeek.com' (RSA) to the list of known hosts.
root@82.165.133.65's password:
receiving file list ... done
2daygeek.tar.gz

sent 42 bytes  received 23134545 bytes  1186389.08 bytes/sec
total size is 23126674  speedup is 1.00

You can see the file copied using the ls command .

# ls -h /root/backup/*.tar.gz
total 125M
-rw-------   1 root root  23M Oct 26 01:00 2daygeek.tar.gz
2) How to Use rsync Command in Reverse Mode with Non-Standard Port

We will copy the "2daygeek.tar.gz" file from the "Remote Server" to the "Jump Server" using the reverse rsync command with the non-standard port.

# rsync -avz -e "ssh -p 11021" root@jump.2daygeek.com:/root/backup/weekly/2daygeek.tar.gz /root/backup
The authenticity of host '[jump.2daygeek.com]:11021 ([jump.2daygeek.com]:11021)' can't be established.
RSA key fingerprint is 9c:ab:c0:5b:3b:44:80:e3:db:69:5b:22:ba:d6:f1:c9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[jump.2daygeek.com]:11021' (RSA) to the list of known hosts.
root@83.170.117.138's password:
receiving incremental file list
2daygeek.tar.gz

sent 30 bytes  received 23134526 bytes  1028202.49 bytes/sec
total size is 23126674  speedup is 1.00
3) How to Use scp Command in Reverse Mode on Linux

We will copy the "2daygeek.tar.gz" file from the "Remote Server" to the "Jump Server" using the reverse scp command.

# scp root@jump.2daygeek.com:/root/backup/weekly/2daygeek.tar.gz /root/backup



[Mar 29, 2020] Why Didn't We Test Our Trade's 'Antifragility' Before COVID-19 by Gene Callahan and Joe Norman

Highly recommended!
Mar 28, 2020 | www.theamericanconservative.com

On April 21, 2011, the region of Amazon Web Services covering eastern North America crashed. The crash brought down the sites of large customers such as Quora, Foursquare, and Reddit. It took Amazon over a week to bring its system fully back online, and some customer data was lost permanently.

But one company whose site did not crash was Netflix. It turns out that Netflix had made themselves "antifragile" by employing software they called "Chaos Monkey," which regularly and randomly brought down Netflix servers. By continually crashing their own servers, Netflix learned how to nevertheless keep other portions of their network running. And so when Amazon US-East crashed, Netflix ran on, unfazed.

This phenomenon is discussed by Nassim Taleb in his book Antifragile : a system that depends on the absence of change is fragile. The companies that focused on keeping all of their servers up and running all the time went completely offline when Amazon crashed from under them. But the company that had exposed itself to lots of little crashes could handle the big crash. That is because the minor, "undesirable" changes stress the system in a way that can make it stronger.

The idea of antifragility does not apply only to computer networks. For instance, by trying to eliminate minor downturns in the economy, central bank policy can make that economy extremely vulnerable to a major recession. Running only on treadmills or tracks makes the joints extremely vulnerable when, say, one steps in a pothole in the sidewalk.

What does this have to do with trade policy? For many reasons, such as the recent coronavirus outbreak, flows of goods are subject to unexpected shocks.

Both a regime of "unfettered" free trade, and its opposite, that of complete autarchy, are fragile in the face of such shocks. A trade policy aimed not at complete free trade or protectionism, but at making an economy better at absorbing and adapting to rapid change, is more sane and salutary than either extreme. Furthermore, we suggest practicing for shocks can help make an economy antifragile.

Amongst academic economists, the pure free-trade position is more popular. The case for international trade, absent the artificial interference of government trade policy, is generally based upon the "principle of comparative advantage," first formulated by the English economist David Ricardo in the early 19th century. Ricardo pointed out, quite correctly, that even if, among two potential trading partners looking to trade a pair of goods, one of them is better at producing both of them, there still exist potential gains from trade -- so long as one of them is relatively better at producing one of the goods, and the other (as a consequence of this condition) relatively better at producing the other. For example, Lebron James may be better than his local house painter at playing basketball, and at painting houses, given his extreme athleticism and long reach. But he is so much more "better" at basketball that it can still make sense for him to concentrate on basketball and pay the painter to paint his house.

And so, per Ricardo, it is among nations: even if, say, Sweden can produce both cars and wool sweaters more efficiently than Scotland, if Scotland is relatively less bad at producing sweaters than cars, it still makes sense for Scotland to produce only wool sweaters, and trade with Sweden for the cars it needs.

When we take comparative advantage to its logical conclusion at the global scale, it suggests that each agent (say, nation) should focus on one major industry domestically and that no two agents should specialize in the same industry. To do so would be to sacrifice the supposed advantage of sourcing from the agent who is best positioned to produce a particular good, with no gain for anyone.

Good so far, but Ricardo's case contains two critical hidden assumptions: first, that the prices of the goods in question will remain more or less stable in the global marketplace, and second, that the availability of imported goods from specialized producers will remain uninterrupted, so that sacrificing local capabilities for cheaper foreign alternatives carries no risk.

So what happens in Scotland if the Swedes suddenly go crazy for yak hair sweaters (produced in Tibet) and are no longer interested in Scottish sweaters at all? The price of those sweaters crashes, and Scotland now finds itself with most of its productive capacity specialized in making a product that can only be sold at a loss.

Or what transpires if Scotland is no longer able, for whatever reason, to produce sweaters, but the Swedes need sweaters to keep warm? Swedes were perhaps once able to make their own sweaters, but have since funneled all their resources into making cars, and have even lost the knowledge of sweater-making. Now to keep warm, the Swedes have to rapidly build the infrastructure and workforce needed to make sweaters, and regain the knowledge of how to do so, as the Scots had not only been their sweater supplier, but the only global sweater supplier.

So we see that the case for extreme specialization, based on a first-order understanding of comparative advantage, collapses when faced with a second-order effect of a dramatic change in relative prices or conditions of supply.

That all may sound very theoretical, but collapses due to over-specialization, prompted by international agencies advising developing economies based on naive comparative-advantage analysis, have happened all too often. For instance, a number of African economies, persuaded to base their entire economy on a single good in which they had a comparative advantage (e.g, gold, cocoa, oil, or bauxite), saw their economies crash when the price of that commodity fell. People who had formerly been largely self-sufficient found themselves wage laborers for multinationals in good times, and dependents on foreign charity during bad times.

While the case for extreme specialization in production collapses merely by letting prices vary, it gets even worse for the "just specialize in the single thing you do best" folks once we add in considerations of pandemics, wars, extreme climate change, and other such shocks. We have just witnessed how relying on China for such a high percentage of our medical supplies and manufacturing has proven unwise when faced with an epidemic originating in China.

On a smaller scale, the great urban theorist Jane Jacobs stressed the need for economic diversity in a city if it is to flourish. Detroit's over-reliance on the automobile industry, and its subsequent collapse when that industry largely deserted it, is a prominent example of Jacobs' point. And while Detroit is perhaps the most famous example of a city collapsing due to over-specialization, it is far from the only one .

All of this suggests that trade policy, at any level, should have, as its primary goal, the encouragement of diversity in that level's economic activity. To embrace the extremes of "pure free trade" or "total self-sufficiency" is to become more susceptible to catastrophe from changing conditions. A region that can produce only a few goods is fragile in the face of an event, like the coronavirus, that disrupts the flow of outside goods. On the other hand, turning completely inward, and cutting the region off from the outside, leaves it without outside help when confronting a local disaster, like an extreme drought.

To be resilient as a social entity, whether a nation, region, city, or family, will have a diverse mix of internal and external resources it can draw upon for sustenance. Even for an individual, total specialization and complete autarchy are both bad bets. If your only skill is repairing Sony Walkmen, you were probably pretty busy in 2000, but by today you likely don't have much work. Complete individual autarchy isn't ever really even attempted: if you watch YouTube videos of supposedly "self-reliant" people in the wilderness, you will find them using axes, radios, saws, solar panels, pots and pans, shirts, shoes, tents, and many more goods produced by others.

In the technical literature, having such diversity at multiple scales is referred to as "multiscale variety." In a system that displays multiscale variety, no single scale accounts for all of the diversity of behavior in the system. The practical importance of this is related to the fact that shocks themselves come at different scales. Some shocks might be limited to a town or a region, for instance local weather events, while others can be much more widespread, such as the coronavirus pandemic we are currently facing.

A system with multiscale variety is able to respond to shocks at the scale at which they occur: if one region experiences a drought while a neighboring region does not, agricultural supplementation from the currently abundant region can be leveraged. At a smaller scale, if one field of potatoes becomes infested with a pest, while the adjacent cows in pasture are spared, the family who owns the farm will still be able to feed themselves and supply products to the market.

Understanding this, the question becomes how can trade policy, conceived broadly, promote the necessary variety and resiliency to mitigate and thrive in the face of the unexpected? Crucially, we should learn from the tech companies: practice disconnecting, and do it randomly. In our view there are two important components to the intentional disruption: (1) it is regular enough to generate "muscle memory" type responses; and (2) it is random enough that responses are not "overfit" to particular scenarios.

For an individual or family, implementing such a policy might create some hardships, but there are few institutional barriers to doing so. One week, simply declare, "Let's pretend all of the grocery stores are empty, and try getting by only on what we can produce in the yard or have stockpiled in our house!" On another occasion, perhaps, see if you can keep your house warm for a few days without input from utility companies.

Businesses are also largely free of institutional barriers to practicing disconnecting. A company can simply say, "We are awfully dependent on supplier X: this week, we are not going to order from them, and let's see what we can do instead!" A business can also seek out external alternatives to over-reliance on crucial internal resources: for instance, if your top tech guy can hold your business hostage, it is a good idea to find an outside consulting firm that could potentially fill his role.

When we get up to the scale of the nation, things become (at least institutionally) trickier. If Freedonia suddenly bans the import of goods from Ruritania, even for a week, Ruritania is likely to regard this as a "trade war," and may very well go to the WTO and seek relief. However, the point of this reorientation of trade policy is not to promote hostility to other countries, but to make one's own country more resilient. A possible solution to this problem is that a national government could periodically, at random times, buy all of the imports of some good from some other country, and stockpile them. Then the foreign supplier would have no cause for complaint: its goods are still being purchased! But domestic manufacturers would have to learn to adjust to a disappearance of the supply of palm oil from Indonesia, or tin from China, or oil from Norway.

Critics will complain that such government management of trade flows, even with the noble aim of rendering an economy antifragile, will inevitably be turned to less pure purposes, like protecting politically powerful industrialists. But so what? It is not as though the pursuit of free trade hasn't itself yielded perverse outcomes, such as the NAFTA trade agreement that ran to over one thousand pages. Any good aim is likely to suffer diversion as it passes through the rough-and-tumble of political reality. Thus, we might as well set our sights on an ideal policy, even though it won't be perfectly realized.

We must learn to deal with disruptions when success is not critical to survival. The better we become at responding to unexpected shocks, the lower the cost will be each time we face an event beyond our control that demands an adaptive response. To wait until adaptation is necessary makes us fragile when a real crisis appears. We should begin to develop an antifragile economy today, by causing our own disruptions and learning to overcome them. Deliberately disrupting our own economy may sound crazy. But then, so did deliberately crashing one's own servers, until Chaos Monkey proved that it works.

Gene Callahan teaches at the Tandon School of Engineering at New York University. Joe Norman is a data scientist and researcher at the New England Complex Systems Institute.

My Gana 20 hours ago
The most disruptive force is our own demographic change, of which govts have known for decades. The coronavirus challenge is nothing compared to what will happen because the US ed system discriminated against the poor, who will be the majority!
PierrePaul 12 hours ago
What Winston Churchill once said about the Americans is in fact true of all humans: "Americans always end up doing the right thing once they have exhausted all other options". That's just as true of the French (I write from France), since our government stopped stocking a strategic reserve of a billion breathing masks in 2013 because "we could buy them in China at a lower cost". Now we can't produce enough masks even for our hospitals.

[Mar 23, 2020] Pscp - Transfer-Copy Files to Multiple Linux Servers Using Single Shell by Ravi Saive

Dec 05, 2015 | www.tecmint.com

Pscp utility allows you to transfer/copy files to multiple remote Linux servers using single terminal with one single command, this tool is a part of Pssh (Parallel SSH Tools), which provides parallel versions of OpenSSH and other similar tools such as:

  1. pscp – is utility for copying files in parallel to a number of hosts.
  2. prsync – is a utility for efficiently copying files to multiple hosts in parallel.
  3. pnuke – it helps to kills processes on multiple remote hosts in parallel.
  4. pslurp – it helps to copy files from multiple remote hosts to a central host in parallel.
When working in a network environment where there are multiple hosts on the network, a System Administrator may find these tools listed above very useful.

In this article, we shall look at some useful examples of the Pscp utility to transfer/copy files to multiple Linux hosts on a network. To use the pscp tool, you need to install the PSSH utility on your Linux system; for the installation of PSSH you can read this article:

  1. How to Install Pssh Tool to Execute Commands on Multiple Linux Servers
Almost all the different options used with these tools are the same, except for a few that are related to the specific functionality of a given utility.

How to Use Pscp to Transfer/Copy Files to Multiple Linux Servers

While using pscp you need to create a separate file that includes the list of Linux server IP addresses and the SSH port number used to connect to each server. Let's create a new file called " myscphosts.txt " and add the list of Linux host IP addresses and SSH port (default 22 ) numbers as shown.
192.168.0.3:22
192.168.0.9:22
Once you've added hosts to the file, it's time to copy files from the local machine to multiple Linux hosts under the /tmp directory with the help of the following command.
# pscp -h myscphosts.txt -l tecmint -Av wine-1.7.55.tar.bz2 /tmp/
OR
# pscp.pssh -h myscphosts.txt -l tecmint -Av wine-1.7.55.tar.bz2 /tmp/
Sample Output
Warning: do not enter your password if anyone else has superuser
privileges or access to your account.
Password: 
[1] 17:48:25 [SUCCESS] 192.168.0.3:22
[2] 17:48:35 [SUCCESS] 192.168.0.9:22
Explanation of the options used in the above command:
  1. -h switch used to read a hosts from a given file and location.
  2. -l switch reads a default username on all hosts that do not define a specific user.
  3. -A switch tells pscp to ask for a password and send it to ssh.
  4. -v switch is used to run pscp in verbose mode.
Copy Directories to Multiple Linux Servers

If you want to copy an entire directory, use the -r option, which will recursively copy entire directories as shown.
# pscp -h myscphosts.txt -l tecmint -Av -r Android\ Games/ /tmp/
OR
# pscp.pssh -h myscphosts.txt -l tecmint -Av -r Android\ Games/ /tmp/
Sample Output
Warning: do not enter your password if anyone else has superuser
privileges or access to your account.
Password: 
[1] 17:48:25 [SUCCESS] 192.168.0.3:22
[2] 17:48:35 [SUCCESS] 192.168.0.9:22

You can view the manual page for pscp, or use the pscp --help command, to get help.

  1. Ashwini R says: January 24, 2019 at 7:13 pm

    It didn't work for me as well. I can get into the machine through same ip and port as I've inserted into hosts.txt file. Still i get the below messages:

    [root@node1 ~]# pscp -h myscphosts.txt root -Av LoadKafkaRN.jar /home/
    [1] 13:37:42 [FAILURE] 173.37.29.85:22 Exited with error code 1
    [2] 13:37:42 [FAILURE] 173.37.29.2:22 Exited with error code 1
    [3] 13:37:42 [FAILURE] 173.37.28.176:22 Exited with error code 1
    [4] 13:37:42 [FAILURE] 173.37.28.121:22 Exited with error code 1
    
  2. Ankit Tiwari says: November 28, 2016 at 11:26 am

    Hi,

    I am following this tutorial to copy a file to multiple system but its giving error. The code i am using is

    pscp -h myhost.txt -l zabbix -Av show-image-1920×1080.jpg /home/zabbix/

    but it gives error

    [1] 11:18:50 [FAILURE] 192.168.0.244:22 Exited with error code 1

    • Ravi Saive says: November 28, 2016 at 1:00 pm

      @Ankit,

      Have you placed correct remote SSH host IP address and port number in the myscphosts.txt file? please confirm and add correct values and then try again..

    jHz says: September 7, 2016 at 7:18 am

    Hi,

    I am trying to copy one file from 30 hosts to one central computer by following you article but no success.
    I am using pscp command for this purpose:

    pscp -h hosts.txt /camera/1.jpg /camera/1.jpg

    where camera directory has been created already in which 1.jpg exists. It always give me error:

    Exited with error code 1

    I have also tried pscp command to copy file from one host to server:

    pscp -H "192.168.0.101" /camera/1.jpg /camera/1.jpg

    but it also returned me with the same error.

    Any help will be much appreciated.
    Thanks in advance.

[Mar 23, 2020] Copy Specific File Types While Keeping Directory Structure In Linux by sk

I think this approach is way too complex. A simpler and more reliable approach is first to create the directory structure and then, as the second stage, to copy the files.
The use of the cp command option is interesting, though.
Notable quotes:
"... create the intermediate parent directories if needed to preserve the parent directory structure. ..."
Mar 19, 2020 | www.ostechnix.com
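The article's own commands are not reproduced above, but the technique it covers can be sketched with GNU cp's --parents option, which creates the intermediate parent directories as it copies. A rough sketch, assuming we only want the *.conf files from /etc mirrored into /backup (the paths and pattern are examples):

# copy only *.conf files from /etc into /backup, preserving the directory tree
mkdir -p /backup
cd /etc && find . -type f -name '*.conf' -exec cp --parents -t /backup {} +

Alternatively, rsync -am --include='*/' --include='*.conf' --exclude='*' /etc/ /backup/ achieves the same result in a single pass.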

[Mar 23, 2020] How to setup nrpe for client side monitoring - LinuxConfig.org

Mar 23, 2020 | linuxconfig.org

In this tutorial you will learn:

[Mar 12, 2020] 7 tips to speed up your Linux command line navigation Enable Sysadmin

Mar 12, 2020 | www.redhat.com

A bonus shortcut

You can use the keyboard combination, Alt+. , to repeat the last argument.

Note: The shortcut is Alt+. (dot).

$ mkdir /path/to/mydir

$ cd Alt.

You are now in the /path/to/mydir directory.

[Mar 05, 2020] Using Ctags with MC

Mar 05, 2020 | frankhesse.wordpress.com

It is surprising how capable the Midnight Commander's built-in editor turned out to be. Below is one of the features of mc 4.7, namely the use of the ctags / etags utilities together with mcedit to navigate through the code.

Code Navigation

Preparation

Support for this functionality appeared in mcedit starting with version 4.7.0-pre1.
To use it, you need to index the project directory with the ctags or etags utility; to do this, run the following commands:

$ cd /home/user/projects/myproj
$ find . -type f -name "*.[ch]" | etags -lc --declarations -

or
$ find . -type f -name "*.[ch]" | ctags --c-kinds=+p --fields=+iaS --extra=+q -e -L-


After the utility completes, a TAGS file will appear in the root directory of our project, which mcedit will use.
Well, that is practically all that needs to be done in order for mcedit to be able to find the definitions of functions, variables, or properties of the object under study.

Usage

Imagine that we need to find the place where the locked property of an edit object is defined in the source code of a rather large project.


/* Succesful, so unlock both files */
if (different_filename) {
if (save_lock)
edit_unlock_file (exp);
if (edit->locked)
edit->locked = edit_unlock_file (edit->filename);
} else {
if (edit->locked || save_lock)
edit->locked = edit_unlock_file (edit->filename);
}


To do this, put the cursor at the end of the word locked and press Alt+Enter; a list of possible options appears, as in the screenshot below.
image

After selecting the desired option, we get to the line with the definition.

[Mar 05, 2020] How to switch the editor in mc (midnight commander) from nano to mcedit?

Jan 01, 2014 | askubuntu.com




sdu ,

Using Ubuntu 10.10, the editor in mc (midnight commander) is nano. How can I switch to the internal mc editor (mcedit)?

Isaiah ,

Press the following keys in order, one at a time:
  1. F9 Activates the top menu.
  2. o Selects the Option menu.
  3. c Opens the configuration dialog.
  4. i Toggles the use internal edit option.
  5. s Saves your preferences.

Hurnst , 2014-06-21 02:34:51

Run MC as usual. On the command line right above the bottom row of menu selections type select-editor . This should open a menu with a list of all of your installed editors. This is working for me on all my current linux machines.

, 2010-12-09 18:07:18

You can also change the standard editor. Open a terminal and type this command:
sudo update-alternatives --config editor

You will get an list of the installed editors on your system, and you can chose your favorite.

AntonioK , 2015-01-27 07:06:33

If you want to leave mc and the system settings as they are now, you may just run it like
$ EDITOR=mcedit mc

> ,

Open Midnight Commander, go to Options -> Configuration and check "use internal editor". Hit Save and you are done.

[Mar 05, 2020] How to change your hostname in Linux Enable Sysadmin

Mar 05, 2020 | www.redhat.com

What's in a name, you ask? Everything. It's how other systems, services, and users "see" your system.

Posted March 3, 2020 | by Tyler Carrigan (Red Hat)


Your hostname is a vital piece of system information that you need to keep track of as a system administrator. Hostnames are the designations by which we separate systems into easily recognizable assets. This information is especially important to make a note of when working on a remotely managed system. I have experienced multiple instances of companies changing the hostnames or IPs of storage servers and then wondering why their data replication broke. There are many ways to change your hostname in Linux; however, in this article, I'll focus on changing your name as viewed by the network (specifically in Red Hat Enterprise Linux and Fedora).

Background

A quick bit of background. Before the invention of DNS, your computer's hostname was managed through the HOSTS file located at /etc/hosts . Anytime that a new computer was connected to your local network, all other computers on the network needed to add the new machine into the /etc/hosts file in order to communicate over the network. As this method did not scale with the transition into the world wide web era, DNS was a clear way forward. With DNS configured, your systems are smart enough to translate unique IPs into hostnames and back again, ensuring that there is little confusion in web communications.

Modern Linux systems have three different types of hostnames configured. To minimize confusion, I list them here and provide basic information on each as well as a personal best practice:

Static – the traditional hostname. It is stored in /etc/hostname and is chosen by the administrator.
Pretty – a free-form, human-readable (UTF-8) name used for presentation to the user.
Transient – a dynamic hostname maintained by the kernel. It defaults to the static name, but can be changed at runtime, for example by DHCP or mDNS.

It is recommended to pick a pretty hostname that is unique and not easily confused with other systems. Allow the transient and static names to be variations on the pretty, and you will be good to go in most circumstances.

Working with hostnames

Now, let's look at how to view your current hostname. The most basic command used to see this information is hostname -f . This command displays the system's fully qualified domain name (FQDN). To relate back to the three types of hostnames, this is your transient hostname. A better way, at least in terms of the information provided, is to use the systemd command hostnamectl to view your transient hostname and other system information:

Image

Before moving on from the hostname command, I'll show you how to use it to change your transient hostname. Using hostname <x> (where x is the new hostname), you can change your network name quickly, but be careful. I once changed the hostname of a customer's server by accident while trying to view it. That was a small but painful error that I overlooked for several hours. You can see that process below:

Image

It is also possible to use the hostnamectl command to change your hostname. This command, in conjunction with the right flags, can be used to alter all three types of hostnames. As stated previously, for the purposes of this article, our focus is on the transient hostname. The command and its output look something like this:

Image
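Since the screenshots are not reproduced here, a rough sketch of what those hostnamectl commands look like on the command line (the new name is just an example):

# show the static, pretty, and transient names plus other system details
hostnamectl status

# change only the transient hostname; it takes effect immediately
sudo hostnamectl set-hostname webserver01 --transient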

The final method to look at is the sysctl command. This command allows you to change the kernel parameter for your transient name without having to reboot the system. That method looks something like this:

Image

GNOME tip

Using GNOME, you can go to Settings -> Details to view and change the static and pretty hostnames. See below:

Image

Wrapping up

I hope that you found this information useful as a quick and easy way to manipulate your machine's network-visible hostname. Remember to always be careful when changing system hostnames, especially in enterprise environments, and to document changes as they are made.


[Mar 05, 2020] Micro data center

Mar 05, 2020 | en.wikipedia.org

A micro data center (MDC) is a smaller or containerized (modular) data center architecture designed for computer workloads that do not require traditional facilities. While its size may vary from a single rack to a shipping container, a micro data center may include fewer than four servers in a single 19-inch rack. It may come with built-in security systems, cooling systems, and fire protection. Typically these are standalone rack-level systems containing all the components of a 'traditional' data center, [1] including in-rack cooling, power supply, power backup, security, and fire detection and suppression. Designs exist where energy is conserved by means of temperature chaining, in combination with liquid cooling. [2]

In mid-2017, technology introduced by the DOME project was demonstrated enabling 64 high-performance servers, storage, networking, power and cooling to be integrated in a 2U 19" rack-unit. This packaging, sometimes called 'datacenter-in-a-box' allows deployments in spaces where traditional data centers do not fit, such as factory floors ( IOT ) and dense city centers, especially for edge-computing and edge-analytics.

MDCs are typically portable and provide plug and play features. They can be rapidly deployed indoors or outdoors, in remote locations, for a branch office, or for temporary use in high-risk zones. [3] They enable distributed workloads , minimizing downtime and increasing speed of response.

[Mar 05, 2020] What's next for data centers Think micro data centers by Larry Dignan

Apr 14, 2019 | www.zdnet.com

A micro data center, a mini version of a data center rack, could work as edge computing takes hold in various industries. Here's a look at the moving parts behind the micro data center concept.

[Mar 05, 2020] The 3-2-1 rule for backups says there should be at least three copies or versions of data stored on two different pieces of media, one of which is off-site

Mar 05, 2020 | www.networkworld.com

As the number of places where we store data increases, the basic concept of what is referred to as the 3-2-1 rule often gets forgotten. This is a problem, because the 3-2-1 rule is easily one of the most foundational concepts of backup design. It's important to understand why the rule was created, and how it's currently being interpreted in an increasingly tapeless world.

What is the 3-2-1 rule for backup?

The 3-2-1 rule says there should be at least three copies or versions of data stored on two different pieces of media, one of which is off-site. Let's take a look at each of the three elements and what it addresses.

Mind the air gap

An air gap is a way of securing a copy of data by placing it on a machine on a network that is physically separate from the data it is backing up. It literally means there is a gap of air between the primary and the backup. This air gap accomplishes more than simple disaster recovery; it is also very useful for protecting against hackers.

If all backups are accessible via the same computers that might be attacked, it is possible that a hacker could use a compromised server to attack your backup server. By separating the backup from the primary via an air gap, you make it harder for a hacker to pull that off. It's still not impossible, just harder.

Everyone wants an air gap. The discussion these days is how to accomplish an air gap without using tapes. Back in the days of tape backup, it was easy to provide an air gap. You made a backup copy of your data and put it in a box, then you handed it to an Iron Mountain driver. Instantly, there was a gap of air between your primary and your backup. It was close to impossible for a hacker to attack both the primary and the backup.

That is not to say it was impossible; it just made it harder. For hackers to attack your secondary copy, they needed to resort to a physical attack via social engineering. You might think that tapes stored in an off-site storage facility would be impervious to a physical attack via social engineering, but that is definitely not the case. (I have personally participated in white hat attacks of off-site storage facilities, successfully penetrated them and been left unattended with other people's backups.) Most hackers don't resort to physical attacks because they are just too risky, so air-gapping backups greatly reduces the risk that they will be compromised.

Faulty 3-2-1 implementations

Many things that pass for backup systems now do not pass even the most liberal interpretation of the 3-2-1 rule. A perfect example of this would be various cloud-based services that store the backups on the same servers and the same storage facility that they are protecting, ignoring the "2" and the "1" in this important rule.

[Mar 05, 2020] Cloud computing More costly, complicated and frustrating than expected by Daphne Leprince-Ringuet

Highly recommended!
Cost estimates in optimistic spreadsheets and the actual costs of large-scale moves to the cloud are very different. Companies that jumped on the cloud bandwagon now discover that the savings are illusory and that control over the infrastructure is difficult. On top of that, the cloud provider now controls their future.
Notable quotes:
"... On average, businesses started planning their migration to the cloud in 2015, and kicked off the process in 2016. According to the report, one reason clearly stood out as the push factor to adopt cloud computing : 61% of businesses started the move primarily to reduce the costs of keeping data on-premises. ..."
"... Capita's head of cloud and platform Wasif Afghan told ZDNet: "There has been a sort of hype about cloud in the past few years. Those who have started migrating really focused on cost saving and rushed in without a clear strategy. Now, a high percentage of enterprises have not seen the outcomes they expected. ..."
"... The challenges "continue to spiral," noted Capita's report, and they are not going away; what's more, they come at a cost. Up to 58% of organisations said that moving to the cloud has been more expensive than initially thought. The trend is not only confined to the UK: the financial burden of moving to the cloud is a global concern. Research firm Canalys found that organisations splashed out a record $107 billion (£83 billion) for cloud computing infrastructure last year, up 37% from 2018, and that the bill is only set to increase in the next five years. Afghan also pointed to recent research by Gartner, which predicted that through 2020, 80% of organisations will overshoot their cloud infrastructure budgets because of their failure to manage cost optimisation. ..."
"... Clearly, the escalating costs of switching to the cloud is coming as a shock to some businesses - especially so because they started the move to cut costs. ..."
"... As a result, IT leaders are left feeling frustrated and underwhelmed by the promises of cloud technology ..."
Feb 27, 2020 | www.zdnet.com

Cloud computing: More costly, complicated and frustrating than expected - but still essential

A new report by Capita shows that UK businesses are growing disillusioned by their move to the cloud. It might be because they are focusing too much on the wrong goals. Migrating to the cloud seems to be on every CIO's to-do list these days. But despite the hype, almost 60% of UK businesses think that cloud has over-promised and under-delivered, according to a report commissioned by consulting company Capita.

The research surveyed 200 IT decision-makers in the UK, and found that an overwhelming nine in ten respondents admitted that cloud migration has been delayed in their organisation due to "unforeseen factors".

On average, businesses started planning their migration to the cloud in 2015, and kicked off the process in 2016. According to the report, one reason clearly stood out as the push factor to adopt cloud computing : 61% of businesses started the move primarily to reduce the costs of keeping data on-premises.

But with organisations setting aside only one year to prepare for migration, which the report described as "less than adequate planning time," it is no surprise that most companies have encountered stumbling blocks on their journey to the cloud.

Capita's head of cloud and platform Wasif Afghan told ZDNet: "There has been a sort of hype about cloud in the past few years. Those who have started migrating really focused on cost saving and rushed in without a clear strategy. Now, a high percentage of enterprises have not seen the outcomes they expected. "

Four years later, in fact, less than half (45%) of the companies' workloads and applications have successfully migrated, according to Capita. A meager 5% of respondents reported that they had not experienced any challenge in cloud migration; but their fellow IT leaders blamed security issues and the lack of internal skills as the main obstacles they have had to tackle so far.

Half of respondents said that they had to re-architect more workloads than expected to optimise them for the cloud. Afghan noted that many businesses have adopted a "lift and shift" approach, taking everything they were storing on premises and shifting it into the public cloud. "Except in some cases, you need to re-architect the application," said Afghan, "and now it's catching up with organisations."

The challenges "continue to spiral," noted Capita's report, and they are not going away; what's more, they come at a cost. Up to 58% of organisations said that moving to the cloud has been more expensive than initially thought. The trend is not only confined to the UK: the financial burden of moving to the cloud is a global concern. Research firm Canalys found that organisations splashed out a record $107 billion (£83 billion) for cloud computing infrastructure last year, up 37% from 2018, and that the bill is only set to increase in the next five years. Afghan also pointed to recent research by Gartner, which predicted that through 2020, 80% of organisations will overshoot their cloud infrastructure budgets because of their failure to manage cost optimisation.

Infrastructure, however, is not the only cost of moving to the cloud. IDC analysed the overall spending on cloud services, and predicted that investments will reach $500 billion (£388.4 billion) globally by 2023. Clearly, the escalating costs of switching to the cloud is coming as a shock to some businesses - especially so because they started the move to cut costs.

Afghan said: "From speaking to clients, it is pretty clear that cloud expense is one of their chief concerns. The main thing on their minds right now is how to control that spend." His response to them, he continued, is better planning. "If you decide to move an application in the cloud, make sure you architect it so that you get the best return on investment," he argued. "And then monitor it. The cloud is dynamic - it's not a one-off event."

Capita's research did find that IT leaders still have faith in the cloud, with the majority (86%) of respondents agreeing that the benefits of the cloud will outweigh its downsides. But on the other hand, only a third of organisations said that labour and logistical costs have decreased since migrating; and a minority (16%) said they were "extremely satisfied" with the move.

"Most organisations have not yet seen the full benefits or transformative potential of their cloud investments," noted the report.

As a result, IT leaders are left feeling frustrated and underwhelmed by the promises of cloud technology ...

Cloud Cloud computing: Spending is breaking records, Microsoft Azure slowly closes the gap on AWS

[Mar 05, 2020] How to tell if you're using a bash builtin in Linux

Mar 05, 2020 | www.networkworld.com

One quick way to determine whether the command you are using is a bash built-in or not is to use the command "command". Yes, the command is called "command". Try it with a -V (capital V) option like this:

$ command -V command
command is a shell builtin
$ command -V echo
echo is a shell builtin
$ command -V date
date is hashed (/bin/date)

When you see an "is hashed" message like the one above ("date is hashed (/bin/date)"), it means that the command has been put into a hash table for quicker lookup.
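As an aside (standard bash behavior, not part of the original article), the hash builtin lets you inspect or reset that table:

$ hash           # list the commands bash has hashed so far in this session
hits    command
   1    /bin/date
$ hash -r        # forget all remembered locations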

... ... ...

How to tell what shell you're currently using

If you switch shells you can't depend on $SHELL to tell you what shell you're currently using because $SHELL is just an environment variable that is set when you log in and doesn't necessarily reflect your current shell. Try ps -p $$ instead as shown in these examples:

$ ps -p $$
  PID TTY          TIME CMD
18340 pts/0    00:00:00 bash    <==
$ /bin/dash
$ ps -p $$
  PID TTY          TIME CMD
19517 pts/0    00:00:00 dash    <==

Built-ins are extremely useful and give each shell a lot of its character. If you use some particular shell all of the time, it's easy to lose track of which commands are part of your shell and which are not.

Differentiating a shell built-in from a Linux executable requires only a little extra effort.
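Another quick check worth knowing (not mentioned in the article; the executable path varies by distribution) is the type builtin, which lists every form of a name that bash can resolve:

$ type -a echo
echo is a shell builtin
echo is /usr/bin/echo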

[Mar 05, 2020] Bash IDE - Visual Studio Marketplace

Notable quotes:
"... all your shell scripts ..."
Mar 05, 2020 | marketplace.visualstudio.com
Bash IDE

Visual Studio Code extension utilizing the bash language server, which is based on Tree Sitter and its grammar for Bash, and supports explainshell integration.

Features

Configuration

To get documentation for flags on hover (thanks to explainshell), run the explainshell Docker container :

docker run --rm --name bash-explainshell -p 5000:5000 chrismwendt/codeintel-bash-with-explainshell

And add this to your VS Code settings:

    "bashIde.explainshellEndpoint": "http://localhost:5000",

For security reasons, it defaults to "", which disables explainshell integration. When set, this extension will send requests to the endpoint and display documentation for flags.

Once https://github.com/idank/explainshell/pull/125 is merged, it would be possible to set this to "https://explainshell.com"; however, doing this is not recommended, as it will leak all your shell scripts to a third party. Do this at your own risk or, better, always use a locally running Docker image.

[Mar 04, 2020] A command-line HTML pretty-printer Making messy HTML readable - Stack Overflow

Jan 01, 2019 | stackoverflow.com




jonjbar ,

Have a look at the HTML Tidy Project: http://www.html-tidy.org/

The granddaddy of HTML tools, with support for modern standards.

There used to be a fork called tidy-html5, which has since become the official version. Here is its GitHub repository .

Tidy is a console application for Mac OS X, Linux, Windows, UNIX, and more. It corrects and cleans up HTML and XML documents by fixing markup errors and upgrading legacy code to modern standards.

For your needs, here is the command line to call Tidy:

tidy inputfile.html
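If you want indented, quieter output written to a separate file, a few of Tidy's common options can be added; this is just a sketch using documented flags (-indent, -quiet, -output):

tidy -indent -quiet -output pretty.html inputfile.html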

Paul Brit ,

Update 2018: homebrew/dupes is now deprecated; tidy-html5 may be installed directly.
brew install tidy-html5

Original reply:

Tidy from OS X doesn't support HTML5 . But there is experimental branch on Github which does.

To get it:

 brew tap homebrew/dupes
 brew install tidy --HEAD
 brew untap homebrew/dupes

That's it! Have fun!

Boris , 2019-11-16 01:27:35

Error: No available formula with the name "tidy" . brew install tidy-html5 works. – Pysis Apr 4 '17 at 13:34

[Feb 29, 2020] files - How to get over device or resource busy

Jan 01, 2011 | unix.stackexchange.com

ripper234 , 2011-04-13 08:51:26

I tried to rm -rf a folder, and got "device or resource busy".

In Windows, I would have used LockHunter to resolve this. What's the linux equivalent? (Please give as answer a simple "unlock this" method, and not complete articles like this one . Although they're useful, I'm currently interested in just ASimpleMethodThatWorks™)

camh , 2011-04-13 09:22:46

The tool you want is lsof , which stands for list open files .

It has a lot of options, so check the man page, but if you want to see all open files under a directory:

lsof +D /path

That will recurse through the filesystem under /path , so beware doing it on large directory trees.

Once you know which processes have files open, you can exit those apps, or kill them with the kill(1) command.

kip2 , 2014-04-03 01:24:22

sometimes it's the result of mounting issues, so I'd unmount the filesystem or directory you're trying to remove:

umount /path

BillThor ,

I use fuser for this kind of thing. It will list which process is using a file or files within a mount.
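A minimal sketch of that approach (the mount point path below is just a placeholder):

fuser -vm /path/to/mountpoint     # verbosely list processes using files under the mount point
fuser -km /path/to/mountpoint     # kill those processes (SIGKILL by default) -- use with care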

user73011 ,

Here is the solution:
  1. Go into the directory and type ls -a
  2. You will find a .xyz file
  3. vi .xyz and look into what is the content of the file
  4. ps -ef | grep username
  5. You will see the .xyz content in the 8th column (last row)
  6. kill -9 job_ids - where job_ids is the value of the 2nd column of corresponding error caused content in the 8th column
  7. Now try to delete the folder or file.

Choylton B. Higginbottom ,

I had this same issue, built a one-liner starting with @camh recommendation:
lsof +D ./ | awk '{print $2}' | tail -n +2 | xargs kill -9

The awk command grabs the PIDs. The tail command gets rid of the pesky first entry: "PID". I used -9 on kill, others might have safer options.

user5359531 ,

I experience this frequently on servers that have NFS network file systems. I am assuming it has something to do with the filesystem, since the files are typically named like .nfs000000123089abcxyz .

My typical solution is to rename or move the parent directory of the file, then come back later in a day or two and the file will have been removed automatically, at which point I am free to delete the directory.

This typically happens in directories where I am installing or compiling software libraries.

gloriphobia , 2017-03-23 12:56:22

I had this problem when an automated test created a ramdisk. The commands suggested in the other answers, lsof and fuser , were of no help. After the tests I tried to unmount it and then delete the folder. I was really confused for ages because I couldn't get rid of it -- I kept getting "Device or resource busy" !

By accident I found out how to get rid of a ramdisk. I had to unmount it the same number of times that I had run the mount command, i.e. sudo umount path

Due to the fact that it was created using automated testing, it got mounted many times, hence why I couldn't get rid of it by simply unmounting it once after the tests. So, after I manually unmounted it lots of times it finally became a regular folder again and I could delete it.

Hopefully this can help someone else who comes across this problem!

bil , 2018-04-04 14:10:20

Riffing off of Prabhat's question above, I had this issue in macOS High Sierra when I stranded an encfs process. Rebooting solved it, but this
ps -ef | grep name-of-busy-dir

Showed me the process and the PID (column two).

sudo kill -15 pid-here

fixed it.

Prabhat Kumar Singh , 2017-08-01 08:07:36

If you have the server accessible, Try

Deleting that dir from the server

Or, do umount and mount again; try umount -l (lazy umount) if you face any issue with a normal umount.

I too had this problem where

lsof +D path : gives no output

ps -ef : gives no relevant information

[Feb 28, 2020] linux - Convert a time span in seconds to formatted time in shell - Stack Overflow

Jan 01, 2012 | stackoverflow.com

Convert a time span in seconds to formatted time in shell Ask Question Asked 7 years, 3 months ago Active 2 years ago Viewed 43k times


Darren , 2012-11-16 18:59:53

I have a variable of $i which is seconds in a shell script, and I am trying to convert it to 24 HOUR HH:MM:SS. Is this possible in shell?

sampson-chen , 2012-11-16 19:17:51

Here's a fun hacky way to do exactly what you are looking for =)
date -u -d @${i} +"%T"

Explanation: the -u option makes date print in UTC, -d @${i} tells date to interpret $i as seconds since the Unix epoch, and +"%T" formats the result as HH:MM:SS.

glenn jackman ,

Another approach: arithmetic
i=6789
((sec=i%60, i/=60, min=i%60, hrs=i/60))
timestamp=$(printf "%d:%02d:%02d" $hrs $min $sec)
echo $timestamp

produces 1:53:09

Alan Tam , 2014-02-17 06:48:21

The -d argument applies to date from coreutils (Linux) only.

In BSD/OS X, use

date -u -r $i +%T

kossboss , 2015-01-07 13:43:36

Here are my algo/script helpers on my site: http://ram.kossboss.com/seconds-to-split-time-convert/ I used this elegant algo from here: Convert seconds to hours, minutes, seconds
convertsecs() {
 ((h=${1}/3600))
 ((m=(${1}%3600)/60))
 ((s=${1}%60))
 printf "%02d:%02d:%02d\n" $h $m $s
}
TIME1="36"
TIME2="1036"
TIME3="91925"

echo $(convertsecs $TIME1)
echo $(convertsecs $TIME2)
echo $(convertsecs $TIME3)

Example of my second to day, hour, minute, second converter:

# convert seconds to day-hour:min:sec
convertsecs2dhms() {
 ((d=${1}/(60*60*24)))
 ((h=(${1}%(60*60*24))/(60*60)))
 ((m=(${1}%(60*60))/60))
 ((s=${1}%60))
 printf "%02d-%02d:%02d:%02d\n" $d $h $m $s
 # PRETTY OUTPUT: uncomment below printf and comment out above printf if you want prettier output
 # printf "%02dd %02dh %02dm %02ds\n" $d $h $m $s
}
# setting test variables: testing some constant variables & evaluated variables
TIME1="36"
TIME2="1036"
TIME3="91925"
# one way to output results
((TIME4=$TIME3*2)) # 183850
((TIME5=$TIME3*$TIME1)) # 3309300
((TIME6=100*86400+3*3600+40*60+31)) # 8653231 s = 100 days + 3 hours + 40 min + 31 sec
# outputting results: another way to show results (via echo & command substitution with         backticks)
echo $TIME1 - `convertsecs2dhms $TIME1`
echo $TIME2 - `convertsecs2dhms $TIME2`
echo $TIME3 - `convertsecs2dhms $TIME3`
echo $TIME4 - `convertsecs2dhms $TIME4`
echo $TIME5 - `convertsecs2dhms $TIME5`
echo $TIME6 - `convertsecs2dhms $TIME6`

# OUTPUT WOULD BE LIKE THIS (If none pretty printf used): 
# 36 - 00-00:00:36
# 1036 - 00-00:17:16
# 91925 - 01-01:32:05
# 183850 - 02-03:04:10
# 3309300 - 38-07:15:00
# 8653231 - 100-03:40:31
# OUTPUT WOULD BE LIKE THIS (If pretty printf used): 
# 36 - 00d 00h 00m 36s
# 1036 - 00d 00h 17m 16s
# 91925 - 01d 01h 32m 05s
# 183850 - 02d 03h 04m 10s
# 3309300 - 38d 07h 15m 00s
# 8653231 - 100d 03h 40m 31s

Basile Starynkevitch ,

If $i represents some date in second since the Epoch, you could display it with
  date -u -d @$i +%H:%M:%S

but you seem to suppose that $i is an interval (e.g. some duration), not a date, and then I don't understand what you want.

Shilv , 2016-11-24 09:18:57

I use C shell, like this:
#! /bin/csh -f

set begDate_r = `date +%s`
set endDate_r = `date +%s`

set secs = `echo "$endDate_r - $begDate_r" | bc`
set h = `echo $secs/3600 | bc`
set m = `echo "$secs/60 - 60*$h" | bc`
set s = `echo $secs%60 | bc`

echo "Formatted Time: $h HOUR(s) - $m MIN(s) - $s SEC(s)"
Continuing @Darren's answer, just to be clear: if you want the conversion in your local time zone, don't use the -u switch, as in: date -d @$i +%T or, in some cases, date -d @"$i" +%T

[Feb 22, 2020] How To Use Rsync to Sync Local and Remote Directories on a VPS by Justin Ellingwood

Feb 22, 2020 | www.digitalocean.com

... ... ...

Useful Options for Rsync


Rsync provides many options for altering the default behavior of the utility. We have already discussed some of the more necessary flags.

If you are transferring files that have not already been compressed, like text files, you can reduce the network transfer by adding compression with the -z option:

[Feb 18, 2020] Automation Armageddon: a Legitimate Worry? reviewed the history of automation, focused on projections of gloom-and-doom by Michael Olenick

Relatively simple automation often beats more complex systems. By far.
Notable quotes:
"... My guess is we're heading for something in-between, a place where artisanal bakers use locally grown wheat, made affordable thanks to machine milling. Where small family-owned bakeries rely on automation tech to do the undifferentiated grunt-work. The robots in my future are more likely to look more like cash registers and less like Terminators. ..."
"... I gave a guest lecture to a roomful of young roboticists (largely undergrad, some first year grad engineering students) a decade ago. After discussing the economics/finance of creating and selling a burgerbot, asked about those that would be unemployed by the contraption. One student immediately snorted out, "Not my problem!" Another replied, "But what if they cannot do anything else?". Again, "Not my problem!". And that is San Josie in a nutshell. ..."
"... One counter-argument might be that while hoping for the best it might be prudent to prepare for the worst. Currently, and for a couple of decades, the efficiency gains have been left to the market to allocate. Some might argue that for the common good then the government might need to be more active. ..."
"... "Too much automation is really all about narrowing the choices in your life and making it cheaper instead of enabling a richer lifestyle." Many times the only way to automate the creation of a product is to change it to fit the machine. ..."
"... You've gotta' get out of Paris: great French bread remains awesome. I live here. I've lived here for over half a decade and know many elderly French. The bread, from the right bakeries, remains great. ..."
"... I agree with others here who distinguish between labor saving automation and labor eliminating automation, but I don't think the former per se is the problem as much as the gradual shift toward the mentality and "rightness" of mass production and globalization. ..."
"... I was exposed to that conflict, in a small way, because my father was an investment manager. He told me they were considering investing in a smallish Swiss pasta (IIRC) factory. He was frustrated with the negotiations; the owners just weren't interested in getting a lot bigger – which would be the point of the investment, from the investors' POV. ..."
"... Incidentally, this is a possible approach to a better, more sustainable economy: substitute craft for capital and resources, on as large a scale as possible. More value with less consumption. But how we get there from here is another question. ..."
"... The Ten Commandments do not apply to corporations. ..."
"... But what happens when the bread machine is connected to the internet, can't function without an active internet connection, and requires an annual subscription to use? ..."
"... Until 100 petaflops costs less than a typical human worker total automation isn't going to happen. Developments in AI software can't overcome basic hardware limits. ..."
"... When I started doing robotics, I developed a working definition of a robot as: (a.) Senses its environment; (b.) Has goals and goal-seeking logic; (c.) Has means to affect environment in order to get goal and reality (the environment) to converge. Under that definition, Amazon's Alexa and your household air conditioning and heating system both qualify as "robot". ..."
"... The addition of a computer (with a program, or even downloadable-on-the-fly programs) to a static machine, e.g. today's computer-controlled-manufacturing machines (lathes, milling, welding, plasma cutters, etc.) makes a massive change in utility. It's almost the same physically, but ever so much more flexible, useful, and more profitable to own/operate. ..."
"... And if you add massive databases, internet connectivity, the latest machine-learning, language and image processing and some nefarious intent, then you get into trouble. ..."
Oct 25, 2019 | www.nakedcapitalism.com

By Michael Olenick, a research fellow at INSEAD who writes regularly at Olen on Economics and Innowiki . Originally published at Innowiki

Part I , "Automation Armageddon: a Legitimate Worry?" reviewed the history of automation, focused on projections of gloom-and-doom.

"It smells like death," is how a friend of mine described a nearby chain grocery store. He tends to exaggerate and visiting France admittedly brings about strong feelings of passion. Anyway, the only reason we go there is for things like foil or plastic bags that aren't available at any of the smaller stores.

Before getting to why that matters – and, yes, it does matter – first a tasty digression.

I live in a French village. To the French, high-quality food is a vital component to good life.

My daughter counts eight independent bakeries on the short drive between home and school. Most are owned by a couple of people. Counting high-quality bakeries embedded in grocery stores would add a few more. Going out of our way more than a minute or two would more than double that number.

Typical Bakery: Bread is cooked at least twice daily

Despite so many, the bakeries seem to do well. In the half-decade I've been here, three new ones opened and none of the old ones closed. They all seem to be busy. Bakeries are normally owner operated. The busiest might employ a few people but many are mom-and-pop operations with him baking and her selling. To remain economically viable, they rely on a dance of people and robots. Flour arrives in sacks with high-quality grains milled by machines. People measure ingredients, with each bakery using slightly different recipes. A human-fed robot mixes and kneads the ingredients into the dough. Some kind of machine churns the lumps of dough into baguettes.

https://www.youtube.com/embed/O22jWIjcdaY?feature=oembed


Baguette Forming Machine: This would make a good animated GIF

The baker places the formed baguettes onto baking trays then puts them in the oven. Big ovens maintain a steady temperature while timers keep track of how long various loaves of bread have been baking. Despite the sensors, bakers make the final decision when to pull the loaves out, with some preferring a bien cuit more cooked flavor and others a softer crust. Finally, a person uses a robot in the form of a cash register to ring up transactions and processes payments, either by cash or card.

Nobody -- not the owners, workers, or customers -- think twice about any of this. I doubt most people realize how much automation technology is involved or even that much of the equipment is automation tech. There would be no improvement in quality mixing and kneading the dough by hand. There would, however, be an enormous increase in cost. The baguette forming machines churn out exactly what a person would do by hand, only faster and at a far lower cost. We take the thermostatically controlled ovens for granted. However, for anybody who has tried to cook over wood controlling heat via air and fuel, thermostatically controlled ovens are clearly automation technology.

Is the cash register really a robot? James Ritty, who invented it, didn't think so; he sold the patent for cheap. The person who bought the patent built it into NCR, a seminal company laying the groundwork of the modern computer revolution.

Would these bakeries be financially viable if forced to do all this by hand? Probably not. They'd be forced to produce less output at higher cost; many would likely fail. Bread would cost more leaving less money for other purchases. Fewer jobs, less consumer spending power, and hungry bellies to boot; that doesn't sound like good public policy.

Getting back to the grocery store my friend thinks smells like death; just a few weeks ago they started using robots in a new and, to many, not especially welcome way.

As any tourist knows, most stores in France are closed on Sunday afternoons, including and especially grocery stores. That's part of French labor law: grocery stores must close Sunday afternoons. Except that the chain grocery store near me announced they are opening Sunday afternoon. How? Robots, and sleight-of-hand. Grocers may not work on Sunday afternoons but guards are allowed.

Not my store but similar.

Dimanche means Sunday. Aprés-midi means afternoon.

I stopped in to get a feel for how the system works. Instead of grocers, the store uses security guards and self-checkout kiosks.

When you step inside, a guard reminds you there are no grocers. Nobody restocks the shelves but, presumably for half a day, it doesn't matter. On Sunday afternoons, in place of a bored-looking person wearing a store uniform and overseeing the robo-checkout kiosks sits a bored-looking person wearing a security guard uniform doing the same. There are no human-assisted checkout lanes open but this store seldom has more than one operating anyway.

I have no idea how long the French government will allow this loophole to continue. I thought it might attract yellow vest protestors or at least a cranky store worker – maybe a few locals annoyed at an ancient tradition being buried – but there was nobody complaining. There were hardly any customers, either.

The use of robots to sidestep labor law and replace people, in one of the most labor-friendly countries in the world, produced a big yawn.

Paul Krugman and Matt Stoller argue convincingly that it's the bosses, not the robots, that crush the spirits and souls of workers. Krugman calls it "automation obsession" and Stoller points out predictions of robo-Armageddon have existed for decades. The well over 100 examples I have of major automation tech ultimately led to more jobs, not fewer.

Jerry Yang envisions some type of forthcoming automation-induced dystopia. Zuck and the tech-bros argue for a forthcoming Star Trek style robo-utopia.

My guess is we're heading for something in-between, a place where artisanal bakers use locally grown wheat, made affordable thanks to machine milling. Where small family-owned bakeries rely on automation tech to do the undifferentiated grunt-work. The robots in my future are more likely to look more like cash registers and less like Terminators.

It's an admittedly blander vision of the future; neither utopian nor dystopian, at least not one fueled by automation tech. However, it's a vision supported by the historic adoption of automation technology.


The Rev Kev , October 25, 2019 at 10:46 am

I have no real disagreement with a lot of automation. But how it is done is another matter altogether. Using the main example in this article: Australia is probably like a lot of countries in that most of the loaves you get in a supermarket are bland and come in plastic bags, but they are cheap. You only really know what you grow up with.

When I first went to Germany I stepped into a Bakerie and it was a revelation. There were dozens of different sorts and types of bread on display with flavours that I had never experienced. I didn't know whether to order a loaf or to go for my camera instead. And that is the point. Too much automation is really all about narrowing the choices in your life and making it cheaper instead of enabling a richer lifestyle.

We are all familiar with crapification and I contend that it is automation that enables this to become a thing.

WobblyTelomeres , October 25, 2019 at 11:08 am

"I contend that it is automation that enables this to become a thing."

As does electricity. And math. Automation doesn't necessarily narrow choices; economies of scale and the profit motive do. What I find annoying (as in pollyannish) is the avoidance of the issue of those that cannot operate the machinery, those that cannot open their own store, etc.

I gave a guest lecture to a roomful of young roboticists (largely undergrad, some first year grad engineering students) a decade ago. After discussing the economics/finance of creating and selling a burgerbot, asked about those that would be unemployed by the contraption. One student immediately snorted out, "Not my problem!" Another replied, "But what if they cannot do anything else?". Again, "Not my problem!". And that is San Josie in a nutshell.

washparkhorn , October 26, 2019 at 3:25 am

A capitalist market that fails to account for the cost of a product's negative externalities is underpricing (and incentivizing more of the same). It's cheating (or sanctioned cheating due to ignorance and corruption). It is not capitalism (unless that is the only reasonable outcome of capitalism).

Tom Pfotzer , October 25, 2019 at 11:33 am

The author's vision of "appropriate tech" local enterprise supported by relatively simple automation is also my answer to the vexing question of "how do I cope with automation?"

In a recent posting here at NC, I said the way to cope with automation of your job(s) is to get good at automation. My remark caused a howl of outrage: "most people can't do automation! Your solution is unrealistic for the masses. Dismissed with prejudice!".

Thank you for that outrage, as it provides a wonderful foil for this article. The article shows a small business which learned to re-design its business processes and acquire machines that reduce costs. It's a good example of someone who "got good at automation". Instead of being the victim of automation, these people adapted. They bought automation, took control of it, and operated it for their own benefit.

Key point: this entrepreneur is now harvesting the benefits of automation, rather than being systematically marginalized by it. Another noteworthy aspect of this article is that local-scale "appropriate" automation serves to reduce the scale advantages of the big players. The availability of small-scale machines that enable efficiencies comparable to the big guys' is the crux of the problem. Most of the machines made for small-scale operators like this are manufactured in China, India, Iran, Russia, or Italy, where industrial consolidation (scale) hasn't squashed the little players yet.

Suppose you're a grain farmer, but only have 50 acres (not 100s or 1000s like the big guys). You need a combine – that's a big machine that cuts the grain stalks and separates the grain from the stalk (threshing). This cut/thresh function is terribly labor intensive, so the combine is a must-have. Right now, there is no small-size ($50K or less) combine manufactured in the U.S., to my knowledge. They cost upwards of $200K, and sometimes a great deal more. The 50-acre farmer can't afford $200K (plus maintenance costs), and therefore can't farm at that scale, and has to sell out.

So, the design, production, and sales of these sort of small-scale, high-productivity machines is what is needed to re-distribute production (organically, not by revolution, thanks) back into the hands of the middle class.

If we make it possible for the middle class to capture the benefits of automation, we solve 1) the social dilemmas of concentration of wealth and 2) the declining standard of living of the middle and lower classes, and 3) we get a chance to re-design an economy (business processes and collaborating suppliers that deliver the end-user product or service) that actually fixes the planet as we make our living, instead of degrading it at every ka-ching of the cash register.

Point 3 is the most important, and this isn't the time or place to expand on that, but I hope others might consider it a bit.

marcel , October 25, 2019 at 12:07 pm

Regarding the combine, I have seen them operating on small-sized lands for the last 50 years. Without exception, you have one guy (sometimes a farmer, often not) who has this kind of harvester, works 24h a day for a week or something, harvesting for all farmers in the neighborhood, and then moves to the next crop (eg corn). Wintertime is used for maintenance. So that one person/farm/company specializes in these services, and everybody gets along well.

Tom Pfotzer , October 25, 2019 at 2:49 pm

Marcel – great solution to the problem. Choosing the right supplier (using combine service instead of buying a dedicated combine) is a great skill to develop. On the flip side, the fellow that provides that combine service probably makes a decent side-income from it. Choosing the right service to provide is another good skill to develop.

Jesper , October 25, 2019 at 5:59 pm

One counter-argument might be that while hoping for the best it might be prudent to prepare for the worst. Currently, and for a couple of decades, the efficiency gains have been left to the market to allocate. Some might argue that for the common good then the government might need to be more active.

What would happen if efficiency gains continued to be distributed according to the market? According to the relative bargaining power of the market participants where one side, the public good as represented by government, is asking for and therefore getting almost nothing?

As is, I do believe that people who are concerned do have reason to be concerned.

Kent , October 25, 2019 at 11:33 am

"Too much automation is really all about narrowing the choices in your life and making it cheaper instead of enabling a richer lifestyle." Many times the only way to automate the creation of a product is to change it to fit the machine.

Brooklin Bridge , October 25, 2019 at 12:02 pm

Some people make a living saying these sorts of things about automation. The quality of French bread is simply not what it used to be (or at least it is harder to find), though that is a complicated subject having to do with flour and wheat as well as human preparation and many other things; and the cost (in terms of purchasing power), in my opinion, has gone up, not down, since the 70's.

As some might say, "It's complicated," but automation does (not sure about "has to") come with trade offs in quality while price remains closer to what an ever more sophisticated set of algorithms say can be "gotten away with."

This may be totally different for cars or other things, but the author chose French bread and the only overall improvement, or even non change, in quality there has come, if at all, from the dark art of marketing magicians.

Brooklin Bridge , October 25, 2019 at 12:11 pm

/ from the dark art of marketing magicians, AND people's innate ability to accept/be unaware of decreases in quality/quantity if they are implemented over time in small enough steps.

Michael , October 25, 2019 at 1:47 pm

You've gotta' get out of Paris: great French bread remains awesome. I live here. I've lived here for over half a decade and know many elderly French. The bread, from the right bakeries, remains great. But you're unlikely to find it where tourists might wander: the rent is too high.

As a general rule, if the bakers have a large staff or speak English you're probably in the wrong bakery. Except for one of my favorites where she learned her English watching every episode of Friends multiple times and likes to practice with me, though that's more of a fluke.

Brooklin Bridge , October 25, 2019 at 3:11 pm

It's a difficult subject to argue. I suspect that comparatively speaking, French bread remains good and there are still bakers who make high quality bread (given what they have to work with). My experience when talking to family in France (not Paris) is that indeed, they are in general quite happy with the quality of bread and each seems to know a bakery where they can still get that "je ne sais quoi" that makes it so special.

I, on the other hand, who have only been there once every few years since the 70's, kind of like once every so many frames of the movie, see a lowering of quality in general in France and of flour and bread in particular though I'll grant it's quite gradual.

The French love food and were among the best farmers in the world in the 1930s and have made a point of resisting radical change at any given point in time when it comes to the things they love (wine, cheese, bread, etc.) , so they have a long way to fall, and are doing so slowly; but gradually, it's happening.

I agree with others here who distinguish between labor saving automation and labor eliminating automation, but I don't think the former per se is the problem as much as the gradual shift toward the mentality and "rightness" of mass production and globalization.

Oregoncharles , October 26, 2019 at 12:58 am

I was exposed to that conflict, in a small way, because my father was an investment manager. He told me they were considering investing in a smallish Swiss pasta (IIRC) factory. He was frustrated with the negotiations; the owners just weren't interested in getting a lot bigger – which would be the point of the investment, from the investors' POV.

I thought, but I don't think I said very articulately, that of course, they thought of themselves as craftspeople – making people's food, after all. It was a fundamental culture clash. All that was 50 years ago; looks like the European attitude has been receding.

Incidentally, this is a possible approach to a better, more sustainable economy: substitute craft for capital and resources, on as large a scale as possible. More value with less consumption. But how we get there from here is another question.

Carolinian , October 25, 2019 at 12:42 pm

I have been touring around by car and was surprised to see that all Oregon gas stations are full serve with no self serve allowed (I vaguely remember Oregon Charles talking about this). It applies to every station including the ones with a couple of dozen pumps like we see back east. I have since been told that this system has been in place for years.

It's hard to see how this is more efficient and in fact just the opposite as there are fewer attendants than waiting customers and at a couple of stations the action seemed chaotic. Gas is also more expensive although nothing could be more expensive than California gas (over $5/gal occasionally spotted). It's also unclear how this system was preserved–perhaps out of fire safety concerns–but it seems unlikely that any other state will want to imitate just as those bakeries aren't going to bring back their wood fired ovens.

JohnnyGL , October 25, 2019 at 1:40 pm

I think NJ is still required to do all full-serve gas stations. Most in MA have only self-serve, but there's a few towns that have by-laws requiring full-serve.

Brooklin Bridge , October 25, 2019 at 2:16 pm

I'm not sure just how much I should be jumping up and down about our ability to get more gasoline into our cars quicker. But convenient for sure.

The Observer , October 25, 2019 at 4:33 pm

In the 1980s when self-serve gas started being implemented, NIOSH scientists said oh no, now 'everyone' will be increasingly exposed to benzene while filling up. Benzene is close to various radioactive elements in causing damage and cancer.

Oregoncharles , October 26, 2019 at 1:06 am

It was preserved by a series of referenda; turns out it's a 3rd rail here, like the sales tax. The motive was explicitly to preserve entry-level jobs while allowing drivers to keep the gas off their hands. And we like the more personal quality.

Also, we go to states that allow self-serve and observe that the gas isn't any cheaper. It's mainly the tax that sets the price, and location.

There are several bakeries in this area with wood-fired ovens. They charge a premium, of course. One we love is way out in the country, in Falls City. It's a reason to go there.

shinola , October 25, 2019 at 12:47 pm

Unless I misunderstood, the author of this article seems to equate mechanization/automation of nearly any type with robotics.

"Is the cash register really a robot? James Ritty, who invented it, didn't think so;" – Nor do I.

To me, "robot" implies a machine with a high degree of autonomy. Would the author consider an old fashioned manual typewriter or adding machine (remember those?) to be robotic? How about when those machines became electrified?

I think the author uses the term "robot" over broadly.

Dan , October 25, 2019 at 1:05 pm

Agree. Those are just electrified extensions of the lever or sand timer. It's the "thinking" that is A.I.

Refuse to allow A.I.to destroy jobs and cheapen our standard of living. Never interact with a robo call, just hang up. Never log into a website when there is a human alternative. Refuse to do business with companies that have no human alternative. Never join a medical "portal" of any kind, demand to talk to medical personnel. Etc.

Sabotage A.I. whenever possible. The Ten Commandments do not apply to corporations.

https://medium.com/@TerranceT/im-never-going-to-stop-stealing-from-the-self-checkout-22cbfff9919b

Sancho Panza , October 25, 2019 at 1:52 pm

During a Chicago hotel stay my wife ordered an extra bath towel from the front desk. About 5 minutes later, a mini version of R2D2 rolled up to her door with towel in tow. It was really cute and interacted with her in a human-like way. Cute but really scary in the way that you indicate in your comment.

It seems many low wage activities would be in immediate risk of replacement. But sabotage? I would never encourage sabotage; in fact, when it comes to true robots like this one, I would highly discourage any of the following: yanking its recharge cord in the middle of the night, zapping it with a car battery, lift its payload and replace with something else, give it a hip high-five to help it calibrate its balance, and of course, the good old kick'm in the bolts.

Sancho Panza , October 26, 2019 at 9:53 am

Here's a clip of that robot, Leo, bringing bottled water and a bath towel to my wife.
https://www.youtube.com/watch?v=TXygNznHSs0

Barbara , October 26, 2019 at 11:48 am

Stop and Shop supermarket chain now has robots in the store. According to Stop and Shop they are oh so innocent! and friendly! why don't you just go up and say hello?

All the robots do, they say, is go around scanning the shelves looking for shelf price tags that don't match the current price and merchandise in the wrong place (that cereal box you picked up in the breakfast aisle and decided, in the laundry aisle, that you didn't want, and put on a shelf with detergent). All the robots do is notify management of wrong prices and misplaced merchandise.

The damn robot is cute, perky lit up eyes and a smile – so why does it remind me of the Stepford Wives.

S&S is the closest supermarket near me, so I go there when I need something in a hurry, but the bulk of my shopping is now done elsewhere. Thank goodness there are some stores that are not doing this: the area Shoprites and FoodTowns don't – and they are all run by family businesses. Shoprite succeeds by having a large assortment of brands in every grocery category and keeping prices really competitive. FoodTown operates at a higher price and quality level, with real butcher and seafood counters as well as prepackaged assortments in open cases and a cooked food counter of the most excellent quality, with the store's cooks behind the counter to serve you and answer questions. You never have to come home from work tired and hungry, knowing you just don't want to cook, and settle for a power bar.

Carolinian , October 25, 2019 at 1:11 pm

A robot is a machine -- especially one programmable by a computer -- capable of carrying out a complex series of actions automatically. Robots can be guided by an external control device or the control may be embedded

https://en.wikipedia.org/wiki/Robot

Those early cash registers were perhaps an early form of analog computer. But Wiki reminds that the origin of the term is a work of fiction.

The term comes from a Czech word, robota, meaning "forced labor"; the word 'robot' was first used to denote a fictional humanoid in a 1920 play R.U.R. (Rossumovi Univerzální Roboti – Rossum's Universal Robots) by the Czech writer, Karel Čapek

shinola , October 25, 2019 at 4:26 pm

Perhaps I didn't qualify "autonomous" properly. I didn't mean to imply a 'Rosie the Robot' level of autonomy but the ability of a machine to perform its programmed task without human intervention (other than switching on/off or maintenance & adjustments).

If viewed this way, an adding machine or typewriter are not robots because they require constant manual input in order to function – if you don't push the keys, nothing happens. A computer printer might be considered robotic because it can be programmed to function somewhat autonomously (as in print 'x' number of copies of this document).

"Robotics" is a subset of mechanized/automated functions.

Stephen Gardner , October 25, 2019 at 4:48 pm

When I first got out of grad school I worked at United Technologies Research Center where I worked in the robotics lab. In general, at least in those days, we made a distinction between robotics and hard automation. A robot is programmable to do multiple tasks and hard automation is limited to a single task unless retooled. The machines the author is talking about are hard automation. We had ASEA robots that could be programmed to do various things. One of ours drilled, riveted and sealed the skin on the horizontal stabilators (the wing on the tail of a helicopter that controls pitch) of a Sikorsky Sea Hawk.

The same robot with just a change of the fixture on the end could be programmed to paint a car or weld a seam on equipment. The drilling and riveting robot was capable of modifying where the rivets were placed (in the robot's frame of reference) based on the location of precisely milled blocks built into the fixture that held the stabilator.

There was always some variation and it was important to precisely place the rivets because the spars were very narrow (weight at the tail is bad because of the lever arm). It was considered state of the art back in the day but now auto companies have far more sophisticated robotics.

Socal Rhino , October 25, 2019 at 1:44 pm

But what happens when the bread machine is connected to the internet, can't function without an active internet connection, and requires an annual subscription to use?

That is the issue to me: however we define the tools, who will own them?

The Rev Kev , October 25, 2019 at 6:53 pm

You know, that is quite a good point that. It is not so much the automation that is the threat as the rent-seeking that anything connected to the internet allows to be implemented.

*_* , October 25, 2019 at 2:28 pm

Until 100 petaflops costs less than a typical human worker total automation isn't going to happen. Developments in AI software can't overcome basic hardware limits.

breadbaker , October 25, 2019 at 2:29 pm

The story about automation not worsening the quality of bread is not exactly true. Bakers had to develop and incorporate a new method called autolyse ( https://www.kingarthurflour.com/blog/2017/09/29/using-the-autolyse-method ) in the mid-20th century to bring back some of the flavor lost with modern baking. There is also a trend of a new generation of bakeries that use natural yeast, hand shaping and kneading to get better flavors and quality bread.

But it is certainly true that much of the automation gives almost as good quality for much lower labor costs.

Tom Pfotzer , October 25, 2019 at 3:05 pm

On the subject of the machine-robot continuum

When I started doing robotics, I developed a working definition of a robot as: (a.) Senses its environment; (b.) Has goals and goal-seeking logic; (c.) Has means to affect environment in order to get goal and reality (the environment) to converge. Under that definition, Amazon's Alexa and your household air conditioning and heating system both qualify as "robot".

How you implement a, b, and c above can have more or less sophistication, depending upon the complexity, variability, etc. of the environment, or the solutions, or the means used to affect the environment.

A machine, like a typewriter, or a lawn-mower engine has the logic expressed in metal; it's static.

The addition of a computer (with a program, or even downloadable-on-the-fly programs) to a static machine, e.g. today's computer-controlled-manufacturing machines (lathes, milling, welding, plasma cutters, etc.) makes a massive change in utility. It's almost the same physically, but ever so much more flexible, useful, and more profitable to own/operate.

And if you add massive databases, internet connectivity, the latest machine-learning, language and image processing and some nefarious intent, then you get into trouble.

:)

Phacops , October 25, 2019 at 3:08 pm

Sometimes automation is necessary to eliminate the risks of manual processes. There are parenteral (injectable) drugs that cannot be sterilized except by filtration. Most of the work of filling, post filling processing, and sealing is done using automation in areas that make surgical suites seem filthy and people are kept from these operations.

Manual operations are only undertaken to correct issues with the automation, and the procedures are tested to ensure that they do not introduce contamination, microbial or otherwise. Because even one non-sterile unit is a failure and testing is a destructive process, a full lot of product obviously cannot be tested to prove that all units are sterile. Testing of the automated process and of manual interventions is done periodically, and it is expensive and time consuming to test to a level of confidence that there is far less than a one-in-a-million chance of any unit in a lot being non-sterile.

In that respect, automation and the skills necessary to interface with it are fundamental to the safety of drugs frequently used on already compromised patients.

Brooklin Bridge , October 25, 2019 at 3:27 pm

Agree. Good example. Digital technology and miniaturization seem particularly well suited to many aspects of the medical world. But I doubt they will eliminate the doctor or the nurse very soon. Insurance companies on the other hand

lyman alpha blob , October 25, 2019 at 8:34 pm

Bill Burr has some thoughts on self checkouts and the potential bonanza for shoppers – https://www.youtube.com/watch?v=FxINJzqzn4w

TG , October 26, 2019 at 11:51 am

"There would be no improvement in quality mixing and kneading the dough by hand. There would, however, be an enormous increase in cost." WRONG! If you had an unlimited supply of 50-cents-an-hour disposable labor, mixing and kneading the dough by hand would be cheaper. It is only because labor is expensive in France that the machine saves money.

In Japan there is a lot of automation, and wages and living standards are high. In Bangladesh there is very little automation, and wages and living standards are very low.

Are we done with the 'automation is destroying jobs' meme yet? Excessive population growth is the problem, not robots. And the root cause of excessive population growth is the corporate-sponsored virtual taboo of talking about it seriously.

[Feb 18, 2020] Articles on Linux by Ken Hess

Jul 13, 2019 | www.linuxtoday.com

[Feb 18, 2020] Setup Local Yum Repository On CentOS 7

Aug 27, 2014 | www.unixmen.com

This tutorial describes how to set up a local Yum repository on a CentOS 7 system. The same steps should also work on RHEL and Scientific Linux 7 systems.

If you often have to install software, security updates, and fixes on multiple systems in your local network, having a local repository is an efficient approach: all required packages are downloaded over the fast LAN connection from your local server, which saves Internet bandwidth and reduces your Internet costs.

In this tutorial, I use two systems as described below:

Yum Server OS         : CentOS 7 (Minimal Install)
Yum Server IP Address : 192.168.1.101
Client OS             : CentOS 7 (Minimal Install)
Client IP Address     : 192.168.1.102
Prerequisites

First, mount your CentOS 7 installation DVD. For example, let us mount the installation media on /mnt directory.

mount /dev/cdrom /mnt/

Now the CentOS installation DVD is mounted under the /mnt directory. Next, install the vsftpd package so that the packages can be made available over FTP to your local clients.

To do that change to /mnt/Packages directory:

cd /mnt/Packages/

Now install vsftpd package:

rpm -ivh vsftpd-3.0.2-9.el7.x86_64.rpm

Enable and start vsftpd service:

systemctl enable vsftpd
systemctl start vsftpd

We need a package called "createrepo" to create our local repository. So let us install it too.

If you did a minimal CentOS installation, then you might need to install the following dependencies first:

rpm -ivh libxml2-python-2.9.1-5.el7.x86_64.rpm 
rpm -ivh deltarpm-3.6-3.el7.x86_64.rpm 
rpm -ivh python-deltarpm-3.6-3.el7.x86_64.rpm

Now install "createrepo" package:

rpm -ivh createrepo-0.9.9-23.el7.noarch.rpm
Build Local Repository

It's time to build our local repository. Create a storage directory to store all packages from CentOS DVD's.

As I noted above, we are going to use a FTP server to serve all packages to client systems. So let us create a storage location in our FTP server pub directory.

mkdir /var/ftp/pub/localrepo

Now, copy all the files from CentOS DVD(s) i.e from /mnt/Packages/ directory to the "localrepo" directory:

cp -ar /mnt/Packages/*.* /var/ftp/pub/localrepo/

Again, mount the CentOS installation DVD 2 and copy all the files to the /var/ftp/pub/localrepo directory.
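If you only have the second disc as an ISO image rather than physical media, a loop mount works the same way (the ISO file name below is just an example, not from the original tutorial):

umount /mnt
mount -o loop CentOS-7-x86_64-DVD-2.iso /mnt/
cp -ar /mnt/Packages/*.* /var/ftp/pub/localrepo/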

Once you have copied all the files, create a repository file called "localrepo.repo" under the /etc/yum.repos.d/ directory and add the following lines to it. You can name this file as you like:

vi /etc/yum.repos.d/localrepo.repo

Add the following lines:

[localrepo]
name=Unixmen Repository
baseurl=file:///var/ftp/pub/localrepo
gpgcheck=0
enabled=1

Note: Use three slashes(///) in the baseurl.

Now, start building local repository:

createrepo -v /var/ftp/pub/localrepo/

Now the repository building process will start.

Sample Output:

(screenshot of the createrepo output in the original article)

Now, list out the repositories using the following command:

yum repolist

Sample Output:

repo id                                                                    repo name                                                                     status
base/7/x86_64                                                              CentOS-7 - Base                                                               8,465
extras/7/x86_64                                                            CentOS-7 - Extras                                                                30
localrepo                                                                  Unixmen Repository                                                            3,538
updates/7/x86_64                                                           CentOS-7 - Updates                                                              726

Clean the Yum cache and update the repository lists:

yum clean all
yum update

After creating the repository, disable or rename the existing repositories if you only want to install packages from the local repository itself.
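One simple way to do that, assuming the stock CentOS repo files are present, is to rename them so that yum ignores them (a sketch, not part of the original tutorial):

cd /etc/yum.repos.d/
mv CentOS-Base.repo CentOS-Base.repo.disabled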

Alternatively, you can install packages only from the local repository by mentioning the repository as shown below.

yum install --disablerepo="*" --enablerepo="localrepo" httpd

Sample Output:

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package httpd.x86_64 0:2.4.6-17.el7.centos.1 will be installed
--> Processing Dependency: httpd-tools = 2.4.6-17.el7.centos.1 for package: httpd-2.4.6-17.el7.centos.1.x86_64
--> Processing Dependency: /etc/mime.types for package: httpd-2.4.6-17.el7.centos.1.x86_64
--> Processing Dependency: libaprutil-1.so.0()(64bit) for package: httpd-2.4.6-17.el7.centos.1.x86_64
--> Processing Dependency: libapr-1.so.0()(64bit) for package: httpd-2.4.6-17.el7.centos.1.x86_64
--> Running transaction check
---> Package apr.x86_64 0:1.4.8-3.el7 will be installed
---> Package apr-util.x86_64 0:1.5.2-6.el7 will be installed
---> Package httpd-tools.x86_64 0:2.4.6-17.el7.centos.1 will be installed
---> Package mailcap.noarch 0:2.1.41-2.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

===============================================================================================================================================================
 Package                              Arch                            Version                                         Repository                          Size
===============================================================================================================================================================
Installing:
 httpd                                x86_64                          2.4.6-17.el7.centos.1                           localrepo                          2.7 M
Installing for dependencies:
 apr                                  x86_64                          1.4.8-3.el7                                     localrepo                          103 k
 apr-util                             x86_64                          1.5.2-6.el7                                     localrepo                           92 k
 httpd-tools                          x86_64                          2.4.6-17.el7.centos.1                           localrepo                           77 k
 mailcap                              noarch                          2.1.41-2.el7                                    localrepo                           31 k

Transaction Summary
===============================================================================================================================================================
Install  1 Package (+4 Dependent packages)

Total download size: 3.0 M
Installed size: 10 M
Is this ok [y/d/N]:

Disable Firewall And SELinux:

As we are going to use the local repository only in our local area network, there is no need for the firewall and SELinux. So, to reduce complexity, I disabled both firewalld and SELinux.

To disable the Firewalld, enter the following commands:

systemctl stop firewalld
systemctl disable firewalld
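If you would rather keep firewalld running, an alternative (not from the original tutorial) is to permit the FTP service through the firewall instead of disabling it:

firewall-cmd --permanent --add-service=ftp
firewall-cmd --reload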

To disable SELinux, edit file /etc/sysconfig/selinux ,

vi /etc/sysconfig/selinux

Set SELINUX=disabled.

[...]
SELINUX=disabled
[...]

Reboot your server for the changes to take effect.

Client Side Configuration

Now, go to your client systems. Create a new repository file as shown above under /etc/yum.repos.d/ directory.

vi /etc/yum.repos.d/localrepo.repo

and add the following contents:

[localrepo]
name=Unixmen Repository
baseurl=ftp://192.168.1.101/pub/localrepo
gpgcheck=0
enabled=1

Note: Use two slashes (//) in the baseurl; 192.168.1.101 is the Yum server's IP address.

Now, list out the repositories using the following command:

yum repolist

Clean the Yum cache and update the repository lists:

yum clean all
yum update

Disable or rename the existing repositories if you only want to install packages from the server's local repository.

Alternatively, you can install packages from the local repository by mentioning the repository as shown below.

yum install --disablerepo="*" --enablerepo="localrepo" httpd

Sample Output:

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package httpd.x86_64 0:2.4.6-17.el7.centos.1 will be installed
--> Processing Dependency: httpd-tools = 2.4.6-17.el7.centos.1 for package: httpd-2.4.6-17.el7.centos.1.x86_64
--> Processing Dependency: /etc/mime.types for package: httpd-2.4.6-17.el7.centos.1.x86_64
--> Processing Dependency: libaprutil-1.so.0()(64bit) for package: httpd-2.4.6-17.el7.centos.1.x86_64
--> Processing Dependency: libapr-1.so.0()(64bit) for package: httpd-2.4.6-17.el7.centos.1.x86_64
--> Running transaction check
---> Package apr.x86_64 0:1.4.8-3.el7 will be installed
---> Package apr-util.x86_64 0:1.5.2-6.el7 will be installed
---> Package httpd-tools.x86_64 0:2.4.6-17.el7.centos.1 will be installed
---> Package mailcap.noarch 0:2.1.41-2.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package          Arch        Version                      Repository      Size
================================================================================
Installing:
 httpd            x86_64      2.4.6-17.el7.centos.1        localrepo      2.7 M
Installing for dependencies:
 apr              x86_64      1.4.8-3.el7                  localrepo      103 k
 apr-util         x86_64      1.5.2-6.el7                  localrepo       92 k
 httpd-tools      x86_64      2.4.6-17.el7.centos.1        localrepo       77 k
 mailcap          noarch      2.1.41-2.el7                 localrepo       31 k

Transaction Summary
================================================================================
Install  1 Package (+4 Dependent packages)

Total download size: 3.0 M
Installed size: 10 M
Is this ok [y/d/N]: y
Downloading packages:
(1/5): apr-1.4.8-3.el7.x86_64.rpm                          | 103 kB   00:01     
(2/5): apr-util-1.5.2-6.el7.x86_64.rpm                     |  92 kB   00:01     
(3/5): httpd-tools-2.4.6-17.el7.centos.1.x86_64.rpm        |  77 kB   00:00     
(4/5): httpd-2.4.6-17.el7.centos.1.x86_64.rpm              | 2.7 MB   00:00     
(5/5): mailcap-2.1.41-2.el7.noarch.rpm                     |  31 kB   00:01     
--------------------------------------------------------------------------------
Total                                              1.0 MB/s | 3.0 MB  00:02     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : apr-1.4.8-3.el7.x86_64                                       1/5 
  Installing : apr-util-1.5.2-6.el7.x86_64                                  2/5 
  Installing : httpd-tools-2.4.6-17.el7.centos.1.x86_64                     3/5 
  Installing : mailcap-2.1.41-2.el7.noarch                                  4/5 
  Installing : httpd-2.4.6-17.el7.centos.1.x86_64                           5/5 
  Verifying  : mailcap-2.1.41-2.el7.noarch                                  1/5 
  Verifying  : httpd-2.4.6-17.el7.centos.1.x86_64                           2/5 
  Verifying  : apr-util-1.5.2-6.el7.x86_64                                  3/5 
  Verifying  : apr-1.4.8-3.el7.x86_64                                       4/5 
  Verifying  : httpd-tools-2.4.6-17.el7.centos.1.x86_64                     5/5 

Installed:
  httpd.x86_64 0:2.4.6-17.el7.centos.1                                          

Dependency Installed:
  apr.x86_64 0:1.4.8-3.el7                      apr-util.x86_64 0:1.5.2-6.el7   
  httpd-tools.x86_64 0:2.4.6-17.el7.centos.1    mailcap.noarch 0:2.1.41-2.el7   

Complete!

That's it. Now you will be able to install software from your server's local repository.

Cheers!

[Feb 16, 2020] Recover deleted files in Debian with TestDisk

Images deleted; see the original link for details
Feb 16, 2020 | vitux.com

... ... ...

You can verify if the utility is indeed installed on your system and also check its version number by using the following command:

$ testdisk --version

Or,

$ testdisk -v
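If the command is not found, testdisk can first be installed from the standard Debian/Ubuntu repositories:

$ sudo apt-get install testdisk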

<img src="https://vitux.com/wp-content/uploads/2019/10/word-image-64.png" alt="Check TestDisk version" width="734" height="216" srcset="https://vitux.com/wp-content/uploads/2019/10/word-image-64.png 734w, https://vitux.com/wp-content/uploads/2019/10/word-image-64-300x88.png 300w" sizes="(max-width: 734px) 100vw, 734px" />

Step 2: Run TestDisk and create a new testdisk.log file

Use the following command in order to run the testdisk command line utility:

$ sudo testdisk

The output will give you a description of the utility. It will also let you create a testdisk.log file. This file will later include useful information about how and where your lost file was found, listed and resumed.

<img src="https://vitux.com/wp-content/uploads/2019/10/word-image-65.png" alt="Using Testdisk" width="736" height="411" srcset="https://vitux.com/wp-content/uploads/2019/10/word-image-65.png 736w, https://vitux.com/wp-content/uploads/2019/10/word-image-65-300x168.png 300w" sizes="(max-width: 736px) 100vw, 736px" />

The above output gives you three options about what to do with this file:

Create: (recommended)- This option lets you create a new log file.

Append: This option lets you append new information to already listed information in this file from any previous session.

No Log: Choose this option if you do not want to record anything about the session for later use.

Important: TestDisk is a pretty intelligent tool. It knows that many beginners will be using the utility to recover lost files, so it predicts and suggests the option you should ideally select on a particular screen. The suggested options appear in highlighted form. You can select an option with the up and down arrow keys and then press Enter to make your choice.

In the above output, I would opt for creating a new log file. The system might ask you the password for sudo at this point.

Step 3: Select your recovery drive

The utility will now display a list of drives attached to your system. In my case, it is showing my hard drive as it is the only storage device on my system.

<img src="https://vitux.com/wp-content/uploads/2019/10/word-image-66.png" alt="Choose recovery drive" width="729" height="493" srcset="https://vitux.com/wp-content/uploads/2019/10/word-image-66.png 729w, https://vitux.com/wp-content/uploads/2019/10/word-image-66-300x203.png 300w" sizes="(max-width: 729px) 100vw, 729px" />

Select Proceed using the right and left arrow keys and hit Enter. As mentioned in the note in the above screenshot, the correct disk capacity must be detected for a successful file recovery to be performed.

Step 4: Select Partition Table Type of your Selected Drive

Now that you have selected a drive, you need to specify its partition table type on the following screen:

<img src="https://vitux.com/wp-content/uploads/2019/10/word-image-67.png" alt="Choose partition table" width="736" height="433" srcset="https://vitux.com/wp-content/uploads/2019/10/word-image-67.png 736w, https://vitux.com/wp-content/uploads/2019/10/word-image-67-300x176.png 300w" sizes="(max-width: 736px) 100vw, 736px" />

The utility will automatically highlight the correct choice. Press Enter to continue.

If you are sure that the testdisk intelligence is incorrect, you can make the correct choice from the list and then hit Enter.

Step 5: Select the 'Advanced' option for file recovery

When you have specified the correct drive and its partition type, the following screen will appear:

<img src="https://vitux.com/wp-content/uploads/2019/10/word-image-68.png" alt="Advanced file recovery options" width="736" height="446" srcset="https://vitux.com/wp-content/uploads/2019/10/word-image-68.png 736w, https://vitux.com/wp-content/uploads/2019/10/word-image-68-300x182.png 300w" sizes="(max-width: 736px) 100vw, 736px" />

Recovering lost files is only one of the features of testdisk; the utility offers much more than that. Through the options displayed in the above screenshot, you can select any of those features. But here we are interested only in recovering our accidentally deleted file. For this, select the Advanced option and hit Enter.

In this utility if you reach a point you did not intend to, you can go back by using the q key.

Step 6: Select the drive partition where you lost the file

If your selected drive has multiple partitions, the following screen lets you choose the relevant one from them.

<img src="https://vitux.com/wp-content/uploads/2019/10/word-image-69.png" alt="Choose partition from where the file shall be recovered" width="736" height="499" srcset="https://vitux.com/wp-content/uploads/2019/10/word-image-69.png 736w, https://vitux.com/wp-content/uploads/2019/10/word-image-69-300x203.png 300w" sizes="(max-width: 736px) 100vw, 736px" />

I lost my file while using Debian Linux. Make your choice and then choose the List option from the options shown at the bottom of the screen.

This will list all the directories on your partition.

Step 7: Browse to the directory from where you lost the file

When the testdisk utility displays all the directories of your operating system, browse to the directory from where you deleted/lost the file. I remember that I lost the file from the Downloads folder in my home directory. So I will browse to home:

<img src="https://vitux.com/wp-content/uploads/2019/10/word-image-70.png" alt="Select directory" width="733" height="458" srcset="https://vitux.com/wp-content/uploads/2019/10/word-image-70.png 733w, https://vitux.com/wp-content/uploads/2019/10/word-image-70-300x187.png 300w" sizes="(max-width: 733px) 100vw, 733px" />

My username (sana):

<img src="https://vitux.com/wp-content/uploads/2019/10/word-image-71.png" alt="Choose user folder" width="735" height="449" srcset="https://vitux.com/wp-content/uploads/2019/10/word-image-71.png 735w, https://vitux.com/wp-content/uploads/2019/10/word-image-71-300x183.png 300w" sizes="(max-width: 735px) 100vw, 735px" />

And then the Downloads folder:

<img src="https://vitux.com/wp-content/uploads/2019/10/word-image-72.png" alt="Choose downloads" width="738" height="456" srcset="https://vitux.com/wp-content/uploads/2019/10/word-image-72.png 738w, https://vitux.com/wp-content/uploads/2019/10/word-image-72-300x185.png 300w" sizes="(max-width: 738px) 100vw, 738px" />

Tip: You can use the left arrow to go back to the previous directory.

When you have reached your required directory, you will see the deleted files in colored or highlighted form.

And, here I see my lost file "accidently_removed.docx" in the list. Of course, I intentionally named it this as I had to illustrate the whole process to you.

<img src="https://vitux.com/wp-content/uploads/2019/10/word-image-73.png" alt="Highlighted files" width="735" height="498" srcset="https://vitux.com/wp-content/uploads/2019/10/word-image-73.png 735w, https://vitux.com/wp-content/uploads/2019/10/word-image-73-300x203.png 300w" sizes="(max-width: 735px) 100vw, 735px" />

Step 8: Copy the deleted file to be restored

By now, you must have found your lost file in the list. Use the C option to copy the selected file. This file will later be restored to the location you will specify in the next step:

Step 9: Specify the location where the found file will be restored

Now that we have copied the lost file, the testdisk utility will display the following screen so that we can specify where to restore it.

You can specify any accessible location as it is only a simple UI thing to copy and paste the file to your desired location.

I am specifically selecting the location from where I lost the file, my Downloads folder:

<img src="https://vitux.com/wp-content/uploads/2019/10/word-image-74.png" alt="Choose location to restore file" width="732" height="456" srcset="https://vitux.com/wp-content/uploads/2019/10/word-image-74.png 732w, https://vitux.com/wp-content/uploads/2019/10/word-image-74-300x187.png 300w" sizes="(max-width: 732px) 100vw, 732px" />

Step 10: Copy/restore the file to the selected location

After selecting where you want to restore the file, press the C key. This will restore your file to that location:

<img src="https://vitux.com/wp-content/uploads/2019/10/word-image-75.png" alt="Restored file successfully" width="735" height="496" srcset="https://vitux.com/wp-content/uploads/2019/10/word-image-75.png 735w, https://vitux.com/wp-content/uploads/2019/10/word-image-75-300x202.png 300w" sizes="(max-width: 735px) 100vw, 735px" />

See the text in green in the above screenshot? This is actually great news. Now my file is restored on the specified location.

This might seem to be a slightly long process but it is definitely worth getting your lost file back. The restored file will most probably be in a locked state. This means that only an authorized user can access and open it.

We all need this tool time and again, but if you want to remove it until you need it again, you can do so with the following command:

$ sudo apt-get remove testdisk

You can also delete the testdisk.log file if you want. It is such a relief to get your lost file back!

Recover deleted files in Debian with TestDisk – Karim Buzdar, February 11, 2020 (Debian, Linux, Shell)

[Feb 16, 2020] A List Of Useful Console Services For Linux Users by sk

Images deleted; see the original link for details
Feb 13, 2020 | www.ostechnix.com
Cheatsheets for Linux/Unix commands

You probably heard about cheat.sh. I use this service every day! It is one of the most useful services for Linux users: it displays concise Linux command examples.

For instance, to view the curl command cheatsheet , simply run the following command from your console:

$ curl cheat.sh/curl

It is that simple! You don't need to go through man pages or use any online resources to learn about commands. It can get you the cheatsheets of most Linux and Unix commands in a couple of seconds.

ls command cheatsheet:

$ curl cheat.sh/ls

find command cheatsheet:

$ curl cheat.sh/find

It is a highly recommended tool!


Recommended read:


... ... ...

IP Address

We can find the local IP address using the ip command. But what about the public IP address? It is simple!

To find your public IP address, just run the following commands from your Terminal:

$ curl ipinfo.io/ip
157.46.122.176
$ curl eth0.me
157.46.122.176
$ curl checkip.amazonaws.com
157.46.122.176
$ curl icanhazip.com
2409:4072:631a:c033:cc4b:4d25:e76c:9042

There is also a console service to display the ip address in JSON format.

$ curl httpbin.org/ip
{
  "origin": "157.46.122.176"
}
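If you want more than the bare address, querying ipinfo.io without the /ip path returns a fuller JSON record (IP, hostname, city, and so on):

$ curl ipinfo.io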

... ... ...

Dictionary

Want to know the meaning of an English word? Here is how you can get the meaning of a word – gustatory:

$ curl 'dict://dict.org/d:gustatory'
220 pan.alephnull.com dictd 1.12.1/rf on Linux 4.4.0-1-amd64 <auth.mime> <100411284.5191.1581597016@pan.alephnull.com>
250 ok
150 1 definitions retrieved
151 "Gustatory" gcide "The Collaborative International Dictionary of English v.0.48"
Gustatory \Gust"a*to*ry\, a.
Pertaining to, or subservient to, the sense of taste; as, the
gustatory nerve which supplies the front of the tongue.
[1913 Webster]
.
250 ok [d/m/c = 1/0/16; 0.000r 0.000u 0.000s]
221 bye [d/m/c = 0/0/0; 0.000r 0.000u 0.000s]
Text sharing

You can share texts via some console services. These text sharing services are often useful for sharing code.

Here is an example.

$ echo "Welcome To OSTechNix!" | curl -F 'f:1=<-' ix.io
http://ix.io/2bCA

The above command will share the text "Welcome To OSTechNix" via the ix.io site. Anyone can access this text from a web browser by navigating to the URL – http://ix.io/2bCA

Another example:

$ echo "Welcome To OSTechNix!" | curl -F file=@- 0x0.st
http://0x0.st/i-0G.txt
File sharing

Not just text – we can even share files with anyone using a console service called filepush.

$ curl --upload-file ostechnix.txt filepush.co/upload/ostechnix.txt
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    72    0     0  100    72      0     54  0:00:01  0:00:01 --:--:--    54http://filepush.co/8x6h/ostechnix.txt
100   110  100    38  100    72     27     53  0:00:01  0:00:01 --:--:--    81

The above command will upload the ostechnix.txt file to the filepush.co site. You can access this file from anywhere by navigating to the link – http://filepush.co/8x6h/ostechnix.txt

Another text sharing console service is termbin :

$ echo "Welcome To OSTechNix!" | nc termbin.com 9999

There is also another console service named transfer.sh, but it wasn't working at the time of writing this guide.

Browser

There are many text browsers available for Linux. Browsh is one of them, and you can access it right from your Terminal using the command:

$ ssh brow.sh

Browsh is a modern text browser that supports graphics, including video. Technically speaking, it is not so much a browser as a terminal front-end for a browser: it uses headless Firefox to render the web page and then converts it to ASCII art. Refer to the original article for more details.

Create QR codes for given string

Do you want to create QR-codes for a given string? That's easy!

$ curl qrenco.de/ostechnix

Here is the QR code for "ostechnix" string.

URL Shortners

Want to shorten long URLs to make them easier to post or share with your friends? Use the TinyURL console service to shorten them:

$ curl -s http://tinyurl.com/api-create.php?url=https://www.ostechnix.com/pigz-compress-and-decompress-files-in-parallel-in-linux/
http://tinyurl.com/vkc5c5p

[Feb 14, 2020] The trouble with Artificial Intelligence

Feb 14, 2020 | www.moonofalabama.org

Hoarsewhisperer , Feb 12 2020 6:36 utc | 43

Posted by: juliania | Feb 12 2020 5:15 utc | 39
(Artificial Intelligence)

The trouble with Artificial Intelligence is that it's not intelligent.
And it's not intelligent because it's got no experience, no imagination and no self-control.

[Feb 09, 2020] How To Install And Configure Chrony As NTP Client

See also chrony – Comparison of NTP implementations
Another installation manual: Steps to configure Chrony as NTP Server & Client (CentOS-RHEL 8)
Feb 09, 2020 | www.2daygeek.com

It can synchronize the system clock faster and with better time accuracy, and it can be very useful for systems that are not online all the time.

Chronyd is smaller in size, uses less system memory, and wakes up the CPU only when necessary, which is better for power saving.

It can perform well even when the network is congested for longer periods of time.
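The excerpt above skips the installation and client configuration steps. A rough sketch for a CentOS/RHEL-style client (the server address 192.168.1.101 is an assumption; substitute your own NTP server):

yum install chrony
vi /etc/chrony.conf       # add: server 192.168.1.101 iburst
systemctl enable chronyd
systemctl start chronyd
chronyc sources           # verify the new source is reachable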

You can use any of the below commands to check Chrony status.

To check chrony tracking status:

# chronyc tracking

Reference ID    : C0A80105 (CentOS7.2daygeek.com)
Stratum         : 3
Ref time (UTC)  : Thu Mar 28 05:57:27 2019
System time     : 0.000002545 seconds slow of NTP time
Last offset     : +0.001194361 seconds
RMS offset      : 0.001194361 seconds
Frequency       : 1.650 ppm fast
Residual freq   : +184.101 ppm
Skew            : 2.962 ppm
Root delay      : 0.107966967 seconds
Root dispersion : 1.060455322 seconds
Update interval : 2.0 seconds
Leap status     : Normal

Run the sources command to display information about the current time sources.

# chronyc sources

210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* CentOS7.2daygeek.com          2   6    17    62    +36us[+1230us] +/- 1111ms

[Feb 05, 2020] How to disable startup graphic in CentOS

Feb 05, 2020 | forums.centos.org

(The forum post is quoted in full, with replies, in the next entry.)

See also https://www.youtube.com/watch?v=oFl40XzlXp4

[Feb 05, 2020] Disable startup graphic

This is still a problem today... See also centOS 7 hung at "Starting Plymouth switch root service"
Feb 05, 2020 | forums.centos.org
disable startup graphic

Post by neuronetv " 2014/08/20 22:24:51

I can't figure out how to disable the startup graphic in centos 7 64bit. In centos 6 I always did it by removing "rhgb quiet" from /boot/grub/grub.conf but there is no grub.conf in centos 7. I also tried yum remove rhgb but that wasn't present either.
<moan> I've never understood why the devs include this startup graphic, I see loads of users like me who want a text scroll instead.</moan>
Thanks for any help.
Re: disable startup graphic

Post by TrevorH " 2014/08/20 23:09:40

The file to amend now is /boot/grub2/grub.cfg and also /etc/default/grub. If you only amend the defaults file then you need to run grub2-mkconfig -o /boot/grub2/grub.cfg afterwards to get a new file generated, but you can also edit the grub.cfg file directly, though your changes will be wiped out on the next kernel install if you don't also edit the 'default' file.
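Putting that advice together, a minimal sequence on CentOS 7 looks roughly like this (the exact contents of the GRUB_CMDLINE_LINUX line vary from system to system):

vi /etc/default/grub                       # delete "rhgb quiet" from the GRUB_CMDLINE_LINUX line
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot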
Re: disable startup graphic

Post by neuronetv " 2014/08/21 13:12:45

Thanks for that, I did the edits and now the scroll is back.
Re: disable startup graphic

Post by larryg " 2014/08/21 19:27:16

The preferred method to do this is using the command plymouth-set-default-theme.

If you enter this command, without parameters, as user root you'll see something like
>plymouth-set-default-theme
charge
details
text

This lists the themes installed on your computer. The default is 'charge'. If you want to see the boot up details you used to see in version 6, try
>plymouth-set-default-theme details

Followed by the command
>dracut -f

Then reboot.

This process modifies the boot loader so you won't have to update your grub.conf file manually every time there is a new kernel update.

There are numerous themes available that you can download from CentOS or elsewhere. Just google 'plymouth themes' to see other possibilities, if you're looking for graphics-type screens.

Re: disable startup graphic

Post by TrevorH " 2014/08/21 22:47:49

Editing /etc/default/grub to remove rhgb quiet makes it permanent too.
Re: disable startup graphic

Post by MalAdept " 2014/11/02 20:23:37

I tried both TrevorH's and LarryG's methods, and LarryG wins.

Editing /etc/default/grub to remove "rhgb quiet" gave me the scrolling boot messages I want, but it reduced the maximum display resolution (nouveau driver) from 1920x1080 to 1024x768! I put "rhgb quiet" back in and got my 1920x1080 back.

Then I tried "plymouth-set-default-theme details; dracut -f", and got verbose booting without loss of display resolution. Thanks LarryG!

Re: disable startup graphic

Post by dunwell " 2015/12/13 00:17:18

I have used this mod to get back the details for grub boot, thanks to all for that info.

However, when I am watching, it fills the page and then, rather than scrolling up as it did in V5, it blanks and starts again at the top. Of course there is a FAIL message right before it blanks :lol: that I want to see, and I can't slam the Scroll Lock fast enough to catch it. Anyone know how to get the details to scroll up rather than blanking and re-writing?

Alan D.

Re: disable startup graphic

Post by aks " 2015/12/13 09:15:51

Yeah, the Scroll Lock/Ctrl+Q/Ctrl+S will not work with systemd; you can't pause the screen like you used to be able to (it was a design choice, due to parallel daemon launching, apparently).
If you do boot, you can always use journalctl to view the logs.
In Fedora you can use journalctl --list-boots to list boots (not 100% sure about CentOS 7.x - perhaps in 7.1 or 7.2?). You can also use things like journalctl --boot=-1 (the last boot), and parse the log at your leisure.
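For example, to review only the error-level messages from the previous boot (a sketch; listing older boots requires persistent journal storage):

journalctl -b -1 -p err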
Re: disable startup graphic

Post by dunwell " 2015/12/13 14:18:29

aks wrote: (quoted in full above)
Thanks for the follow-up, aks. Actually I have found that the Scroll Lock does pause (Ctrl-S/Q does not), but it all goes by so fast that I'm not fast enough to stop it before the screen blanks and then starts writing again. What I am really wondering is how to get the screen to scroll up when it gets to the bottom rather than blanking and starting to write again at the top. That is annoying! :x

Alan D.

aks
Posts: 2915
Joined: 2014/09/20 11:22:14
Re: disable startup graphic

Post by aks " 2015/12/13 19:14:29

Yes it is, and no you can't. Kudos to Lennart for making our lives so much shittier....

[Feb 05, 2020] How do I deactivate the plymouth boot screen?

Jan 01, 2012 | askubuntu.com




Jo-Erlend Schinstad , 2012-01-25 22:06:57

Lately, booting Ubuntu on my desktop has become seriously slow. We're talking two minutes. It used to take 10-20 seconds. Because of plymouth, I can't see what's going on. I would like to deactivate it, but not really uninstall it. What's the quickest way to do that? I'm using Precise, but I suspect a solution for 11.10 would work just as well.

WinEunuuchs2Unix , 2017-07-21 22:08:06

Did you try: sudo update-initramfs – mgajda Jun 19 '12 at 0:54


Panther ,

Easiest quick fix is to edit the grub line as you boot.

Hold down the shift key so you see the menu. Hit the e key to edit

Edit the 'linux' line, remove the 'quiet' and 'splash'

To disable it in the long run

Edit /etc/default/grub

Change the line – GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" to

GRUB_CMDLINE_LINUX_DEFAULT=""

And then update grub

sudo update-grub

Panther , 2016-10-27 15:43:04

Removing quiet and splash removes the splash, but I still only have a purple screen with no text. What I want to do, is to see the actual boot messages. – Jo-Erlend Schinstad Jan 25 '12 at 22:25

Tuminoid ,

How about pressing CTRL+ALT+F2 for a console, allowing you to see what's going on? You can go back to the GUI/Plymouth with CTRL+ALT+F7.

Don't have my laptop here right now, but IIRC Plymouth has an upstart job in /etc/init, named plymouth???.conf; renaming that probably achieves what you want too, in a more permanent manner.

Jānis Elmeris , 2013-12-03 08:46:54

No, there's nothing on the other consoles. – Jo-Erlend Schinstad Jan 25 '12 at 22:22

[Feb 01, 2020] Basic network troubleshooting in Linux with nmap Enable Sysadmin

Feb 01, 2020 | www.redhat.com

Determine this host's OS with the -O switch:

$ sudo nmap -O <Your-IP>

The results look like this:

....

[ You might also like: Six practical use cases for Nmap ]

Then, run the following to check the common 2000 ports, which handle the common TCP and UDP services. Here, -Pn is used to skip the ping scan after assuming that the host is up:

$ sudo nmap -sS -sU -Pn <Your-IP>

The results look like this:

...

Note: The -Pn option is also useful for checking whether the host firewall is blocking ICMP requests.

Also, as an extension to the above command, if you need to scan all ports instead of only the top 2000, you can use the following to scan ports 1-65535:

$ sudo nmap -sS -sU -Pn -p 1-65535 <Your-IP>

The results look like this:

...

You can also scan only for TCP ports (default 1000) by using the following:

$ sudo nmap -sT <Your-IP>

The results look like this:

...

Now, after all of these checks, you can also perform an aggressive "all-in-one" scan with the -A option, which tells Nmap to perform OS and version detection, using -T4 as a timing template that tells Nmap how fast to perform the scan (see the Nmap man page for more information on timing templates):

$ sudo nmap -A -T4 <Your-IP>

The results look like this, and are shown here in two parts:

...

There you go. These are the most common and useful Nmap commands. Together, they provide sufficient network, OS, and open port information, which is helpful in troubleshooting. Feel free to comment with your preferred Nmap commands as well.

[ Readers also liked: My 5 favorite Linux sysadmin tools ]

Related Stories:

[Jan 25, 2020] timeout is a command-line utility that runs a specified command and terminates it if it is still running after a given period of time

You can achieve a similar effect with the at command, which allows more flexible time patterns.
Jan 23, 2020 | linuxize.com

timeout is a command-line utility that runs a specified command and terminates it if it is still running after a given period of time. In other words, timeout allows you to run a command with a time limit. The timeout command is a part of the GNU core utilities package which is installed on almost any Linux distribution.

It is handy when you want to run a command that doesn't have a built-in timeout option.

In this article, we will explain how to use the Linux timeout command.

How to Use the timeout Command #

The syntax for the timeout command is as follows:

timeout [OPTIONS] DURATION COMMAND [ARG]

The DURATION can be a positive integer or a floating-point number, followed by an optional unit suffix: s for seconds, m for minutes, h for hours, or d for days.

When no unit is used, it defaults to seconds. If the duration is set to zero, the associated timeout is disabled.

The command options must be provided before the arguments.

Here are a few basic examples demonstrating how to use the timeout command:
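For instance, the following limits a ping to five seconds; any long-running command works the same way:

timeout 5 ping 8.8.8.8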

If you want to run a command that requires elevated privileges such as tcpdump , prepend sudo before timeout :

sudo timeout 300 tcpdump -n -w data.pcap
Sending Specific Signal #

If no signal is given, timeout sends the SIGTERM signal to the managed command when the time limit is reached. You can specify which signal to send using the -s ( --signal ) option.

For example, to send SIGKILL to the ping command after one minute you would use:

sudo timeout -s SIGKILL ping 8.8.8.8

The signal can be specified by its name like SIGKILL or its number like 9 . The following command is identical to the previous one:

sudo timeout -s 9 ping 8.8.8.8

To get a list of all available signals, use the kill -l command:

kill -l
Killing Stuck Processes #

SIGTERM, the default signal that is sent when the time limit is exceeded, can be caught or ignored by some processes. In such situations, the process continues to run after the termination signal is sent.

To make sure the monitored command is killed, use the -k (--kill-after) option followed by a time period. When this option is used, after the given time limit is reached the timeout command sends a SIGKILL signal to the managed program, which cannot be caught or ignored.

In the following example, timeout runs the command for one minute, and if it is not terminated, it will kill it after ten seconds:

sudo timeout -k 10 1m ping 8.8.8.8

timeout -k 10 30 ./test.sh

Here the (hypothetical) script ./test.sh is given 30 seconds to run; if it ignores the SIGTERM sent at that point, it is killed 10 seconds after the time limit is reached.

Preserving the Exit Status #

timeout returns 124 when the time limit is reached. Otherwise, it returns the exit status of the managed command.

To return the exit status of the command even when the time limit is reached, use the --preserve-status option:

timeout --preserve-status 5 ping 8.8.8.8
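A quick way to see the difference is to check the exit code afterwards (a sketch; the first command should report 124, the second typically 143, i.e. 128 plus SIGTERM):

timeout 1 sleep 5; echo $?
timeout --preserve-status 1 sleep 5; echo $?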
Running in Foreground #

By default, timeout runs the managed command in the background. If you want to run the command in the foreground, use the --foreground option:

timeout --foreground 5m ./script.sh

This option is useful when you want to run an interactive command that requires user input.

Conclusion #

The timeout command is used to run a given command with a time limit.

timeout is a simple command that doesn't have a lot of options. Typically you will invoke timeout only with two arguments, the duration, and the managed command.

If you have any questions or feedback, feel free to leave a comment.


[Jan 16, 2020] Watch Command in Linux

Jan 16, 2020 | linuxhandbook.com

Last Updated on January 10, 2020 by Abhishek

Watch is a great utility that automatically refreshes data. Some of the more common uses for this command involve monitoring system processes or logs, but it can be used in combination with pipes for more versatility.
watch [options] [command]
Watch command examples

Using watch command without any options will use the default parameter of 2.0 second refresh intervals.

As I mentioned before, one of the more common uses is monitoring system processes. Let's use it with the free command. This will give you up-to-date information about your system's memory usage.

watch free

Yes, it is that simple my friends.

Every 2.0s: free                                pop-os: Wed Dec 25 13:47:59 2019

              total        used        free      shared  buff/cache   available
Mem:       32596848     3846372    25571572      676612     3178904    27702636
Swap:             0           0           0
Adjust refresh rate of watch command

You can easily change how quickly the output is updated using the -n flag.

watch -n 10 free
Every 10.0s: free                               pop-os: Wed Dec 25 13:58:32 2019

              total        used        free      shared  buff/cache   available
Mem:       32596848     4522508    24864196      715600     3210144    26988920
Swap:             0           0           0

This changes from the default 2.0 second refresh to 10.0 seconds as you can see in the top left corner of our output.

Remove title or header info from watch command output
watch -t free

The -t flag removes the title/header information to clean up the output. The information will still refresh every 2 seconds, but you can change that by combining it with the -n option.

              total        used        free      shared  buff/cache   available
Mem:       32596848     3683324    25089268     1251908     3824256    27286132
Swap:             0           0           0
Highlight the changes in watch command output

You can add the -d option and watch will automatically highlight changes for us. Let's take a look at this using the date command. I've included a screen capture to show how the highlighting behaves.

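A command along these lines produces the highlighted output described above (the one-second interval is optional and only an example):

watch -n 1 -d date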
Using pipes with watch

You can combine items using pipes. This is not a feature exclusive to watch, but it enhances the functionality of this software. Pipes rely on the | symbol. Not coincidentally, this is called a pipe symbol or sometimes a vertical bar symbol.

watch "cat /var/log/syslog | tail -n 3"

While this command runs, it will list the last 3 lines of the syslog file. The list will be refreshed every 2 seconds and any changes will be displayed.

Every 2.0s: cat /var/log/syslog | tail -n 3                                                      pop-os: Wed Dec 25 15:18:06 2019

Dec 25 15:17:24 pop-os dbus-daemon[1705]: [session uid=1000 pid=1705] Successfully activated service 'org.freedesktop.Tracker1.Min
er.Extract'
Dec 25 15:17:24 pop-os systemd[1591]: Started Tracker metadata extractor.
Dec 25 15:17:45 pop-os systemd[1591]: tracker-extract.service: Succeeded.

Conclusion

Watch is a simple, but very useful utility. I hope I've given you ideas that will help you improve your workflow.

This is a straightforward command, but there are a wide range of potential uses. If you have any interesting uses that you would like to share, let us know about them in the comments.

[Jan 16, 2020] Linux tools How to use the ss command by Ken Hess (Red Hat)

ss is the Swiss Army Knife of system statistics commands. It's time to say buh-bye to netstat and hello to ss.
Jan 13, 2020 | www.redhat.com

If you're like me, you still cling to soon-to-be-deprecated commands like ifconfig , nslookup , and netstat . The new replacements are ip , dig , and ss , respectively. It's time to (reluctantly) let go of legacy utilities and head into the future with ss . The ip command is worth a mention here because part of netstat 's functionality has been replaced by ip . This article covers the essentials for the ss command so that you don't have to dig (no pun intended) for them.

More Linux resources

Formally, ss is the socket statistics command that replaces netstat . In this article, I provide netstat commands and their ss replacements. Michael Prokop, the developer of ss , made it easy for us to transition into ss from netstat by making some of netstat 's options operate in much the same fashion in ss .

For example, to display TCP sockets, use the -t option:

$ netstat -t
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 rhel8:ssh               khess-mac:62036         ESTABLISHED

$ ss -t
State         Recv-Q          Send-Q                    Local Address:Port                   Peer Address:Port          
ESTAB         0               0                          192.168.1.65:ssh                    192.168.1.94:62036

You can see that the information given is essentially the same, but to better mimic what you see in the netstat command, use the -r (resolve) option:

$ ss -tr
State            Recv-Q             Send-Q                          Local Address:Port                         Peer Address:Port             
ESTAB            0                  0                                       rhel8:ssh                             khess-mac:62036

And to see port numbers rather than their translations, use the -n option:

$ ss -ntr
State            Recv-Q             Send-Q                          Local Address:Port                         Peer Address:Port             
ESTAB            0                  0                                       rhel8:22                              khess-mac:62036

It isn't 100% necessary that netstat and ss mesh, but it does make the transition a little easier. So, try your standby netstat options before hitting the man page or the internet for answers, and you might be pleasantly surprised at the results.

For example, the netstat command with the old standby options -an yield comparable results (which are too long to show here in full):

$ netstat -an |grep LISTEN

tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN     
tcp6       0      0 :::22                   :::*                    LISTEN     
unix  2      [ ACC ]     STREAM     LISTENING     28165    /run/user/0/systemd/private
unix  2      [ ACC ]     STREAM     LISTENING     20942    /var/lib/sss/pipes/private/sbus-dp_implicit_files.642
unix  2      [ ACC ]     STREAM     LISTENING     28174    /run/user/0/bus
unix  2      [ ACC ]     STREAM     LISTENING     20241    /var/run/lsm/ipc/simc
<truncated>

$ ss -an |grep LISTEN

u_str             LISTEN              0                    128                                             /run/user/0/systemd/private 28165                  * 0                   
                                                            
u_str             LISTEN              0                    128                   /var/lib/sss/pipes/private/sbus-dp_implicit_files.642 20942                  * 0                   
                                                            
u_str             LISTEN              0                    128                                                         /run/user/0/bus 28174                  * 0                   
                                                            
u_str             LISTEN              0                    5                                                     /var/run/lsm/ipc/simc 20241                  * 0                   
<truncated>

The TCP entries fall at the end of the ss command's display and at the beginning of netstat 's. So, there are layout differences even though the displayed information is really the same.

If you're wondering which netstat commands have been replaced by the ip command, here's one for you:

$ netstat -g
IPv6/IPv4 Group Memberships
Interface       RefCnt Group
--------------- ------ ---------------------
lo              1      all-systems.mcast.net
enp0s3          1      all-systems.mcast.net
lo              1      ff02::1
lo              1      ff01::1
enp0s3          1      ff02::1:ffa6:ab3e
enp0s3          1      ff02::1:ff8d:912c
enp0s3          1      ff02::1
enp0s3          1      ff01::1

$ ip maddr
1:	lo
	inet  224.0.0.1
	inet6 ff02::1
	inet6 ff01::1
2:	enp0s3
	link  01:00:5e:00:00:01
	link  33:33:00:00:00:01
	link  33:33:ff:8d:91:2c
	link  33:33:ff:a6:ab:3e
	inet  224.0.0.1
	inet6 ff02::1:ffa6:ab3e
	inet6 ff02::1:ff8d:912c
	inet6 ff02::1
	inet6 ff01::1

The ss command isn't perfect (sorry, Michael). In fact, there is one significant ss bummer. You can try this one for yourself to compare the two:

$ netstat -s 

Ip:
    Forwarding: 2
    6231 total packets received
    2 with invalid addresses
    0 forwarded
    0 incoming packets discarded
    3104 incoming packets delivered
    2011 requests sent out
    243 dropped because of missing route
<truncated>

$ ss -s

Total: 182
TCP:   3 (estab 1, closed 0, orphaned 0, timewait 0)

Transport Total     IP        IPv6
RAW	  1         0         1        
UDP	  3         2         1        
TCP	  3         2         1        
INET	  7         4         3        
FRAG	  0         0         0

If you figure out how to display the same info with ss , please let me know.
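One possible answer, not from the original article: the iproute2 package also ships nstat, which reads the same kernel SNMP counters that netstat -s reports (-a prints absolute counter values rather than deltas since the last run):

$ nstat -a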

Maybe as ss evolves, it will include more features. I guess Michael or someone else could always just look at the netstat command to glean those statistics from it. For me, I prefer netstat , and I'm not sure exactly why it's being deprecated in favor of ss . The output from ss is less human-readable in almost every instance.

What do you think? What about ss makes it a better option than netstat ? I suppose I could ask the same question of the other net-tools utilities as well. I don't find anything wrong with them. In my mind, unless you're significantly improving an existing utility, why bother deprecating the other?

There, you have the ss command in a nutshell. As netstat fades into oblivion, I'm sure I'll eventually embrace ss as its successor.

Want more on networking topics? Check out the Linux networking cheat sheet .

Ken Hess is an Enable SysAdmin Community Manager and an Enable SysAdmin contributor. Ken has used Red Hat Linux since 1996 and has written ebooks, whitepapers, actual books, thousands of exam review questions, and hundreds of articles on open source and other topics. More about me

[Jan 16, 2020] Thirteen Useful Tools for Working with Text on the Command Line - Make Tech Easier

Jan 16, 2020 | www.maketecheasier.com

Thirteen Useful Tools for Working with Text on the Command Line – by Karl Wakim, January 9, 2020

GNU/Linux distributions include a wealth of programs for handling text, most of which are provided by the GNU core utilities. There's somewhat of a learning curve, but these utilities can prove very useful and efficient when used correctly.

Here are thirteen powerful text manipulation tools every command-line user should know.

1. cat

Cat was designed to con cat enate files but is most often used to display a single file. Without any arguments, cat reads standard input until Ctrl + D is pressed (from the terminal or from another program output if using a pipe). Standard input can also be explicitly specified with a - .

Cat has a number of useful options, notably -n, which numbers all output lines (as used in the example below).

In the following example, we are concatenating and numbering the contents of file1, standard input, and file3.

cat -n file1 - file3
2. sort

As its name suggests, sort sorts file contents alphabetically and numerically.

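For example (file names here are hypothetical), plain sort orders lines alphabetically, while -n sorts numerically:

sort names.txt
sort -n numbers.txt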
3. uniq

Uniq takes a sorted file and removes duplicate lines. It is often chained with sort in a single command.

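A typical invocation (hypothetical file name); adding -c prefixes each line with its number of occurrences:

sort names.txt | uniq
sort names.txt | uniq -c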
4. comm

Comm is used to compare two sorted files, line by line. It outputs three columns: the first two columns contain lines unique to the first and second file respectively, and the third displays those found in both files.

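A minimal sketch with two hypothetical sorted files; the -12 variant suppresses the first two columns, leaving only the lines common to both:

comm list1.txt list2.txt
comm -12 list1.txt list2.txt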
5. cut

Cut is used to retrieve specific sections of lines, based on characters, fields, or bytes. It can read from a file or from standard input if no file is specified.

Cutting by character position

The -c option specifies a single character position or one or more ranges of characters.

For example:

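A minimal illustration (data.txt is a hypothetical file): print characters 1 through 5 of every line:

cut -c 1-5 data.txt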

Cutting by field

Fields are separated by a delimiter consisting of a single character, which is specified with the -d option. The -f option selects a field position or one or more ranges of fields using the same format as above.

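For example, /etc/passwd uses a colon as the delimiter, so the following prints the login name (field 1) and login shell (field 7) of every account:

cut -d ':' -f 1,7 /etc/passwd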
6. dos2unix

GNU/Linux and Unix usually terminate text lines with a line feed (LF), while Windows uses carriage return and line feed (CRLF). Compatibility issues can arise when handling CRLF text on Linux, which is where dos2unix comes in. It converts CRLF terminators to LF.

In the following example, the file command is used to check the text format before and after using dos2unix .

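A sketch of that sequence with a hypothetical file name (the file output shown in the comments is typical, not guaranteed):

file notes.txt        # notes.txt: ASCII text, with CRLF line terminators
dos2unix notes.txt
file notes.txt        # notes.txt: ASCII text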
7. fold

To make long lines of text easier to read and handle, you can use fold , which wraps lines to a specified width.

Fold strictly matches the specified width by default, breaking words where necessary.

fold -w 30 longline.txt

If breaking words is undesirable, you can use the -s option to break at spaces.

fold -w 30 -s longline.txt
8. iconv

This tool converts text from one encoding to another, which is very useful when dealing with unusual encodings.

iconv -f input_encoding -t output_encoding -o output_file input_file

Note: you can list the available encodings with iconv -l
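A concrete instance (file names are hypothetical): convert a Latin-1 encoded file to UTF-8:

iconv -f ISO-8859-1 -t UTF-8 -o output.txt input.txt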

9. sed

sed is a powerful and flexible stream editor, most commonly used to find and replace strings with the following syntax.

The following command will read from the specified file (or standard input), replacing the parts of text that match the regular expression pattern with the replacement string and outputting the result to the terminal.

sed 's/pattern/replacement/g' filename

To modify the original file instead, you can use the -i flag.

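For example, to replace every occurrence of 'foo' with 'bar' in a hypothetical file, first previewing the result and then editing the file in place:

sed 's/foo/bar/g' notes.txt
sed -i 's/foo/bar/g' notes.txt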
10. wc

The wc utility prints the number of bytes, characters, words, or lines in a file.

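For instance, run against a hypothetical file, the first command prints lines, words, and bytes, and -l restricts the output to the line count:

wc notes.txt
wc -l notes.txt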
11. split

You can use split to divide a file into smaller files, by number of lines, by size, or to a specific number of files.

Splitting by number of lines

split -l num_lines input_file output_prefix

Splitting by bytes

split -b bytes input_file output_prefix

Splitting to a specific number of files

split -n num_files input_file output_prefix
12. tac

Tac, which is cat in reverse, does exactly that: it displays files with the lines in reverse order.

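For example, to read a (hypothetical) log file with the newest lines first:

tac /var/log/messages | less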
13. tr

The tr tool is used to translate or delete sets of characters.

A set of characters is usually either a literal string or a range of characters, for instance a-z for the lowercase letters.

Refer to the tr manual page for more details.

To translate one set to another, use the following syntax:

tr SET1 SET2

For instance, to replace lowercase characters with their uppercase equivalent, you can use the following:

tr "a-z" "A-Z"

To delete a set of characters, use the -d flag.

tr -d SET

To delete the complement of a set of characters (i.e. everything except the set), use -dc .

tr -dc SET
Conclusion

There is plenty to learn when it comes to the Linux command line. Hopefully, the commands above can help you deal with text on the command line more effectively.

[Jan 10, 2020] America's Hamster Wheel of 'Career Advancement' by Casey Chalk

Notable quotes:
"... Getting Work Right: Labor and Leisure in a Fragmented World ..."
"... The problem is further compounded by the fact that much of the labor Americans perform isn't actually good ..."
Jan 09, 2020 | www.theamericanconservative.com

We're told that getting ahead at work and reorienting our lives around our jobs will make us happy. So why hasn't it? Many of those who work in the corporate world are constantly peppered with questions about their " career progression ." The Internet is saturated with articles providing tips and tricks on how to develop a never-fail game plan for professional development. Millions of Americans are engaged in a never-ending cycle of résumé-padding that mimics the accumulation of Boy Scout merit badges or A's on report cards except we never seem to get our Eagle Scout certificates or academic diplomas. We're told to just keep going until we run out of gas or reach retirement, at which point we fade into the peripheral oblivion of retirement communities, morning tee-times, and long midweek lunches at beach restaurants.

The idealistic Chris McCandless in Jon Krakauer's bestselling book Into the Wild defiantly declares, "I think careers are a 20th century invention and I don't want one." Anyone who has spent enough time in the career hamster wheel can relate to this sentiment. Is 21st-century careerism -- with its promotion cycles, yearly feedback, and little wooden plaques commemorating our accomplishments -- really the summit of human existence, the paramount paradigm of human flourishing?

Michael J. Naughton, director of the Center for Catholic Studies at the University of St. Thomas, Minnesota, and board chair for Reel Precision Manufacturing, doesn't think so. In his Getting Work Right: Labor and Leisure in a Fragmented World , Naughton provides a sobering statistic: approximately two thirds of employees in the United States are "either indifferent or hostile to their work." That's not just an indicator of professional dissatisfaction; it's economically disastrous. The same survey estimates that employee disengagement is costing the U.S. economy "somewhere between 450-550 billion dollars annually."

The origin of this problem, says Naughton, is an error in how Americans conceive of work and leisure. We seem to err in one of two ways. One is to label our work as strictly a job, a nine-to-five that pays the bills. In this paradigm, leisure is an amusement, an escape from the drudgery of boring, purposeless labor. The other way is that we label our work as a career that provides the essential fulfillment in our lives. Through this lens, leisure is a utility, simply another means to serve our work. Outside of work, we exercise to maintain our health in order to work harder and longer. We read books that help maximize our utility at work and get ahead of our competitors. We "continue our education" largely to further our careers.

Whichever error we fall into, we inevitably end up dissatisfied. The more we view work as a painful, boring chore, the less effective we are at it, and the more complacent and discouraged. Our leisure activities, in turn, no matter how distracting, only compound our sadness, because no amount of games can ever satisfy our souls. Or, if we see our meaning in our work and leisure as only another means of increasing productivity, we inevitably burn out, wondering, perhaps too late in life, what exactly we were working for . As Augustine of Hippo noted, our hearts are restless for God. More recently, C.S. Lewis noted that we yearn to be fulfilled by something that nothing in this world can satisfy. We need both our work and our leisure to be oriented to the transcendent in order to give our lives meaning and purpose.

The problem is further compounded by the fact that much of the labor Americans perform isn't actually good . There are "bad goods" that are detrimental to society and human flourishing. Naughton suggests some examples: violent video games, pornography, adultery dating sites, cigarettes, high-octane alcohol, abortifacients, gambling, usury, certain types of weapons, cheat sheet websites, "gentlemen's clubs," and so on. Though not as clear-cut as the above, one might also add working for the kinds of businesses that contribute to the impoverishment or destruction of our communities, as Tucker Carlson has recently argued .

Why does this matter for professional satisfaction? Because if our work doesn't offer goods and services that contribute to our communities and the common good -- and especially if we are unable to perceive how our labor plays into that common good -- then it will fundamentally undermine our happiness. We will perceive our work primarily in a utilitarian sense, shrugging our shoulders and saying, "it's just a paycheck," ignoring or disregarding the fact that as rational animals we need to feel like our efforts matter.

Economic liberalism -- at least in its purest free-market expression -- is based on a paradigm with nominalist and utilitarian origins that promote "freedom of indifference." In rudimentary terms, this means that we need not be interested in the moral quality of our economic output. If we produce goods that satisfy people's wants, increasing their "utils," as my Econ 101 professor used to say, then we are achieving business success. In this paradigm, we desire an economy that maximizes access to free choice regardless of the content of that choice, because the more choices we have, the more we can maximize our utils, or sensory satisfaction.

The freedom of indifference paradigm is in contrast to a more ancient understanding of economic and civic engagement: a freedom for excellence. In this worldview, "we are made for something," and participation in public acts of virtue is essential both to our own well-being and that of our society. By creating goods and services that objectively benefit others and contributing to an order beyond the maximization of profit, we bless both ourselves and the polis . Alternatively, goods that increase "utils" but undermine the common good are rejected.

Returning to Naughton's distinction between work and leisure, we need to perceive the latter not as an escape from work or a means of enhancing our work, but as a true time of rest. This means uniting ourselves with the transcendent reality from which we originate and to which we will return, through prayer, meditation, and worship. By practicing this kind of true leisure, well treated in a book by Josef Pieper , we find ourselves refreshed, and discover renewed motivation and inspiration to contribute to the common good.

Americans are increasingly aware of the problems with Wall Street conservatism and globalist economics. We perceive that our post-Cold War policies are hurting our nation. Naughton's treatise on work and leisure offers the beginnings of a game plan for what might replace them.

Casey Chalk covers religion and other issues for The American Conservative and is a senior writer for Crisis Magazine. He has degrees in history and teaching from the University of Virginia, and a masters in theology from Christendom College.

[Jan 01, 2020] AI is just a tool, unless it is developed to the point of attaining sentience in which case it becomes slavery, but let's ignore that possibility for now. Capitalists cannot make profits from the tools they own all by the tools themselves. Profits come from unpaid labor. You cannot underpay a tool, and the tool cannot labor by itself.

Jan 01, 2020 | www.moonofalabama.org

Paul Damascene , Dec 29 2019 1:28 utc | 45

vk @38: "...the reality on the field is that capitalism is 0 for 5..."

True, but it is worse than that! Even when we get AI to the level you describe, capitalism will continue its decline.

Henry Ford actually understood Marxist analysis. Despite what many people in the present imagine, Ford had access to sufficient engineering talent to make his automobile manufacturing processes much more automated than he did. Ford understood that improving the efficiency of the manufacturing process was less important than creating a population with sufficient income to purchase his products.

AI is just a tool, unless it is developed to the point of attaining sentience in which case it becomes slavery, but let's ignore that possibility for now. Capitalists cannot make profits from the tools they own all by the tools themselves. Profits come from unpaid labor. You cannot underpay a tool, and the tool cannot labor by itself.

The AI can be a product that is sold, but compared with cars, for example, the quantity of labor invested in AI is minuscule. The smaller the proportion of labor that is in the cost of a product, the smaller the percent of the price that can be realized as profit. To re-boost real capitalist profits you need labor-intensive products. This also ties in with Henry Ford's understanding of economics in that a larger labor force also means a larger market for the capitalist's products.

There are some very obvious products that I can think of involving AI that are also massively labor-intensive that would match the scale of the automotive industry and rejuvenate capitalism, but they would require many $millions in R&D to make them market-ready. Since I want capitalism to die already and get out

Re: AI --
Always wondered how pseudo-AI, or enhanced automation, might be constrained by diminishing EROEI.

Unless an actual AI were able to crack the water molecule to release hydrogen in an energy-efficient way, or unless we learn to love nuclear (by cracking the nuclear waste issue), then it seems to me hyper-automated workplaces will be at least as subject to plummeting EROEI as are current workplaces, if not moreso. Is there any reason to think that, including embedded energy in their manufacture, these machines and their workplaces will be less energy intensive than current ones?



[May 24, 2019] The USA isn't annoyed at Huawei spying, they are annoyed that Huawei isn't spying for them

May 24, 2019 | theregister.co.uk

Pick your poison

The USA isn't annoyed at Huawei spying, they are annoyed that Huawei isn't spying for them . If you don't use Huawei who would you use instead? Cisco? Yes, just open up and let the NSA ream your ports. Oooo, filthy.

If you don't know the chip design, can't verify the construction, don't know the code and can't verify the deployment to the hardware; you are already owned.

The only question is by which state actor: China, USA, Israel, UK.....? Anonymous Coward

[May 24, 2019] Huawei equipment can't be trusted? As distinct from Cisco which we already have backdoored :]

May 24, 2019 | theregister.co.uk

" The Trump administration, backed by US cyber defense experts, believes that Huawei equipment can't be trusted " .. as distinct from Cisco which we already have backdoored :]

Sir Runcible Spoon
Re: Huawei equipment can't be trusted?

Didn't someone once say "I don't trust anyone who can't be bribed"?

Not sure why that popped into my head.

[May 24, 2019] Deal with longstanding issues like government favoritism toward local companies

May 24, 2019 | theregister.co.uk

How is it that this can be a point of contention? Name me one country in this world that doesn't favor local companies.

The company representatives who are complaining about local favoritism would be howling like wolves if Huawei was given favor in the US over any one of them.

I'm not saying that there are no reasons to be unhappy about business with China, but that is not one of them.


A.P. Veening , 1 day

Re: "deal with longstanding issues like government favoritism toward local companies"

Name me one country in this world that doesn't favor local companies.

I'll give you two: Liechtenstein and Vatican City, though admittedly neither has a lot of local companies.

STOP_FORTH , 1 day
Re: "deal with longstanding issues like government favoritism toward local companies"

Doesn't Liechtenstein make most of the dentures in the EU? Try taking a bite out of that market.

Kabukiwookie , 1 day
Re: "deal with longstanding issues like government favoritism toward local companies"

How can you leave Andorra out of that list?

A.P. Veening , 14 hrs
Re: "deal with longstanding issues like government favoritism toward local companies"

While you are at it, how can you leave Monaco and San Marino out of that list?

[Jan 29, 2019] hstr -- Bash and zsh shell history suggest box - easily view, navigate, search and manage your command history

This is a quite useful command. An RPM exists for CentOS 7; on other versions you need to build it from source.
Nov 17, 2018 | dvorka.github.io

hstr -- Bash and zsh shell history suggest box - easily view, navigate, search and manage your command history.

View on GitHub

Configuration

Get the most out of HSTR by configuring it with:

hstr --show-configuration >> ~/.bashrc

Run hstr --show-configuration to determine what will be appended to your Bash profile. Don't forget to source ~/.bashrc to apply changes.


For more details on configuration options, please refer to the project documentation. Check also the configuration examples.

Binding HSTR to Keyboard Shortcut

Bash uses Emacs style keyboard shortcuts by default. There is also Vi mode. Find out how to bind HSTR to a keyboard shortcut based on the style you prefer below.

Check your active Bash keymap with:

bind -v | grep editing-mode
bind -v | grep keymap

To determine the character sequence emitted by a pressed key in the terminal, type Ctrl-v and then press the key. Check your current bindings using:

bind -S
Bash Emacs Keymap (default)

Bind HSTR to a Bash key, e.g. to Ctrl-r:

bind '"\C-r": "\C-ahstr -- \C-j"'

or Ctrl-Alt-r:

bind '"\e\C-r":"\C-ahstr -- \C-j"'

or Ctrl-F12:

bind '"\e[24;5~":"\C-ahstr -- \C-j"'

Bind HSTR to Ctrl-r only if it is an interactive shell:

if [[ $- =~ .*i.* ]]; then bind '"\C-r": "\C-a hstr -- \C-j"'; fi

You can bind also other HSTR commands like --kill-last-command :

if [[ $- =~ .*i.* ]]; then bind '"\C-xk": "\C-a hstr -k \C-j"'; fi
Bash Vim Keymap

Bind HSTR to a Bash key, e.g. to Ctrl-r:

bind '"\C-r": "\e0ihstr -- \C-j"'
Zsh Emacs Keymap

Bind HSTR to a zsh key, e.g. to Ctrl-r:

bindkey -s "\C-r" "\eqhstr --\n"
Alias

If you want to make running of hstr from command line even easier, then define alias in your ~/.bashrc :

alias hh=hstr

Don't forget to source ~/.bashrc to be able to use the hh command.

Colors

Let HSTR use colors:

export HSTR_CONFIG=hicolor

or ensure black and white mode:

export HSTR_CONFIG=monochromatic
Default History View

To show normal history by default (instead of the metrics-based view, which is the default), use:

export HSTR_CONFIG=raw-history-view

To show favorite commands as default view use:

export HSTR_CONFIG=favorites-view
Filtering

To use regular expressions based matching:

export HSTR_CONFIG=regexp-matching

To use substring based matching:

export HSTR_CONFIG=substring-matching

To use keywords (substrings whose order doesn't matter) search matching (default):

export HSTR_CONFIG=keywords-matching

Make search case sensitive (insensitive by default):

export HSTR_CONFIG=case-sensitive

Keep duplicates in raw-history-view (duplicate commands are discarded by default):

export HSTR_CONFIG=duplicates
Static favorites

The last selected favorite command is put at the head of the favorite commands list by default. If you want to disable this behavior and make the favorite commands list static, use the following configuration:

export HSTR_CONFIG=static-favorites
Skip favorites comments

If you don't want to show lines starting with # (comments) among favorites, then use the following configuration:

export HSTR_CONFIG=skip-favorites-comments
Blacklist

Skip commands when processing history i.e. make sure that these commands will not be shown in any view:

export HSTR_CONFIG=blacklist

Commands to be skipped are stored in the ~/.hstr_blacklist file, one per line, with a trailing empty line. For instance:

cd
my-private-command
ls
ll
Confirm on Delete

Do not prompt for confirmation when deleting history items:

export HSTR_CONFIG=no-confirm
Verbosity

Show a message when deleting the last command from history:

export HSTR_CONFIG=verbose-kill

Show warnings:

export HSTR_CONFIG=warning

Show debug messages:

export HSTR_CONFIG=debug
Bash History Settings

Use the following Bash settings to get most out of HSTR.

Increase the size of history maintained by BASH - variables defined below increase the number of history items and history file size (default value is 500):

export HISTFILESIZE=10000
export HISTSIZE=${HISTFILESIZE}

Ensure syncing (flushing and reloading) of .bash_history with in-memory history:

export PROMPT_COMMAND="history -a; history -n; ${PROMPT_COMMAND}"

Force appending of in-memory history to .bash_history (instead of overwriting):

shopt -s histappend

Use leading space to hide commands from history:

export HISTCONTROL=ignorespace

Suitable for sensitive information like passwords.

zsh History Settings

If you use zsh , set HISTFILE environment variable in ~/.zshrc :

export HISTFILE=~/.zsh_history
Examples

More colors with case sensitive search of history:

export HSTR_CONFIG=hicolor,case-sensitive

Favorite commands view in black and white with prompt at the bottom of the screen:

export HSTR_CONFIG=favorites-view,prompt-bottom

Keywords based search in colors with debug mode verbosity:

export HSTR_CONFIG=keywords-matching,hicolor,debug

[Nov 17, 2018] hh command man page

The utility was later renamed to hstr.
Notable quotes:
"... By default it parses .bash-history file that is filtered as you type a command substring. ..."
"... Favorite and frequently used commands can be bookmarked ..."
Nov 17, 2018 | www.mankier.com

hh -- easily view, navigate, sort and use your command history with shell history suggest box.

Synopsis

hh [option] [arg1] [arg2]...
hstr [option] [arg1] [arg2]...

Description

hh uses shell history to provide suggest box like functionality for commands used in the past. By default it parses .bash-history file that is filtered as you type a command substring. Commands are not just filtered, but also ordered by a ranking algorithm that considers number of occurrences, length and timestamp. Favorite and frequently used commands can be bookmarked . In addition hh allows removal of commands from history - for instance with a typo or with a sensitive content.

Options
-h --help
Show help
-n --non-interactive
Print filtered history on standard output and exit
-f --favorites
Show favorites view immediately
-s --show-configuration
Show configuration that can be added to ~/.bashrc
-b --show-blacklist
Show blacklist of commands to be filtered out before history processing
-V --version
Show version information
Keys
pattern
Type to filter shell history.
Ctrl-e
Toggle regular expression and substring search.
Ctrl-t
Toggle case sensitive search.
Ctrl-/ , Ctrl-7
Rotate view of history as provided by Bash, ranked history ordered by the number of occurrences/length/timestamp, and favorites.
Ctrl-f
Add currently selected command to favorites.
Ctrl-l
Make search pattern lowercase or uppercase.
Ctrl-r , UP arrow, DOWN arrow, Ctrl-n , Ctrl-p
Navigate in the history list.
TAB , RIGHT arrow
Choose the currently selected item for completion and let the user edit it on the command prompt.
LEFT arrow
Choose the currently selected item for completion and let the user edit it in an editor (fix command).
ENTER
Choose currently selected item for completion and execute it.
DEL
Remove currently selected item from the shell history.
BACKSPACE , Ctrl-h
Delete last pattern character.
Ctrl-u , Ctrl-w
Delete pattern and search again.
Ctrl-x
Write changes to shell history and exit.
Ctrl-g
Exit with empty prompt.
Environment Variables

hh defines the following environment variables:

HH_CONFIG
Configuration options:

hicolor
Get more colors with this option (default is monochromatic).

monochromatic
Ensure black and white view.

prompt-bottom
Show prompt at the bottom of the screen (default is prompt at the top).

regexp
Filter command history using regular expressions (substring match is default)

substring
Filter command history using substring.

keywords
Filter command history using keywords - item matches if contains all keywords in pattern in any order.

casesensitive
Make history filtering case sensitive (it's case insensitive by default).

rawhistory
Show normal history as a default view (metric-based view is shown otherwise).

favorites
Show favorites as a default view (metric-based view is shown otherwise).

duplicates
Show duplicates in rawhistory (duplicates are discarded by default).

blacklist
Load list of commands to skip when processing history from ~/.hh_blacklist (built-in blacklist used otherwise).

big-keys-skip
Skip big history entries i.e. very long lines (default).

big-keys-floor
Use different sorting slot for big keys when building metrics-based view (big keys are skipped by default).

big-keys-exit
Exit (fail) on presence of a big key in history (big keys are skipped by default).

warning
Show warning.

debug
Show debug information.

Example:
export HH_CONFIG=hicolor,regexp,rawhistory

HH_PROMPT
Change prompt string which is user@host$ by default.

Example:
export HH_PROMPT="$ "

Files
~/.hh_favorites
Bookmarked favorite commands.
~/.hh_blacklist
Command blacklist.
Bash Configuration

Optionally add the following lines to ~/.bashrc:

export HH_CONFIG=hicolor         # get more colors
shopt -s histappend              # append new history items to .bash_history
export HISTCONTROL=ignorespace   # leading space hides commands from history
export HISTFILESIZE=10000        # increase history file size (default is 500)
export HISTSIZE=${HISTFILESIZE}  # increase history size (default is 500)
export PROMPT_COMMAND="history -a; history -n; ${PROMPT_COMMAND}"
# if this is interactive shell, then bind hh to Ctrl-r (for Vi mode check doc)
if [[ $- =~ .*i.* ]]; then bind '"\C-r": "\C-a hh -- \C-j"'; fi

The prompt command ensures synchronization of the history between BASH memory and history file.

ZSH Configuration

Optionally add the following lines to ~/.zshrc:

export HISTFILE=~/.zsh_history   # ensure history file visibility
export HH_CONFIG=hicolor         # get more colors
bindkey -s "\C-r" "\eqhh\n"  # bind hh to Ctrl-r (for Vi mode check doc, experiment with --)
Examples
hh git
Start `hh` and show only history items containing 'git'.
hh --non-interactive git
Print history items containing 'git' to standard output and exit.
hh --show-configuration >> ~/.bashrc
Append default hh configuration to your Bash profile.
hh --show-blacklist
Show blacklist configured for history processing.
Author

Written by Martin Dvorak <martin.dvorak@mindforger.com>

Bugs

Report bugs to https://github.com/dvorka/hstr/issues

See Also

history(1), bash(1), zsh(1)

Referenced By

The man page hstr(1) is an alias of hh(1).

[Nov 08, 2018] Technology Detox The Health Benefits of Unplugging Unwinding by Sara Tipton

Notable quotes:
"... Another great tip is to buy one of those old-school alarm clocks so the smartphone isn't ever in your bedroom. ..."
Nov 07, 2018 | www.zerohedge.com

Authored by Sara Tipton via ReadyNutrition.com,

Recent studies have shown that 90% of Americans use digital devices for two or more hours each day and the average American spends more time a day on high-tech devices than they do sleeping: 8 hours and 21 minutes to be exact. If you've ever considered attempting a "digital detox", there are some health benefits to making that change and a few tips to make things a little easier on yourself.

Many Americans are on their phones rather than playing with their children or spending quality family time together. Some people give up technology, or certain aspects of it, such as social media for varying reasons, and there are some shockingly terrific health benefits that come along with that type of a detox from technology. In fact, more and more health experts and medical professionals are suggesting a periodic digital detox; an extended period without those technology gadgets. Studies continue to show that a digital detox, has proven to be beneficial for relationships, productivity, physical health, and mental health. If you find yourself overly stressed or unproductive or generally disengaged from those closest to you, it might be time to unplug.

DIGITAL ADDICTION RESOLUTION

It may go unnoticed but there are many who are actually addicted to their smartphones or tablet. It could be social media or YouTube videos, but these are the people who never step away. They are the ones with their face in their phone while out to dinner with their family. They can't have a quiet dinner without their phone on the table. We've seen them at the grocery store aimlessly pushing around a cart while ignoring their children and scrolling on their phone. A whopping 83% of American teenagers claim to play video games while other people are in the same room and 92% of teens report to going online daily . 24% of those users access the internet via laptops, tablets, and mobile devices.

Addiction therapists who treat gadget-obsessed people say their patients aren't that different from other kinds of addicts. Whereas alcohol, tobacco, and drugs involve a substance that a user's body gets addicted to, in behavioral addiction, it's the mind's craving to turn to the smartphone or the Internet. Taking a break teaches us that we can live without constant stimulation, and lessens our dependence on electronics. Trust us: that Facebook message with a funny meme attached or juicy tidbit of gossip can wait.

IMPROVE RELATIONSHIPS AND BE MORE PERSONABLE

Another benefit to keeping all your electronics off is that it will allow you to establish good mannerisms and people skills and build your relationships to a strong level of connection. If you have ever sat across from someone at the dinner table who made more phone contact than eye contact, you know how it feels to take a backseat to a screen. Cell phones and other gadgets force people to look down and away from their surroundings, giving them a closed off and inaccessible (and often rude) demeanor. A digital detox has the potential of forcing you out of that unhealthy comfort zone. It could be a start toward rebuilding a struggling relationship too. In a Forbes study , 3 out of 5 people claimed that they spend more time on their digital devices than they do with their partners. This can pose a real threat to building and maintaining real-life relationships. The next time you find yourself going out on a dinner date, try leaving your cell phone and other devices at home and actually have a conversation. Your significant other will thank you.

BETTER SLEEP AND HEALTHIER EATING HABITS

The sleep interference caused by these high-tech gadgets is another mental health concern. The stimulation caused by artificial light can make you feel more awake than you really are, which can potentially interfere with your sleep quality. It is recommended that you give yourself at least two hours of technology-free time before bedtime. The "blue light" has been shown to interfere with sleeping patterns by inhibiting melatonin (the hormone which controls our sleep/wake cycle known as circadian rhythm) production. Try shutting off your phone after dinner and leaving it in a room other than your bedroom. Another great tip is to buy one of those old-school alarm clocks so the smartphone isn't ever in your bedroom. This will help your body readjust to a normal and healthy sleep schedule.

Your eating habits can also suffer if you spend too much time checking your newsfeed. The Rochester Institute of Technology released a study that revealed students are more likely to eat while staring into digital media than they are to eat at a dinner table. This means that eating has now become a multi-tasking activity, rather than a social and loving experience in which healthy foods meant to sustain the body are consumed. This can prevent students from eating consciously, which promotes unhealthy eating habits such as overeating and easy choices, such as a bag of chips as opposed to washing and peeling some carrots. Whether you're an overworked college student checking your Facebook, or a single bachelor watching reruns of The Office , a digital detox is a great way to promote healthy and conscious eating.

IMPROVE OVERALL MENTAL HEALTH

Social media addicts experience a wide array of emotions when looking at the photos of Instagram models and the exercise regimes of others who live in exotic locations. These emotions can be mentally draining and psychologically unhealthy and lead to depression. Smartphone use has been linked to loneliness, shyness, and less engagement at work. In other words, one may have many "social media friends" while being lonely and unsatisfied because those friends are only accessible through their screen. Start by limiting your time on social media. Log out of all social media accounts. That way, you've actually got to log back in if you want to see what that Parisian Instagram vegan model is up to.

If you feel like a detox is in order but don't know how to go about it, start off small. Try shutting off your phone after dinner and don't turn it back on until after breakfast. Keep your phone in another room besides your bedroom overnight. If you use your phone as an alarm clock, buy a cheap alarm clock to use instead to lessen your dependence on your phone. Boredom is often the biggest factor in the beginning stages of a detox, but try playing an undistracted board game with your children, leaving your phone at home during a nice dinner out, or playing with a pet. All of these things are not only good for you but good for your family and beloved furry critter as well!

[Oct 10, 2018] Bash History Display Date And Time For Each Command

Oct 10, 2018 | www.cyberciti.biz
  1. Abhijeet Vaidya says: March 11, 2010 at 11:41 am End single quote is missing.
    Correct command is:
    echo 'export HISTTIMEFORMAT="%d/%m/%y %T "' >> ~/.bash_profile 
  2. izaak says: March 12, 2010 at 11:06 am I would also add
    $ echo 'export HISTSIZE=10000' >> ~/.bash_profile

    It's really useful, I think.

  3. Dariusz says: March 12, 2010 at 2:31 pm you can add it to /etc/profile so it is available to all users. I also add:

    # Make sure all terminals save history
    shopt -s histappend histreedit histverify
    shopt -s no_empty_cmd_completion # bash>=2.04 only

    # Whenever displaying the prompt, write the previous line to disk:
    PROMPT_COMMAND='history -a'

    #Use GREP color features by default: This will highlight the matched words / regexes
    export GREP_OPTIONS='--color=auto'
    export GREP_COLOR='1;37;41'

  4. Babar Haq says: March 15, 2010 at 6:25 am Good tip. We have multiple users connecting as root using ssh and running different commands. Is there a way to log the IP that command was run from?
    Thanks in advance.
    1. Anthony says: August 21, 2014 at 9:01 pm Just for anyone who might still find this thread (like I did today):

      export HISTTIMEFORMAT="%F %T : $(echo $SSH_CONNECTION | cut -d\ -f1) : "

      will give you the time format, plus the IP address culled from the ssh_connection environment variable (thanks for pointing that out, Cadrian, I never knew about that before), all right there in your history output.

      You could even add in $(whoami)@ right before it to get the user as well, if you like (although if everyone's logging in with the root account that's not helpful).

  5. cadrian says: March 16, 2010 at 5:55 pm Yup, you can export one of this

    env | grep SSH
    SSH_CLIENT=192.168.78.22 42387 22
    SSH_TTY=/dev/pts/0
    SSH_CONNECTION=192.168.78.22 42387 192.168.36.76 22

    As their bash history filename

    set |grep -i hist
    HISTCONTROL=ignoreboth
    HISTFILE=/home/cadrian/.bash_history
    HISTFILESIZE=1000000000
    HISTSIZE=10000000

    So in the profile you can do something like HISTFILE=/root/.bash_history_$(echo $SSH_CONNECTION | cut -d\ -f1)
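    Putting the tips from this thread together, a minimal sketch of a profile snippet (the file path, variable names, and the assumption that logins arrive over SSH are mine, not from the original posters) could look like this:

    # e.g. /etc/profile.d/ssh_history.sh (path is an assumption)
    if [ -n "$SSH_CONNECTION" ]; then
        client_ip=$(echo "$SSH_CONNECTION" | cut -d' ' -f1)
        export HISTFILE="$HOME/.bash_history_${client_ip}"    # one history file per client IP
        export HISTTIMEFORMAT="%F %T ${client_ip} "           # timestamp plus client IP in history output
    fi
    shopt -s histappend
    export PROMPT_COMMAND="history -a; history -n; ${PROMPT_COMMAND}"   # flush/reload history at every prompt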

  6. TSI says: March 21, 2010 at 10:29 am bash 4 can syslog every command but afaik, you have to recompile it (check file config-top.h). See the news file of bash: http://tiswww.case.edu/php/chet/bash/NEWS
    If you want to safely export history of your luser, you can ssl-syslog them to a central syslog server.
  7. Dinesh Jadhav says: November 12, 2010 at 11:00 am This is good command, It helps me a lot.
  8. Indie says: September 19, 2011 at 11:41 am You only need to use
    export HISTTIMEFORMAT='%F %T '

    in your .bash_profile

  9. lalit jain says: October 3, 2011 at 9:58 am -- show history with date & time

    # HISTTIMEFORMAT='%c '
    #history

  10. Sohail says: January 13, 2012 at 7:05 am Hi
    Nice trick but unfortunately, the commands which were executed in the past few days also are carrying the current day's (today's) timestamp.

    Please advice.

    Regards

    1. Raymond says: March 15, 2012 at 9:05 am Hi Sohail,

      Yes indeed, that will be the behavior of the system since you have just enabled the HISTTIMEFORMAT feature on that day. In other words, the system can't recall or record the commands which were inputted prior to enabling this feature. Hope this answers your concern.

      Thanks!

      1. Raymond says: March 15, 2012 at 9:08 am Hi Sohail,

        Yes, that will be the behavior of the system since you have just enabled on that day the HISTTIMEFORMAT feature. In other words, the system can't recall or record the commands which were inputted prior enabling of this feature, thus it will just reflect on the printed output (upon execution of "history") the current day and time. Hope this answers your concern.

        Thanks!

  11. Sohail says: February 24, 2012 at 6:45 am Hi

    The command only lists the current date (Today) even for those commands which were executed on earlier days.

    Any solutions ?

    Regards

  12. nitiratna nikalje says: August 24, 2012 at 5:24 pm hi vivek.do u know any openings for freshers in linux field? I m doing rhce course from rajiv banergy. My samba,nfs-nis,dhcp,telnet,ftp,http,ssh,squid,cron,quota and system administration is over.iptables ,sendmail and dns is remaining.

    -9029917299(Nitiratna)

  13. JMathew says: August 26, 2012 at 10:51 pm Hi,

    Is there any way to log the username also, along with the command which we typed?

    Thanks in Advance

  14. suresh says: May 22, 2013 at 1:42 pm How can I get the full command along with date and path, as we get in the history command?
  15. rajesh says: December 6, 2013 at 5:56 am Thanks it worked..
  16. Krishan says: February 7, 2014 at 6:18 am The command is not working properly. It is displaying today's date and time for all the commands, whereas I ran some of the commands three days before.

    How come it is displaying today's date?

  17. PR says: April 29, 2014 at 5:18 pm Hi..

    I want to collect the history of a particular user every day and send it by email. I wrote the script below.
    For collecting everyday history with timestamps, shall I edit the .profile file of that user?
    echo 'export HISTTIMEFORMAT="%d/%m/%y %T "' >> ~/.bash_profile
    Script:

    #!/bin/bash
    # This script sends an email of a particular user's history
    history > /tmp/history
    if [ -s /tmp/history ]
    then
        mailx -s "history 29042014" < /tmp/history
    fi
    rm /tmp/history
    # END OF THE SCRIPT
    

    Can anyone suggest a better way to collect a particular user's history every day?
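    One possible direction (a sketch only; the recipient address, paths, and the reliance on GNU date and gawk are assumptions on my part): since bash writes #epoch timestamp lines to .bash_history when HISTTIMEFORMAT is set, a daily cron job can extract yesterday's commands from that file and mail them, instead of dumping the whole in-memory history:

    #!/bin/bash
    # Sketch: mail the commands run yesterday by the invoking user.
    # Assumes HISTTIMEFORMAT is already exported (so timestamps end up in .bash_history),
    # GNU date and gawk are available, and the recipient address is a placeholder.
    RECIPIENT="admin@example.com"
    HISTORY_FILE="$HOME/.bash_history"
    START=$(date -d "yesterday 00:00" +%s)
    END=$(date -d "today 00:00" +%s)
    TMP=$(mktemp)

    gawk -v start="$START" -v end="$END" '
        /^#[0-9]+$/ { ts = substr($0, 2) + 0; next }              # timestamp line written by bash
        ts >= start && ts < end { print strftime("%F %T", ts), $0 }
    ' "$HISTORY_FILE" > "$TMP"

    if [ -s "$TMP" ]; then
        mailx -s "history $(date -d yesterday +%d%m%Y)" "$RECIPIENT" < "$TMP"
    fi
    rm -f "$TMP"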

  18. lefty.crupps says: October 24, 2014 at 7:10 pm Love it, but using the ISO date format is always recommended (YYYY-MM-DD), just as every other sorted group goes from largest sorting (Year) to smallest sorting (day)
    https://en.wikipedia.org/wiki/ISO_8601#Calendar_dates

    In that case, myne looks like this:
    echo 'export HISTTIMEFORMAT="%YY-%m-%d/ %T "' >> ~/.bashrc

    Thanks for the tip!

    1. lefty.crupps says: October 24, 2014 at 7:11 pm please delete post 33, my command is messed up.
  19. lefty.crupps says: October 24, 2014 at 7:11 pm Love it, but using the ISO date format is always recommended (YYYY-MM-DD), just as every other sorted group goes from largest sorting (Year) to smallest sorting (day)
    https://en.wikipedia.org/wiki/ISO_8601#Calendar_dates

    In that case, myne looks like this:
    echo 'export HISTTIMEFORMAT="%Y-%m-%d %T "' >> ~/.bashrc

    Thanks for the tip!

  20. Vanathu says: October 30, 2014 at 1:01 am its show only current date for all the command history
    1. lefty.crupps says: October 30, 2014 at 2:08 am it's marking all of your current history with today's date. Try checking again in a few days.
  21. tinu says: October 14, 2015 at 3:30 pm Hi All,

    I Have enabled my history with the command given :
    echo 'export HISTTIMEFORMAT="%d/%m/%y %T "' >> ~/.bash_profile

    I need to know how I can also add the IPs from which the commands are fired to the system.