Softpanorama

May the source be with you, but remember the KISS principle ;-)
Skepticism and critical thinking are not a panacea, but they can help us understand the world better

Slightly Skeptical View on Enterprise Unix Administration



The KISS rule can be expanded as: Keep It Simple, Sysadmin ;-)

This page is written as a protest against the overcomplexity and the bizarre data center atmosphere typical of "semi-outsourced" or fully outsourced datacenters ;-). Unix/Linux sysadmins are being killed by the overcomplexity of the environment. Large swaths of Linux knowledge (and many excellent books) were wiped out by the introduction of systemd. This hit especially hard the older, most experienced members of the team, who hold a unique set of organizational knowledge and whose careers allowed them to watch the development of Linux almost from version 0.92.

System administration is still a unique area where people with the ability to program can display their creativity with relative ease and can still enjoy the "old style" atmosphere of software development, in which you yourself write the specification, implement it, test the program, and then use it in your daily work. This is a very exciting, unique opportunity that no DevOps position can ever provide. Why, then, are an increasing number of sysadmins far from excited about working in those positions, or outright want to quit the field (or, at least, work four days a week)? And that includes sysadmins who have tremendous speed and capacity to process and learn new information. Even for them, "enough is enough." The answer is different for each individual sysadmin, but it is usually some variation of the following themes:

  1. Too rapid a pace of change, with a lot of "change for the sake of change" often serving as a smokescreen for outsourcing efforts (VMware yesterday, Azure today, Amazon cloud tomorrow, etc.)
  2. Job insecurity due to outsourcing/offshoring -- constant pressure to cut headcount in the name of "efficiency," which in reality has more to do with the size of top brass bonuses than with anything related to the functioning of the IT datacenter. Sysadmins over 50 are an especially vulnerable category here; if they are laid off, they have almost no chance of getting back into the IT workforce at their previous level of salary/benefits, and often the only job they can find is at Home Depot or a similar retail outlet.
  3. A back-breaking level of overcomplexity and bizarre tech decisions crippling the data center (aka crapification). A Potemkin-style culture often prevails in the evaluation of software in large US corporations: the surface sheen is more important than the substance. The marketing brochures and manuals are no different from the mainstream news media in the level of BS they spew. IBM is especially guilty (look at how they marketed IBM Watson; as Oren Etzioni, CEO of the Allen Institute for AI, noted, "the only intelligent thing about Watson was IBM PR department [push]").
  4. Bureaucratization/fossilization of the IT environment of large companies. That includes using "Performance Reviews" (the IT variant of waterboarding ;-) for the enforcement of management policies, priorities, whims, etc. That creates alienation from the company (as it should). One can think of the modern corporate data center as an organization in which the administration has ever more power in the decision-making process and eats up an ever larger share of the corporate budget, while the people who do the actual work are increasingly ignored and their share of the budget shrinks.
  5. "Neoliberal austerity" (which is essentially another name for the "war on labor") -- Drastic cost cutting measures at the expense of workforce such as elimination of external vendor training, crapification of benefits, limitation of business trips and enforcing useless or outright harmful for business "new" products instead of "tried and true" old with  the same function.    They are accompanied by the new cultural obsession with ‘character’ (as in "he/she has a right character" -- which in "Neoliberal speak" means he/she is a toothless conformist ;-), glorification of groupthink,   and the intensification of surveillance.

As Charlie Schluting noted in 2010 (Enterprise Networking Planet, April 7, 2010):

What happened to the old "sysadmin" of just a few years ago? We've split what used to be the sysadmin into application teams, server teams, storage teams, and network teams. There were often at least a few people, the holders of knowledge, who knew how everything worked, and I mean everything. Every application, every piece of network gear, and how every server was configured -- these people could save a business in times of disaster.

Now look at what we've done. Knowledge is so decentralized we must invent new roles to act as liaisons between all the IT groups.

Architects now hold much of the high-level "how it works" knowledge, but without knowing how any one piece actually does work.

In organizations with more than a few hundred IT staff and developers, it becomes nearly impossible for one person to do and know everything. This movement toward specializing in individual areas seems almost natural. That, however, does not provide a free ticket for people to turn a blind eye.

Specialization

You know the story: Company installs new application, nobody understands it yet, so an expert is hired. Often, the person with a certification in using the new application only really knows how to run that application. Perhaps they aren't interested in learning anything else, because their skill is in high demand right now. And besides, everything else in the infrastructure is run by people who specialize in those elements. Everything is taken care of.

Except, how do these teams communicate when changes need to take place? Are the storage administrators teaching the Windows administrators about storage multipathing; or, worse, logging in and setting it up because it's faster for the storage gurus to do it themselves? A fundamental level of knowledge is often lacking, which makes it very difficult for teams to brainstorm about new ways to evolve IT services. The business environment has made it OK for IT staffers to specialize and only learn one thing.

If you hire someone certified in the application, operating system, or network vendor you use, that is precisely what you get. Certifications may be a nice filter to quickly identify who has direct knowledge in the area you're hiring for, but often they indicate specialization or compensation for lack of experience.

Resource Competition

Does your IT department function as a unit? Even 20-person IT shops have turf wars, so the answer is very likely, "no." As teams are split into more and more distinct operating units, grouping occurs. One IT budget gets split between all these groups. Often each group will have a manager who pitches his needs to upper management in hopes they will realize how important the team is.

The "us vs. them" mentality manifests itself at all levels, and it's reinforced by management having to define each team's worth in the form of a budget. One strategy is to illustrate a doomsday scenario. If you paint a bleak enough picture, you may get more funding. Only if you are careful enough to illustrate the failings are due to lack of capital resources, not management or people. A manager of another group may explain that they are not receiving the correct level of service, so they need to duplicate the efforts of another group and just implement something themselves. On and on, the arguments continue.

Most often, I've seen competition between server groups result in horribly inefficient uses of hardware. For example, what happens in your organization when one team needs more server hardware? Assume that another team has five unused servers sitting in a blade chassis. Does the answer change? No, it does not. Even in test environments, sharing doesn't often happen between IT groups.

With virtualization, some aspects of resource competition get better and some remain the same. When first implemented, most groups will be running their own type of virtualization for their platform. The next step, I've most often seen, is for test servers to get virtualized. If a new group is formed to manage the virtualization infrastructure, virtual machines can be allocated to various application and server teams from a central pool and everyone is now sharing. Or, they begin sharing and then demand their own physical hardware to be isolated from others' resource hungry utilization. This is nonetheless a step in the right direction. Auto migration and guaranteed resource policies can go a long way toward making shared infrastructure, even between competing groups, a viable option.

Blamestorming

The most damaging side effect of splitting into too many distinct IT groups is the reinforcement of an "us versus them" mentality. Aside from the notion that specialization creates a lack of knowledge, blamestorming is what this article is really about. When a project is delayed, it is all too easy to blame another group. The SAN people didn't allocate storage on time, so another team was delayed. That is the timeline of the project, so all work halted until that hiccup was restored. Having someone else to blame when things get delayed makes it all too easy to simply stop working for a while.

More related to the initial points at the beginning of this article, perhaps, is the blamestorm that happens after a system outage.

Say an ERP system becomes unresponsive a few times throughout the day. The application team says it's just slowing down, and they don't know why. The network team says everything is fine. The server team says the application is "blocking on IO," which means it's a SAN issue. The SAN team says there is nothing wrong, and other applications on the same devices are fine. You've run through nearly every team, but still without an answer. The SAN people don't have access to the application servers to help diagnose the problem. The server team doesn't even know how the application runs.

See the problem? Specialized teams are distinct and by nature adversarial. Specialized staffers often relegate themselves into a niche knowing that as long as they continue working at large enough companies, "someone else" will take care of all the other pieces.

I unfortunately don't have an answer to this problem. Maybe rotating employees between departments will help. They gain knowledge and also get to know other people, which should lessen the propensity to view them as outsiders.

The tragic part of the current environment is that it is like shifting sands. And it is not only due to the "natural process of crapification of operating systems," in which the OS gradually loses its architectural integrity. The pace of change is simply too fast for mere humans to adapt to. And most of it represents "change for the sake of change," not some valuable improvement or extension of capabilities.

If you are a sysadmin who writes his own scripts, you write on sand. You spend a lot of time thinking over and debugging your scripts, which raises your productivity and diminishes the number of possible errors, but the next OS version wipes out a considerable part of your work and you need to revise your scripts again. The tale of Sisyphus can now be re-interpreted as a prescient warning about the thankless task of the sysadmin, forever learning new stuff and maintaining his own script library ;-)  Sometimes a lot of work is wiped out because the corporate brass decides to switch to a different flavor of Linux, or "yet another flavor" is added due to a large acquisition. Add to this the inevitable technological changes, and the question arises: can't you find a more respectable profession, one in which 66% of your knowledge is not replaced within the next ten years?

The Balkanization of Linux also shows in the Tower of Babel of system programming languages (C, C++, Perl, Python, Ruby, Go, Java, to name a few) and in systems that supposedly should help you but mostly do quite the opposite (Puppet, Ansible, Chef, etc.). Add to this the monitoring infrastructure (say, Nagios) and you definitely have information overload.

Inadequate training just adds to the stress. First of all, corporations no longer want to pay for it, so you are on your own and need to do it mostly in your free time, as the workload is substantial in most organizations -- using free or low-cost courses if they are available, or buying your own books and trying to learn new stuff from them (which, of course, is the mark of any good sysadmin, but should not be the only source of new knowledge). The days when you could travel to a vendor training center for a week and communicate with admins from other organizations (which was probably the most valuable part of the whole exercise) are long gone. I can attest that training by Sun (Solaris) and IBM (AIX) in the late 1990s was of really high quality, with highly qualified instructors from whom you could learn a lot outside the main topic of the course; unlike "Trump University," Sun courses could probably have been called "Sun University" with a straight face. Most training is now delivered via the Web, and the chances for face-to-face communication have disappeared. The stress has also shifted from learning "why" to learning "how"; the "why" topics are typically reserved for "advanced" courses.

Add to that the necessity to relearn stuff again and again, even though the new technologies/daemons/versions of the OS are often either the same as, or even inferior to, the previous ones, or represent an open scam in which training is just a way to extract money from lemmings (Agile, most of the DevOps hoopla, etc.). This is typical neoliberal mentality ("greed is good") implemented in education. There is also a tendency to treat virtual machines and cloud infrastructure as separate technologies, which require separate training and separate sets of certifications (AWS, Azure). This is a kind of infantilization of the profession, in which a person who learned a lot of stuff over the previous 10 years needs to forget it and relearn most of it again and again.

Of course, sysadmins are not the only ones who suffer. Computer scientists also now struggle with the excessive level of complexity and the too quickly shifting sands. Look at the tragedy of Donald Knuth and his lifelong project to create a comprehensive monograph for system programmers (The Art of Computer Programming). He was flattened by the shifting sands and probably will not be able to finish even volume 4 (out of the seven that were planned) in his lifetime.

Of course, much depends on the evolution of hardware and the changes it brings, such as the mass introduction of large SSDs, multi-core CPUs, and large amounts of RAM.

Nobody is now surprised to see a server with 128GB of RAM, a laptop with 16GB of RAM, or a cellphone with 4GB of RAM and a 1GHz CPU. (Note that the original IBM PC started with a 1MB address space -- of which only 640KB was available for programs -- and a 4.77MHz (not GHz) single-core CPU without a floating-point unit.) Such changes, while painful, are inevitable, and hardware progress has slowed down recently as it reaches the physical limits of the technology (we probably will not see 2-nanometer lithography CPUs or 8GHz CPU clock speeds in our lifetimes).

Other changes, caused by fashion and by the desire of the dominant players to entrench their positions, are more difficult to accept. It is difficult or even impossible to predict which technology will become fashionable tomorrow, or how long DevOps will remain in fashion. Typically such things last around ten years; after that, everything fades into oblivion or is even crossed out, and the former idols are shattered. This strange period of re-invention of the "glass-walls datacenter" under the banner of DevOps (and old timers still remember that IBM datacenters were hated with a passion, and that this hate created an additional, non-technological incentive first for minicomputers and later for the IBM PC) is characterized by a level of hype usually reserved for women's fashion. It sometimes looks to me as if the movie The Devil Wears Prada is a subtle parable about sysadmin work.

Add to this the horrible job market, especially for university graduates and older sysadmins (see Over 50 and unemployed), and one begins to suspect that the life of the modern sysadmin is far from paradise. When you read job descriptions on sites like Monster, Dice or Indeed, you ask yourself whether those people really want to hire anybody, or whether this is just a smokescreen for H1B job certification. The level of detail is often so precise that it is almost impossible to change your current specialization. They do not care about the level of talent, and they do not want to train a suitable candidate; they want a person who fits 100% from day one. Also, in places like NYC or SF, rents and property prices keep growing while income growth has been stagnant.

The vandalism of Unix performed by Red Hat with RHEL 7 makes the current environment somewhat unhealthy. It is clear that this was done at the whim of the Red Hat brass, not in the interest of the community. This is a typical Microsoft-style trick, which made dozens of high-quality books written by very talented authors instantly semi-obsolete, and the question arises whether it makes sense to write any book about RHEL other than for a solid advance. It generated some backlash, but the position of Red Hat as the "Microsoft of Linux" allowed it to shove its inferior technical decisions down users' throats. In a way it reminds me of the way Microsoft dealt with Windows 7, replacing it with Windows 10 and essentially destroying the previous Windows interface ecosystem (while preserving binary compatibility).

See also

Here are my notes/reflections on sysadmin problems that often arise in the rather strange (and sometimes pretty toxic) IT departments of large corporations:



NEWS CONTENTS

Old News ;-)


For the list of top articles see Recommended Links section


"I appreciate Woody Allen's humor because one of my safety valves is an appreciation for life's absurdities. His message is that life isn't a funeral march to the grave. It's a polka."

-- Dennis Kucinich

[Sep 18, 2019] the myopic drive to profitability and naivety to unintended consequences are pushing these tech out into the world before they are ready.

Sep 18, 2019 | www.moonofalabama.org

A.L. , Sep 18 2019 19:56 utc | 31

@30 David G

perhaps, just like proponents of AI and self driving cars. They just love the technology, financially and emotionally invested in it so much they can't see the forest from the trees.

I like technology, I studied engineering. But the myopic drive to profitability and naivety to unintended consequences are pushing these tech out into the world before they are ready.

engineering used to be a discipline with ethics and responsibilities... But now anybody who could write two lines of code can call themselves a software engineer....

[Sep 16, 2019] Artistic Style - Index

Sep 16, 2019 | astyle.sourceforge.net

Artistic Style 3.1 A Free, Fast, and Small Automatic Formatter
for C, C++, C++/CLI, Objective‑C, C#, and Java Source Code

Project Page: http://astyle.sourceforge.net/
SourceForge: http://sourceforge.net/projects/astyle/

Artistic Style is a source code indenter, formatter, and beautifier for the C, C++, C++/CLI, Objective‑C, C# and Java programming languages.

When indenting source code, we as programmers have a tendency to use both spaces and tab characters to create the wanted indentation. Moreover, some editors by default insert spaces instead of tabs when pressing the tab key. Other editors (Emacs for example) have the ability to "pretty up" lines by automatically setting up the white space before the code on the line, possibly inserting spaces in code that up to now used only tabs for indentation.

The NUMBER of spaces for each tab character in the source code can change between editors (unless the user sets up the number to his liking...). One of the standard problems programmers face when moving from one editor to another is that code containing both spaces and tabs, which was perfectly indented, suddenly becomes a mess to look at. Even if you as a programmer take care to ONLY use spaces or tabs, looking at other people's source code can still be problematic.

To address this problem, Artistic Style was created – a filter written in C++ that automatically re-indents and re-formats C / C++ / Objective‑C / C++/CLI / C# / Java source files. It can be used from a command line, or it can be incorporated as a library in another program.
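As a rough illustration of a command-line run (the file name and the particular style options below are just one possible choice, not a recommendation from the Artistic Style project):

# re-indent a C++ source file in place, using Allman braces and 4-space indents;
# astyle keeps a backup of the original file with an .orig suffix by default
astyle --style=allman --indent=spaces=4 MyClass.cpp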

[Sep 16, 2019] Usage -- PrettyPrinter 0.18.0 documentation

Sep 16, 2019 | prettyprinter.readthedocs.io

Usage

Install the package with pip :

pip install prettyprinter

Then, instead of

from pprint import pprint

do

from prettyprinter import cpprint

for colored output. For colorless output, remove the c prefix from the function name:

from prettyprinter import pprint
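A minimal smoke test from the shell, assuming the package installed cleanly into the default Python 3 environment (the sample dictionary is just a placeholder):

# pretty-print a small nested structure with syntax coloring in the terminal
python3 -c "from prettyprinter import cpprint; cpprint({'name': 'example', 'values': [1, 2, 3]})"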

[Sep 16, 2019] JavaScript code prettifier

Sep 16, 2019 | github.com


An embeddable script that makes source-code snippets in HTML prettier.

[Sep 16, 2019] Pretty-print for shell script

Sep 16, 2019 | stackoverflow.com

Benoit ,Oct 21, 2010 at 13:19

I'm looking for something similiar to indent but for (bash) scripts. Console only, no colorizing, etc.

Do you know of one ?

Jamie ,Sep 11, 2012 at 3:00

Vim can indent bash scripts. But not reformat them before indenting.
Backup your bash script, open it with vim, type gg=GZZ and indent will be corrected. (Note for the impatient: this overwrites the file, so be sure to do that backup!)

Though, some bugs with << (expecting EOF as first character on a line) e.g.

EDIT: ZZ not ZQ

Daniel Martí ,Apr 8, 2018 at 13:52

A bit late to the party, but it looks like shfmt could do the trick for you.

Brian Chrisman ,Sep 9 at 7:47

In bash I do this:
reindent() {
source <(echo "Zibri () {";cat "$1"; echo "}")
declare -f Zibri|head --lines=-1|tail --lines=+3 | sed -e "s/^\s\s\s\s//"
}

this eliminates comments and reindents the script "bash way".

If you have HEREDOCS in your script, they got ruined by the sed in the previous function.

So use:

reindent() {
source <(echo "Zibri () {";cat "$1"; echo "}")
declare -f Zibri|head --lines=-1|tail --lines=+3
}

But all your script will have a 4 spaces indentation.

Or you can do:

reindent () 
{ 
    rstr=$(mktemp -u "XXXXXXXXXX");
    source <(echo "Zibri () {";cat "$1"|sed -e "s/^\s\s\s\s/$rstr/"; echo "}");
    echo '#!/bin/bash';
    declare -f Zibri | head --lines=-1 | tail --lines=+3 | sed -e "s/^\s\s\s\s//;s/$rstr/    /"
}

which takes care also of heredocs.


Found this http://www.linux-kheops.com/doc/perl/perl-aubert/fmt.script .

Very nice, only one thing i took out is the [...]->test substitution.

[Sep 16, 2019] A command-line HTML pretty-printer Making messy HTML readable - Stack Overflow

Notable quotes:
"... Have a look at the HTML Tidy Project: http://www.html-tidy.org/ ..."
Sep 16, 2019 | stackoverflow.com

nisetama ,Aug 12 at 10:33

I'm looking for recommendations for HTML pretty printers which fulfill the following requirements:


Have a look at the HTML Tidy Project: http://www.html-tidy.org/

The granddaddy of HTML tools, with support for modern standards.

There used to be a fork called tidy-html5 which since became the official thing. Here is its GitHub repository .

Tidy is a console application for Mac OS X, Linux, Windows, UNIX, and more. It corrects and cleans up HTML and XML documents by fixing markup errors and upgrading legacy code to modern standards.

For your needs, here is the command line to call Tidy:
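A typical invocation, with options chosen purely as an illustration (see tidy -help for the full list; the file names are placeholders):

# indent the markup, disable line wrapping, and write the cleaned-up result
# to a separate file instead of modifying the input
tidy -indent -wrap 0 -output pretty.html messy.html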

[Sep 14, 2019] The Man Who Could Speak Japanese

This impostor definitely demonstrated programming abilities, although at the time there was no such term :-)
Notable quotes:
"... "We wrote it down. ..."
"... The next phrase was: ..."
"... " ' Booki fai kiz soy ?' " said Whitey. "It means 'Do you surrender?' " ..."
"... " ' Mizi pok loi ooni rak tong zin ?' 'Where are your comrades?' " ..."
"... "Tong what ?" rasped the colonel. ..."
"... "Tong zin , sir," our instructor replied, rolling chalk between his palms. He arched his eyebrows, as though inviting another question. There was one. The adjutant asked, "What's that gizmo on the end?" ..."
"... Of course, it might have been a Japanese newspaper. Whitey's claim to be a linguist was the last of his status symbols, and he clung to it desperately. Looking back, I think his improvisations on the Morton fantail must have been one of the most heroic achievements in the history of confidence men -- which, as you may have gathered by now, was Whitey's true profession. Toward the end of our tour of duty on the 'Canal he was totally discredited with us and transferred at his own request to the 81-millimeter platoon, where our disregard for him was no stigma, since the 81 millimeter musclemen regarded us as a bunch of eight balls anyway. Yet even then, even after we had become completely disillusioned with him, he remained a figure of wonder among us. We could scarcely believe that an impostor could be clever enough actually to invent a language -- phonics, calligraphy, and all. It had looked like Japanese and sounded like Japanese, and during his seventeen days of lecturing on that ship Whitey had carried it all in his head, remembering every variation, every subtlety, every syntactic construction. ..."
"... https://www.americanheritage.com/man-who-could-speak-japanese ..."
Sep 14, 2019 | www.nakedcapitalism.com

Wukchumni , September 13, 2019 at 4:29 pm

Re: Fake list of grunge slang:

a fabulous tale of the South Pacific by William Manchester

The Man Who Could Speak Japanese

"We wrote it down.

The next phrase was:

" ' Booki fai kiz soy ?' " said Whitey. "It means 'Do you surrender?' "

Then:

" ' Mizi pok loi ooni rak tong zin ?' 'Where are your comrades?' "

"Tong what ?" rasped the colonel.

"Tong zin , sir," our instructor replied, rolling chalk between his palms. He arched his eyebrows, as though inviting another question. There was one. The adjutant asked, "What's that gizmo on the end?"

Of course, it might have been a Japanese newspaper. Whitey's claim to be a linguist was the last of his status symbols, and he clung to it desperately. Looking back, I think his improvisations on the Morton fantail must have been one of the most heroic achievements in the history of confidence men -- which, as you may have gathered by now, was Whitey's true profession. Toward the end of our tour of duty on the 'Canal he was totally discredited with us and transferred at his own request to the 81-millimeter platoon, where our disregard for him was no stigma, since the 81 millimeter musclemen regarded us as a bunch of eight balls anyway. Yet even then, even after we had become completely disillusioned with him, he remained a figure of wonder among us. We could scarcely believe that an impostor could be clever enough actually to invent a language -- phonics, calligraphy, and all. It had looked like Japanese and sounded like Japanese, and during his seventeen days of lecturing on that ship Whitey had carried it all in his head, remembering every variation, every subtlety, every syntactic construction.

https://www.americanheritage.com/man-who-could-speak-japanese

[Sep 13, 2019] How to setup nrpe for client side monitoring - LinuxConfig.org

Sep 13, 2019 | linuxconfig.org

... ... ...

We can also include our own custom configuration file(s) in our custom packages, thus allowing updating client monitoring configuration in a centralized and automated way. Keeping that in mind, we'll configure the client in /etc/nrpe.d/custom.cfg on all distributions in the following examples.

By default NRPE does not accept commands from any host other than localhost. This is for security reasons. To allow command execution from a server, we need to set the server's IP address as an allowed address. In our case the server is a Nagios server, with IP address 10.101.20.34 . We add the following to our client configuration:

allowed_hosts=10.101.20.34



Multiple addresses or hostnames can be added, separated by commas. Note that the above logic requires a static address for the monitoring server. Using DHCP on the monitoring server will surely break your configuration if you use an IP address here. The same applies to the scenario where you use hostnames and the client can't resolve the server's hostname.
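For example, allowing a second (hypothetical) monitoring server alongside the one above would look like this in /etc/nrpe.d/custom.cfg:

# 10.101.20.34 is the Nagios server from this article; the second address is purely illustrative
allowed_hosts=10.101.20.34,10.101.20.35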

Configuring a custom check on the server and client side

To demonstrate our monitoring setup's capabilities, let's say we would like to know whether the local postfix system delivers mail for user root on a client. The mail could contain a cronjob output, some report, or something that is written to STDERR and is delivered as mail by default. For instance, abrt sends a crash report to root by default on a process crash. We did not set up a mail relay, but we still would like to know if a mail arrives. Let's write a custom check to monitor that.

  1. Our first piece of the puzzle is the check itself. Consider the following simple bash script called check_unread_mail :
    #!/bin/bash
    
    USER=root
    
    if [ "$(command -v finger >> /dev/null; echo $?)" -gt 0 ]; then
            echo "UNKNOWN: utility finger not found"
            exit 3
    fi
    if [ "$(id "$USER" >> /dev/null ; echo $?)" -gt 0 ]; then
            echo "UNKNOWN: user $USER does not exist"
            exit 3
    fi
    ## check for mail
    if [ "$(finger -pm "$USER" | tail -n 1 | grep -ic "No mail.")" -gt 0 ]; then
            echo "OK: no unread mail for user $USER"
            exit 0
    else
            echo "WARNING: unread mail for user $USER"
            exit 1
    fi
    

    This simple check uses the finger utility to check for unread mail for user root . Output of the finger -pm may vary by version and thus distribution, so some adjustments may be needed.

    For example on Fedora 30, last line of the output of finger -pm <username> is "No mail.", but on openSUSE Leap 15.1 it would be "No Mail." (notice the upper case Mail). In this case the grep -i handles this difference, but it shows well that when working with different distributions and versions, some additional work may be needed.

  2. We'll need finger to make this check work. The package's name is the same on all distributions, so we can install it with apt , zypper , dnf or yum .
  3. We need to set the check executable:
    # chmod +x check_unread_mail
    
  4. We'll place the check into the /usr/lib64/nagios/plugins directory, the common place for nrpe checks. We'll reference it later.
  5. We'll call our command check_mail_root . Let's place another line into our custom client configuration, where we tell nrpe what commands we accept, and what needs to be done when a given command arrives:
    command[check_mail_root]=/usr/lib64/nagios/plugins/check_unread_mail
    
  6. With this our client configuration is complete. We can start the service on the client with systemd . The service name is nagios-nrpe-server on Debian derivatives, and simply nrpe on other distributions.
    # systemctl start nagios-nrpe-server
    # systemctl status nagios-nrpe-server
    ● nagios-nrpe-server.service - Nagios Remote Plugin Executor
       Loaded: loaded (/lib/systemd/system/nagios-nrpe-server.service; enabled; vendor preset: enabled)
       Active: active (running) since Tue 2019-09-10 13:03:10 CEST; 1min 51s ago
         Docs: http://www.nagios.org/documentation
     Main PID: 3782 (nrpe)
        Tasks: 1 (limit: 3549)
       CGroup: /system.slice/nagios-nrpe-server.service
               └─3782 /usr/sbin/nrpe -c /etc/nagios/nrpe.cfg -f
    
    szept 10 13:03:10 mail-test-client systemd[1]: Started Nagios Remote Plugin Executor.
    szept 10 13:03:10 mail-test-client nrpe[3782]: Starting up daemon
    szept 10 13:03:10 mail-test-client nrpe[3782]: Server listening on 0.0.0.0 port 5666.
    szept 10 13:03:10 mail-test-client nrpe[3782]: Server listening on :: port 5666.
    szept 10 13:03:10 mail-test-client nrpe[3782]: Listening for connections on port 5666
    



  7. Now we can configure the server side. If we don't have one already, we can define a command that calls a remote nrpe instance with a command as its sole argument:
    # this command runs a program $ARG1$ with no arguments
    define command {
            command_name    check_nrpe_1arg
            command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -t 60 -c $ARG1$ 2>/dev/null
    }
    
  8. We also define the client as a host:
    define host {
            use                     linux-server
            host_name               mail-test-client
            alias                   mail-test-client
            address                 mail-test-client
    }
    
    The address can be an IP address or hostname. In the latter case we need to ensure it can be resolved by the monitoring server.
  9. We can define a service on the above host using the Nagios side command and the client side command:
    define service {
            use                        generic-service
            host_name                  mail-test-client
            service_description        OS:unread mail for root
            check_command              check_nrpe_1arg!check_mail_root
    }
    
    These adjustments can be placed in any configuration file the Nagios server reads on startup, but it is good practice to keep configuration files tidy.
  10. We verify our new Nagios configuration:
    # nagios -v /etc/nagios/nagios.cfg
    
    If "Things look okay", we can apply the configuration with a server reload:

[Sep 12, 2019] 9 Best File Comparison and Difference (Diff) Tools for Linux

Sep 12, 2019 | www.tecmint.com

3. Kompare

Kompare is a diff GUI wrapper that allows users to view differences between files and also merge them.

Some of its features include:

  1. Supports multiple diff formats
  2. Supports comparison of directories
  3. Supports reading diff files
  4. Customizable interface
  5. Creating and applying patches to source files

Kompare Tool – Compare Two Files in Linux

Visit Homepage : https://www.kde.org/applications/development/kompare/

4. DiffMerge

DiffMerge is a cross-platform GUI application for comparing and merging files. It has two engines: a Diff engine, which shows the difference between two files and supports intra-line highlighting and editing, and a Merge engine, which outputs the changed lines between three files.

It has got the following features:

  1. Supports directory comparison
  2. File browser integration
  3. Highly configurable

DiffMerge – Compare Files in Linux

Visit Homepage : https://sourcegear.com/diffmerge/

5. Meld – Diff Tool

Meld is a lightweight GUI diff and merge tool. It enables users to compare files, directories plus version controlled programs. Built specifically for developers, it comes with the following features:

  1. Two-way and three-way comparison of files and directories
  2. Update of file comparison as a user types more words
  3. Makes merges easier using auto-merge mode and actions on changed blocks
  4. Easy comparisons using visualizations
  5. Supports Git, Mercurial, Subversion, Bazaar plus many more

Meld – A Diff Tool to Compare File in Linux

Visit Homepage : http://meldmerge.org/
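As a quick illustration of how those modes are invoked from the shell (the paths are placeholders, and the single-directory form assumes the directory is under a version control system Meld supports):

meld file_a.conf file_b.conf      # two-way file comparison
meld dir_old/ dir_new/            # directory comparison
meld my_checkout/                 # version-control view of a working copy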

6. Diffuse – GUI Diff Tool

Diffuse is another popular, free, small and simple GUI diff and merge tool that you can use on Linux. Written in Python, it offers two major functionalities, namely file comparison and version control, allowing file editing, merging of files and also outputting the difference between files.

You can view a comparison summary, select lines of text in files using a mouse pointer, match lines in adjacent files and edit different file. Other features include:

  1. Syntax highlighting
  2. Keyboard shortcuts for easy navigation
  3. Supports unlimited undo
  4. Unicode support
  5. Supports Git, CVS, Darcs, Mercurial, RCS, Subversion, SVK and Monotone

DiffUse – A Tool to Compare Text Files in Linux

Visit Homepage : http://diffuse.sourceforge.net/

7. XXdiff – Diff and Merge Tool

XXdiff is a free, powerful file and directory comparator and merge tool that runs on Unix like operating systems such as Linux, Solaris, HP/UX, IRIX, DEC Tru64. One limitation of XXdiff is its lack of support for unicode files and inline editing of diff files.

It has the following list of features:

  1. Shallow and recursive comparison of two or three files, or of two directories
  2. Horizontal difference highlighting
  3. Interactive merging of files and saving of resulting output
  4. Supports merge reviews/policing
  5. Supports external diff tools such as GNU diff, SIG diff, Cleareddiff and many more
  6. Extensible using scripts
  7. Fully customizable using resource file plus many other minor features

xxdiff Tool

Visit Homepage : http://furius.ca/xxdiff/

8. KDiff3 – Diff and Merge Tool

KDiff3 is yet another cool, cross-platform diff and merge tool from KDevelop. It works on all Unix-like platforms including Linux, Mac OS X and Windows.

It can compare or merge two to three files or directories and has the following notable features:

  1. Indicates differences line by line and character by character
  2. Supports auto-merge
  3. In-built editor to deal with merge-conflicts
  4. Supports Unicode, UTF-8 and many other codecs
  5. Allows printing of differences
  6. Windows explorer integration support
  7. Also supports auto-detection via byte-order-mark "BOM"
  8. Supports manual alignment of lines
  9. Intuitive GUI and many more

KDiff3 Tool for Linux

Visit Homepage : http://kdiff3.sourceforge.net/

9. TkDiff

TkDiff is also a cross-platform, easy-to-use GUI wrapper for the Unix diff tool. It provides a side-by-side view of the differences between two input files. It can run on Linux, Windows and Mac OS X.

Additionally, it has some other exciting features including diff bookmarks, a graphical map of differences for easy and quick navigation plus many more.

Visit Homepage : https://sourceforge.net/projects/tkdiff/

Having read this review of some of the best file and directory comparator and merge tools, you probably want to try out some of them. These may not be the only diff tools available on Linux, but they are known to offer some of the best features; you may also want to let us know of any other diff tools out there that you have tested and think deserve to be mentioned among the best.

[Sep 11, 2019] string - Extract substring in Bash - Stack Overflow

Sep 11, 2019 | stackoverflow.com

Jeff ,May 8 at 18:30

Given a filename in the form someletters_12345_moreleters.ext , I want to extract the 5 digits and put them into a variable.

So to emphasize the point, I have a filename with x number of characters then a five digit sequence surrounded by a single underscore on either side then another set of x number of characters. I want to take the 5 digit number and put that into a variable.

I am very interested in the number of different ways that this can be accomplished.

Berek Bryan ,Jan 24, 2017 at 9:30

Use cut :
echo 'someletters_12345_moreleters.ext' | cut -d'_' -f 2

More generic:

INPUT='someletters_12345_moreleters.ext'
SUBSTRING=$(echo $INPUT| cut -d'_' -f 2)
echo $SUBSTRING

JB. ,Jan 6, 2015 at 10:13

If x is constant, the following parameter expansion performs substring extraction:
b=${a:12:5}

where 12 is the offset (zero-based) and 5 is the length

If the underscores around the digits are the only ones in the input, you can strip off the prefix and suffix (respectively) in two steps:

tmp=${a#*_}   # remove prefix ending in "_"
b=${tmp%_*}   # remove suffix starting with "_"

If there are other underscores, it's probably feasible anyway, albeit more tricky. If anyone knows how to perform both expansions in a single expression, I'd like to know too.

Both solutions presented are pure bash, with no process spawning involved, hence very fast.

A Sahra ,Mar 16, 2017 at 6:27

Generic solution where the number can be anywhere in the filename, using the first of such sequences:
number=$(echo $filename | egrep -o '[[:digit:]]{5}' | head -n1)

Another solution to extract exactly a part of a variable:

number=${filename:offset:length}

If your filename always have the format stuff_digits_... you can use awk:

number=$(echo $filename | awk -F _ '{ print $2 }')

Yet another solution to remove everything except digits, use

number=$(echo $filename | tr -cd '[[:digit:]]')

sshow ,Jul 27, 2017 at 17:22

In case someone wants more rigorous information, you can also search it in man bash like this
$ man bash [press return key]
/substring  [press return key]
[press "n" key]
[press "n" key]
[press "n" key]
[press "n" key]

Result:

${parameter:offset}
       ${parameter:offset:length}
              Substring Expansion.  Expands to  up  to  length  characters  of
              parameter  starting  at  the  character specified by offset.  If
              length is omitted, expands to the substring of parameter  start‐
              ing at the character specified by offset.  length and offset are
              arithmetic expressions (see ARITHMETIC  EVALUATION  below).   If
              offset  evaluates  to a number less than zero, the value is used
              as an offset from the end of the value of parameter.  Arithmetic
              expressions  starting  with  a - must be separated by whitespace
              from the preceding : to be distinguished from  the  Use  Default
              Values  expansion.   If  length  evaluates to a number less than
              zero, and parameter is not @ and not an indexed  or  associative
              array,  it is interpreted as an offset from the end of the value
              of parameter rather than a number of characters, and the  expan‐
              sion is the characters between the two offsets.  If parameter is
              @, the result is length positional parameters beginning at  off‐
              set.   If parameter is an indexed array name subscripted by @ or
              *, the result is the length members of the array beginning  with
              ${parameter[offset]}.   A  negative  offset is taken relative to
              one greater than the maximum index of the specified array.  Sub‐
              string  expansion applied to an associative array produces unde‐
              fined results.  Note that a negative offset  must  be  separated
              from  the  colon  by  at least one space to avoid being confused
              with the :- expansion.  Substring indexing is zero-based  unless
              the  positional  parameters are used, in which case the indexing
              starts at 1 by default.  If offset  is  0,  and  the  positional
              parameters are used, $0 is prefixed to the list.

Aleksandr Levchuk ,Aug 29, 2011 at 5:51

Building on jor's answer (which doesn't work for me):
substring=$(expr "$filename" : '.*_\([^_]*\)_.*')

kayn ,Oct 5, 2015 at 8:48

I'm surprised this pure bash solution didn't come up:
a="someletters_12345_moreleters.ext"
IFS="_"
set $a
echo $2
# prints 12345

You probably want to reset IFS to what value it was before, or unset IFS afterwards!

zebediah49 ,Jun 4 at 17:31

Here's how i'd do it:
FN=someletters_12345_moreleters.ext
[[ ${FN} =~ _([[:digit:]]{5})_ ]] && NUM=${BASH_REMATCH[1]}

Note: the above is a regular expression and is restricted to your specific scenario of five digits surrounded by underscores. Change the regular expression if you need different matching.

TranslucentCloud ,Jun 16, 2014 at 13:27

Following the requirements

I have a filename with x number of characters then a five digit sequence surrounded by a single underscore on either side then another set of x number of characters. I want to take the 5 digit number and put that into a variable.

I found some grep ways that may be useful:

$ echo "someletters_12345_moreleters.ext" | grep -Eo "[[:digit:]]+" 
12345

or better

$ echo "someletters_12345_moreleters.ext" | grep -Eo "[[:digit:]]{5}" 
12345

And then with -Po syntax:

$ echo "someletters_12345_moreleters.ext" | grep -Po '(?<=_)\d+' 
12345

Or if you want to make it fit exactly 5 characters:

$ echo "someletters_12345_moreleters.ext" | grep -Po '(?<=_)\d{5}' 
12345

Finally, to make it be stored in a variable it is just need to use the var=$(command) syntax.

Darron ,Jan 9, 2009 at 16:13

Without any sub-processes you can:
shopt -s extglob
front=${input%%_+([a-zA-Z]).*}
digits=${front##+([a-zA-Z])_}

A very small variant of this will also work in ksh93.

user2350426 ,Aug 5, 2014 at 8:11

If we focus on the concept of:
"A run of (one or several) digits"

We could use several external tools to extract the numbers.
We could quite easily erase all other characters, with either sed or tr:

name='someletters_12345_moreleters.ext'

echo $name | sed 's/[^0-9]*//g'    # 12345
echo $name | tr -c -d 0-9          # 12345

But if $name contains several runs of numbers, the above will fail:

If "name=someletters_12345_moreleters_323_end.ext", then:

echo $name | sed 's/[^0-9]*//g'    # 12345323
echo $name | tr -c -d 0-9          # 12345323

We need to use regular expresions (regex).
To select only the first run (12345 not 323) in sed and perl:

echo $name | sed 's/[^0-9]*\([0-9]\{1,\}\).*$/\1/'
perl -e 'my $name='$name';my ($num)=$name=~/(\d+)/;print "$num\n";'

But we could as well do it directly in bash (1) :

regex=[^0-9]*([0-9]{1,}).*$; \
[[ $name =~ $regex ]] && echo ${BASH_REMATCH[1]}

This allows us to extract the FIRST run of digits of any length
surrounded by any other text/characters.

Note : regex=[^0-9]*([0-9]{5,5}).*$; will match only exactly 5 digit runs. :-)

(1) : faster than calling an external tool for each short texts. Not faster than doing all processing inside sed or awk for large files.

codist ,May 6, 2011 at 12:50

Here's a prefix-suffix solution (similar to the solutions given by JB and Darron) that matches the first block of digits and does not depend on the surrounding underscores:
str='someletters_12345_morele34ters.ext'
s1="${str#"${str%%[[:digit:]]*}"}"   # strip off non-digit prefix from str
s2="${s1%%[^[:digit:]]*}"            # strip off non-digit suffix from s1
echo "$s2"                           # 12345

Campa ,Oct 21, 2016 at 8:12

I love sed 's capability to deal with regex groups:
> var="someletters_12345_moreletters.ext"
> digits=$( echo $var | sed "s/.*_\([0-9]\+\).*/\1/p" -n )
> echo $digits
12345

A slightly more general option would be not to assume that you have an underscore _ marking the start of your digits sequence, hence for instance stripping off all non-numbers you get before your sequence: s/[^0-9]\+\([0-9]\+\).*/\1/p .


> man sed | grep s/regexp/replacement -A 2
s/regexp/replacement/
    Attempt to match regexp against the pattern space.  If successful, replace that portion matched with replacement.  The replacement may contain the special  character  &  to
    refer to that portion of the pattern space which matched, and the special escapes \1 through \9 to refer to the corresponding matching sub-expressions in the regexp.

More on this, in case you're not too confident with regexps:

All escapes \ are there to make sed 's regexp processing work.

Dan Dascalescu ,May 8 at 18:28

Given test.txt is a file containing "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
cut -b19-20 test.txt > test1.txt   # This will extract chars 19 & 20 "ST"
while read -r; do
    x=$REPLY
done < test1.txt
echo $x
ST

Alex Raj Kaliamoorthy ,Jul 29, 2016 at 7:41

My answer will have more control on what you want out of your string. Here is the code on how you can extract 12345 out of your string
str="someletters_12345_moreleters.ext"
str=${str#*_}
str=${str%_more*}
echo $str

This will be more efficient if you want to extract something that has any chars like abc or any special characters like _ or - . For example: If your string is like this and you want everything that is after someletters_ and before _moreleters.ext :

str="someletters_123-45-24a&13b-1_moreleters.ext"

With my code you can mention what exactly you want. Explanation:

#*  It will remove the preceding string including the matching key. Here the key we mentioned is _
%   It will remove the following string including the matching key. Here the key we mentioned is '_more*'

Do some experiments yourself and you would find this interesting.

Dan Dascalescu ,May 8 at 18:27

similar to substr('abcdefg', 2-1, 3) in php:
echo 'abcdefg'|tail -c +2|head -c 3

olibre ,Nov 25, 2015 at 14:50

Ok, here goes pure Parameter Substitution with an empty string. Caveat is that I have defined someletters and moreletters as only characters. If they are alphanumeric, this will not work as it is.
filename=someletters_12345_moreletters.ext
substring=${filename//@(+([a-z])_|_+([a-z]).*)}
echo $substring
12345

gniourf_gniourf ,Jun 4 at 17:33

There's also the expr command (an external utility, not a bash builtin):
INPUT="someletters_12345_moreleters.ext"  
SUBSTRING=`expr match "$INPUT" '.*_\([[:digit:]]*\)_.*' `  
echo $SUBSTRING

russell ,Aug 1, 2013 at 8:12

A little late, but I just ran across this problem and found the following:
host:/tmp$ asd=someletters_12345_moreleters.ext 
host:/tmp$ echo `expr $asd : '.*_\(.*\)_'`
12345
host:/tmp$

I used it to get millisecond resolution on an embedded system that does not have %N for date:

set `grep "now at" /proc/timer_list`
nano=$3
fraction=`expr $nano : '.*\(...\)......'`
$debug nano is $nano, fraction is $fraction

> ,Aug 5, 2018 at 17:13

A bash solution:
IFS="_" read -r x digs x <<<'someletters_12345_moreleters.ext'

This will clobber a variable called x . The var x could be changed to the var _ .

input='someletters_12345_moreleters.ext'
IFS="_" read -r _ digs _ <<<"$input"

[Sep 08, 2019] How to replace spaces in file names using a bash script

Sep 08, 2019 | stackoverflow.com



Mark Byers ,Apr 25, 2010 at 19:20

Can anyone recommend a safe solution to recursively replace spaces with underscores in file and directory names starting from a given root directory? For example:
$ tree
.
|-- a dir
|   `-- file with spaces.txt
`-- b dir
    |-- another file with spaces.txt
    `-- yet another file with spaces.pdf

becomes:

$ tree
.
|-- a_dir
|   `-- file_with_spaces.txt
`-- b_dir
    |-- another_file_with_spaces.txt
    `-- yet_another_file_with_spaces.pdf

Jürgen Hötzel ,Nov 4, 2015 at 3:03

Use rename (aka prename ) which is a Perl script which may be on your system already. Do it in two steps:
find -name "* *" -type d | rename 's/ /_/g'    # do the directories first
find -name "* *" -type f | rename 's/ /_/g'

Based on Jürgen's answer and able to handle multiple layers of files and directories in a single bound using the "Revision 1.5 1998/12/18 16:16:31 rmb1" version of /usr/bin/rename (a Perl script):

find /tmp/ -depth -name "* *" -execdir rename 's/ /_/g' "{}" \;

oevna ,Jan 1, 2016 at 8:25

I use:
for f in *\ *; do mv "$f" "${f// /_}"; done

Though it's not recursive, it's quite fast and simple. I'm sure someone here could update it to be recursive.

The ${f// /_} part utilizes bash's parameter expansion mechanism to replace a pattern within a parameter with supplied string. The relevant syntax is ${parameter/pattern/string} . See: https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html or http://wiki.bash-hackers.org/syntax/pe .

armandino ,Dec 3, 2013 at 20:51

find . -depth -name '* *' \
| while IFS= read -r f ; do mv -i "$f" "$(dirname "$f")/$(basename "$f"|tr ' ' _)" ; done

failed to get it right at first, because I didn't think of directories.

Edmund Elmer ,Jul 3 at 7:12

you can use detox by Doug Harple
detox -r <folder>

Dennis Williamson ,Mar 22, 2012 at 20:33

A find/rename solution. rename is part of util-linux.

You need to descend depth first, because a whitespace filename can be part of a whitespace directory:

find /tmp/ -depth -name "* *" -execdir rename " " "_" "{}" ";"

armandino ,Apr 26, 2010 at 11:49

bash 4.0
#!/bin/bash
shopt -s globstar
for file in **/*\ *
do 
    mv "$file" "${file// /_}"       
done

Itamar ,Jan 31, 2013 at 21:27

you can use this:
find . -name '* *' | while read fname
do
        new_fname=`echo $fname | tr " " "_"`

        if [ -e $new_fname ]
        then
                echo "File $new_fname already exists. Not replacing $fname"
        else
                echo "Creating new file $new_fname to replace $fname"
                mv "$fname" $new_fname
        fi
done

yabt ,Apr 26, 2010 at 14:54

Here's a (quite verbose) find -exec solution which writes "file already exists" warnings to stderr:
function trspace() {
   declare dir name bname dname newname replace_char
   [ $# -lt 1 -o $# -gt 2 ] && { echo "usage: trspace dir char"; return 1; }
   dir="${1}"
   replace_char="${2:-_}"
   find "${dir}" -xdev -depth -name $'*[ \t\r\n\v\f]*' -exec bash -c '
      for ((i=1; i<=$#; i++)); do
         name="${@:i:1}"
         dname="${name%/*}"
         bname="${name##*/}"
         newname="${dname}/${bname//[[:space:]]/${0}}"
         if [[ -e "${newname}" ]]; then
            echo "Warning: file already exists: ${newname}" 1>&2
         else
            mv "${name}" "${newname}"
         fi
      done
  ' "${replace_char}" '{}' +
}

trspace rootdir _

degi ,Aug 8, 2011 at 9:10

This one does a little bit more. I use it to rename my downloaded torrents (no special characters (non-ASCII), spaces, multiple dots, etc.).
#!/usr/bin/perl

&rena(`find . -type d`);
&rena(`find . -type f`);

sub rena
{
    ($elems)=@_;
    @t=split /\n/,$elems;

    for $e (@t)
    {
    $_=$e;
    # remove ./ of find
    s/^\.\///;
    # non ascii transliterate
    tr [\200-\377][_];
    tr [\000-\40][_];
    # special characters we do not want in paths
    s/[ \-\,\;\?\+\'\"\!\[\]\(\)\@\#]/_/g;
    # multiple dots except for extension
    while (/\..*\./)
    {
        s/\./_/;
    }
    # only one _ consecutive
    s/_+/_/g;
    next if ($_ eq $e ) or ("./$_" eq $e);
    print "$e -> $_\n";
    rename ($e,$_);
    }
}

Junyeop Lee ,Apr 10, 2018 at 9:44

Recursive version of Naidim's answer:
find . -name "* *" | awk '{ print length, $0 }' | sort -nr -s | cut -d" " -f2- | while read f; do base=$(basename "$f"); newbase="${base// /_}"; mv "$(dirname "$f")/$(basename "$f")" "$(dirname "$f")/$newbase"; done

ghoti ,Dec 5, 2016 at 21:16

I found this script around; it may be interesting :)
 IFS=$'\n';for f in `find .`; do file=$(echo $f | tr [:blank:] '_'); [ -e $f ] && [ ! -e $file ] && mv "$f" $file;done;unset IFS

ghoti ,Dec 5, 2016 at 21:17

Here's a reasonably sized bash script solution
#!/bin/bash
(
IFS=$'\n'
    for y in $(ls "$1")
      do
         mv "$1/$y" "$1/$(echo "$y" | sed 's/ /_/g')"
      done
)

user1060059 ,Nov 22, 2011 at 15:15

This only finds files inside the current directory and renames them . I have this aliased.

find ./ -name "* *" -type f -d 1 | perl -ple '$file = $_; $file =~ s/\s+/_/g; rename($_, $file);'

Hongtao ,Sep 26, 2014 at 19:30

I just made one for my own purpose. You may use it as a reference.
#!/bin/bash
cd /vzwhome/c0cheh1/dev_source/UB_14_8
for file in *
do
    echo $file
    cd "/vzwhome/c0cheh1/dev_source/UB_14_8/$file/Configuration/$file"
    echo "==> `pwd`"
    for subfile in *\ *; do [ -d "$subfile" ] && ( mv "$subfile" "$(echo $subfile | sed -e 's/ /_/g')" ); done
    ls
    cd /vzwhome/c0cheh1/dev_source/UB_14_8
done

Marcos Jean Sampaio ,Dec 5, 2016 at 20:56

For files in folder named /files
for i in `IFS="";find /files -name *\ *`
do
   echo $i
done > /tmp/list


while read line
do
   mv "$line" `echo $line | sed 's/ /_/g'`
done < /tmp/list

rm /tmp/list

Muhammad Annaqeeb ,Sep 4, 2017 at 11:03

For those struggling through this using macOS, first install all the tools:
 brew install tree findutils rename

Then, when you need to rename, alias GNU find (gfind) as find and run the code of @Michel Krelin:

alias find=gfind 
find . -depth -name '* *' \
| while IFS= read -r f ; do mv -i "$f" "$(dirname "$f")/$(basename "$f"|tr ' ' _)" ; done

[Sep 07, 2019] As soon as you stop writing code on a regular basis you stop being a programmer. You lose your qualification very quickly. That's a typical tragedy of talented programmers who became mediocre managers or, worse, theoretical computer scientists

Programming skills are somewhat similar to the skills of people who play violin or piano. As soon as you stop playing, the skills start to evaporate: first slowly, then quicker. In two years you will probably lose 80%.
Notable quotes:
"... I happened to look the other day. I wrote 35 programs in January, and 28 or 29 programs in February. These are small programs, but I have a compulsion. I love to write programs and put things into it. ..."
Sep 07, 2019 | archive.computerhistory.org

Dijkstra said he was proud to be a programmer. Unfortunately he changed his attitude completely, and I think he wrote his last computer program in the 1980s. At this conference I went to in 1967 about simulation language, Chris Strachey was going around asking everybody at the conference what was the last computer program you wrote. This was 1967. Some of the people said, "I've never written a computer program." Others would say, "Oh yeah, here's what I did last week." I asked Edsger this question when I visited him in Texas in the 90s and he said, "Don, I write programs now with pencil and paper, and I execute them in my head." He finds that a good enough discipline.

I think he was mistaken on that. He taught me a lot of things, but I really think that if he had continued... One of Dijkstra's greatest strengths was that he felt a strong sense of aesthetics, and he didn't want to compromise his notions of beauty. They were so intense that when he visited me in the 1960s, I had just come to Stanford. I remember the conversation we had. It was in the first apartment, our little rented house, before we had electricity in the house.

We were sitting there in the dark, and he was telling me how he had just learned about the specifications of the IBM System/360, and it made him so ill that his heart was actually starting to flutter.

He intensely disliked things that he didn't consider clean to work with. So I can see that he would have distaste for the languages that he had to work with on real computers. My reaction to that was to design my own language, and then make Pascal so that it would work well for me in those days. But his response was to do everything only intellectually.

So, programming.

I happened to look the other day. I wrote 35 programs in January, and 28 or 29 programs in February. These are small programs, but I have a compulsion. I love to write programs and put things into it. I think of a question that I want to answer, or I have part of my book where I want to present something. But I can't just present it by reading about it in a book. As I code it, it all becomes clear in my head. It's just the discipline. The fact that I have to translate my knowledge of this method into something that the machine is going to understand just forces me to make that crystal-clear in my head. Then I can explain it to somebody else infinitely better. The exposition is always better if I've implemented it, even though it's going to take me more time.

[Sep 07, 2019] Knuth about computer science and money: At that point I made the decision in my life that I wasn't going to optimize my income;

Sep 07, 2019 | archive.computerhistory.org

So I had a programming hat when I was outside of Cal Tech, and at Cal Tech I am a mathematician taking my grad studies. A startup company, called Green Tree Corporation because green is the color of money, came to me and said, "Don, name your price. Write compilers for us and we will take care of finding computers for you to debug them on, and assistance for you to do your work. Name your price." I said, "Oh, okay. $100,000," assuming that this was... In that era this was not quite at Bill Gates's level today, but it was sort of out there.

The guy didn't blink. He said, "Okay." I didn't really blink either. I said, "Well, I'm not going to do it. I just thought this was an impossible number."

At that point I made the decision in my life that I wasn't going to optimize my income; I was really going to do what I thought I could do for well, I don't know. If you ask me what makes me most happy, number one would be somebody saying "I learned something from you". Number two would be somebody saying "I used your software". But number infinity would be Well, no. Number infinity minus one would be "I bought your book". It's not as good as "I read your book", you know. Then there is "I bought your software"; that was not in my own personal value. So that decision came up. I kept up with the literature about compilers. The Communications of the ACM was where the action was. I also worked with people on trying to debug the ALGOL language, which had problems with it. I published a few papers, like "The Remaining Trouble Spots in ALGOL 60" was one of the papers that I worked on. I chaired a committee called "Smallgol" which was to find a subset of ALGOL that would work on small computers. I was active in programming languages.

[Sep 07, 2019] Knuth: maybe 1 in 50 people have the "computer scientist's" type of intellect

Sep 07, 2019 | conservancy.umn.edu

Frana: You have made the comment several times that maybe 1 in 50 people have the "computer scientist's mind."

Knuth: Yes.

Frana: I am wondering if a large number of those people are trained professional librarians? [laughter] There is some strangeness there. But can you pinpoint what it is about the mind of the computer scientist that is....

Knuth: That is different?

Frana: What are the characteristics?

Knuth: Two things: one is the ability to deal with non-uniform structure, where you have case one, case two, case three, case four. Or that you have a model of something where the first component is integer, the next component is a Boolean, and the next component is a real number, or something like that, you know, non-uniform structure. To deal fluently with those kinds of entities, which is not typical in other branches of mathematics, is critical. And the other characteristic ability is to shift levels quickly, from looking at something in the large to looking at something in the small, and many levels in between, jumping from one level of abstraction to another. You know that, when you are adding one to some number, that you are actually getting closer to some overarching goal. These skills, being able to deal with nonuniform objects and to see through things from the top level to the bottom level, these are very essential to computer programming, it seems to me. But maybe I am fooling myself because I am too close to it.

Frana: It is the hardest thing to really understand that which you are existing within.

Knuth: Yes.

[Sep 07, 2019] Knuth: I can be a writer, who tries to organize other people's ideas into some kind of a more coherent structure so that it is easier to put things together

Sep 07, 2019 | conservancy.umn.edu

Knuth: I can be a writer, who tries to organize other people's ideas into some kind of a more coherent structure so that it is easier to put things together. I can see that I could be viewed as a scholar that does his best to check out sources of material, so that people get credit where it is due. And to check facts over, not just to look at the abstract of something, but to see what the methods were that did it and to fill in holes if necessary. I look at my role as being able to understand the motivations and terminology of one group of specialists and boil it down to a certain extent so that people in other parts of the field can use it. I try to listen to the theoreticians and select what they have done that is important to the programmer on the street; to remove technical jargon when possible.

But I have never been good at any kind of a role that would be making policy, or advising people on strategies, or what to do. I have always been best at refining things that are there and bringing order out of chaos. I sometimes raise new ideas that might stimulate people, but not really in a way that would be in any way controlling the flow. The only time I have ever advocated something strongly was with literate programming; but I do this always with the caveat that it works for me, not knowing if it would work for anybody else.

When I work with a system that I have created myself, I can always change it if I don't like it. But everybody who works with my system has to work with what I give them. So I am not able to judge my own stuff impartially. So anyway, I have always felt bad about if anyone says, 'Don, please forecast the future,'...

[Sep 07, 2019] How to Debug Bash Scripts by Mike Ward

Sep 07, 2019 | linuxconfig.org

05 September 2019

... ... ...

How to use other Bash options

The Bash options for debugging are turned off by default, but once they are turned on by using the set command, they stay on until explicitly turned off. If you are not sure which options are enabled, you can examine the $- variable to see which option flags are currently set.

$ echo $-
himBHs
$ set -xv && echo $-
himvxBHs

There is another useful switch we can use to help us find variables referenced without having any value set. This is the -u switch, and just like -x and -v it can also be used on the command line, as we see in the following example:

[Screenshot: setting the -u option at the command line]

We mistakenly assigned a value of 7 to the variable called "level", then tried to echo a variable named "score", which simply resulted in printing nothing at all to the screen. Absolutely no debug information was given. Setting our -u switch allows us to see a specific error message, "score: unbound variable", that indicates exactly what went wrong.
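The same experiment, reproduced at the command line, looks roughly like this:

$ level=7
$ echo $score

$ set -u
$ echo $score
bash: score: unbound variable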

We can use those options in short Bash scripts to give us debug information to identify problems that do not otherwise trigger feedback from the Bash interpreter. Let's walk through a couple of examples.

#!/bin/bash

read -p "Path to be added: " $path

if [ "$path" = "/home/mike/bin" ]; then
        echo $path >> $PATH
        echo "new path: $PATH"
else
        echo "did not modify PATH"
fi
[Screenshot: results from the addpath script, using the -x option when running the script]

In the example above we run the addpath script normally and it simply does not modify our PATH . It does not give us any indication of why or clues to mistakes made. Running it again using the -x option clearly shows us that the left side of our comparison is an empty string. $path is an empty string because we accidentally put a dollar sign in front of "path" in our read statement. Sometimes we look right at a mistake like this and it doesn't look wrong until we get a clue and think, "Why is $path evaluated to an empty string?"

Looking at this next example, we also get no indication of an error from the interpreter. We only get one value printed per line instead of two. This is not an error that will halt execution of the script, so we're left to simply wonder without being given any clues. Using the -u switch, we immediately get a notification that our variable j is not bound to a value. So these are real time savers when we make mistakes that do not result in actual errors from the Bash interpreter's point of view.

#!/bin/bash

for i in 1 2 3
do
        echo $i $j
done
[Screenshot: results from the count.sh script, using the -u option from the command line]

Now surely you are thinking that sounds fine, but we seldom need help debugging mistakes made in one-liners at the command line or in short scripts like these. We typically struggle with debugging when we deal with longer and more complicated scripts, and we rarely need to set these options and leave them set while we run multiple scripts. Setting -xv options and then running a more complex script will often add confusion by doubling or tripling the amount of output generated.

Fortunately we can use these options in a more precise way by placing them inside our scripts. Instead of explicitly invoking a Bash shell with an option from the command line, we can set an option by adding it to the shebang line instead.

#!/bin/bash -x

This will set the -x option for the entire file or until it is unset during the script execution, allowing you to simply run the script by typing the filename instead of passing it to Bash as a parameter. A long script or one that has a lot of output will still become unwieldy using this technique, however, so let's look at a more specific way to use options.




For a more targeted approach, surround only the suspicious blocks of code with the options you want. This approach is great for scripts that generate menus or detailed output, and it is accomplished by using the set keyword with plus or minus once again.

#!/bin/bash

read -p "Path to be added: " $path

set -xv
if [ "$path" = "/home/mike/bin" ]; then
        echo $path >> $PATH
        echo "new path: $PATH"
else
        echo "did not modify PATH"
fi
set +xv
[Screenshot: results from the addpath script, wrapping options around a block of code]

We surrounded only the blocks of code we suspect in order to reduce the output, making our task easier in the process. Notice we turn on our options only for the code block containing our if-then-else statement, then turn off the option(s) at the end of the suspect block. We can turn these options on and off multiple times in a single script if we can't narrow down the suspicious areas, or if we want to evaluate the state of variables at various points as we progress through the script. There is no need to turn off an option if we want it to continue for the remainder of the script execution.

For completeness' sake, we should also mention that there are debuggers written by third parties that will allow us to step through the code execution line by line. You might want to investigate these tools, but most people find that they are not actually needed.

As seasoned programmers will suggest, if your code is too complex to isolate suspicious blocks with these options then the real problem is that the code should be refactored. Overly complex code means bugs can be difficult to detect and maintenance can be time consuming and costly.

One final thing to mention regarding Bash debugging options is that a file globbing option also exists and is set with -f . Setting this option will turn off globbing (expansion of wildcards to generate file names) while it is enabled. This -f option can be used as a switch at the command line with bash, after the shebang in a file or, as in this example, to surround a block of code.

#!/bin/bash

echo "ignore fileglobbing option turned off"
ls *

echo "ignore file globbing option set"
set -f
ls *
set +f
[Screenshot: results from the -f option, turning off file globbing]

How to use trap to help debug

There are more involved techniques worth considering if your scripts are complicated, including using an assert function as mentioned earlier. One such method to keep in mind is the use of trap. Shell scripts allow us to trap signals and do something at that point.

A simple but useful example you can use in your Bash scripts is to trap on EXIT .

#!/bin/bash

trap 'echo score is $score, status is $status' EXIT

if [ -z "$1" ]; then
        status="default"
else
        status="$1"
fi

score=0
if [ "${USER}" = 'superman' ]; then
        score=99
elif [ $# -gt 1 ]; then
        score="$2"
fi
[Screenshot: using trap EXIT to help debug your script]



As you can see, just dumping the current values of variables to the screen can be useful to show where your logic is failing. The EXIT signal obviously does not need an explicit exit statement to be generated; in this case the echo statement is executed when the end of the script is reached.

Another useful trap to use with Bash scripts is DEBUG . This happens after every statement, so it can be used as a brute force way to show the values of variables at each step in the script execution.

#!/bin/bash

trap 'echo "line ${LINENO}: score is $score"' DEBUG

score=0

if [ "${USER}" = "mike" ]; then
        let "score += 1"
fi

let "score += 1"

if [ "" = "7" ]; then
        score=7
fi
exit 0
[Screenshot: using trap DEBUG to help debug your script]

Conclusion

When you notice your Bash script not behaving as expected and the reason is not clear to you, consider what information would be useful to help you identify the cause, then use the most comfortable tools available to help you pinpoint the issue. The xtrace option -x is easy to use and probably the most useful of the options presented here, so consider trying it out next time you're faced with a script that's not doing what you thought it would.

[Sep 06, 2019] Knuth: No, I stopped going to conferences. It was too discouraging. Computer programming keeps getting harder because more stuff is discovered

Sep 06, 2019 | conservancy.umn.edu

Knuth: No, I stopped going to conferences. It was too discouraging. Computer programming keeps getting harder because more stuff is discovered. I can cope with learning about one new technique per day, but I can't take ten in a day all at once. So conferences are depressing; it means I have so much more work to do. If I hide myself from the truth I am much happier.

[Sep 06, 2019] How TAOCP was hatched

Notable quotes:
"... Also, Addison-Wesley was the people who were asking me to do this book; my favorite textbooks had been published by Addison Wesley. They had done the books that I loved the most as a student. For them to come to me and say, "Would you write a book for us?", and here I am just a secondyear gradate student -- this was a thrill. ..."
"... But in those days, The Art of Computer Programming was very important because I'm thinking of the aesthetical: the whole question of writing programs as something that has artistic aspects in all senses of the word. The one idea is "art" which means artificial, and the other "art" means fine art. All these are long stories, but I've got to cover it fairly quickly. ..."
Sep 06, 2019 | archive.computerhistory.org

Knuth: This is, of course, really the story of my life, because I hope to live long enough to finish it. But I may not, because it's turned out to be such a huge project. I got married in the summer of 1961, after my first year of graduate school. My wife finished college, and I could use the money I had made -- the $5000 on the compiler -- to finance a trip to Europe for our honeymoon.

We had four months of wedded bliss in Southern California, and then a man from Addison-Wesley came to visit me and said "Don, we would like you to write a book about how to write compilers."

The more I thought about it, I decided "Oh yes, I've got this book inside of me."

I sketched out that day -- I still have the sheet of tablet paper on which I wrote -- I sketched out 12 chapters that I thought ought to be in such a book. I told Jill, my wife, "I think I'm going to write a book."

As I say, we had four months of bliss, because the rest of our marriage has all been devoted to this book. Well, we still have had happiness. But really, I wake up every morning and I still haven't finished the book. So I try to -- I have to -- organize the rest of my life around this, as one main unifying theme. The book was supposed to be about how to write a compiler. They had heard about me from one of their editorial advisors, that I knew something about how to do this. The idea appealed to me for two main reasons. One is that I did enjoy writing. In high school I had been editor of the weekly paper. In college I was editor of the science magazine, and I worked on the campus paper as copy editor. And, as I told you, I wrote the manual for that compiler that we wrote. I enjoyed writing, number one.

Also, Addison-Wesley was the people who were asking me to do this book; my favorite textbooks had been published by Addison Wesley. They had done the books that I loved the most as a student. For them to come to me and say, "Would you write a book for us?", and here I am just a second-year graduate student -- this was a thrill.

Another very important reason at the time was that I knew that there was a great need for a book about compilers, because there were a lot of people who even in 1962 -- this was January of 1962 -- were starting to rediscover the wheel. The knowledge was out there, but it hadn't been explained. The people who had discovered it, though, were scattered all over the world and they didn't know of each other's work either, very much. I had been following it. Everybody I could think of who could write a book about compilers, as far as I could see, they would only give a piece of the fabric. They would slant it to their own view of it. There might be four people who could write about it, but they would write four different books. I could present all four of their viewpoints in what I would think was a balanced way, without any axe to grind, without slanting it towards something that I thought would be misleading to the compiler writer for the future. I considered myself as a journalist, essentially. I could be the expositor, the tech writer, that could do the job that was needed in order to take the work of these brilliant people and make it accessible to the world. That was my motivation. Now, I didn't have much time to spend on it then, I just had this page of paper with 12 chapter headings on it. That's all I could do while I'm a consultant at Burroughs and doing my graduate work. I signed a contract, but they said "We know it'll take you a while." I didn't really begin to have much time to work on it until 1963, my third year of graduate school, as I'm already finishing up on my thesis. In the summer of '62, I guess I should mention, I wrote another compiler. This was for Univac; it was a FORTRAN compiler. I spent the summer, I sold my soul to the devil, I guess you say, for three months in the summer of 1962 to write a FORTRAN compiler. I believe that the salary for that was $15,000, which was much more than an assistant professor. I think assistant professors were getting eight or nine thousand in those days.

Feigenbaum: Well, when I started in 1960 at [University of California] Berkeley, I was getting $7,600 for the nine-month year.

Knuth: Yeah, so you see it. I got $15,000 for a summer job in 1962 writing a FORTRAN compiler. One day during that summer I was writing the part of the compiler that looks up identifiers in a hash table. The method that we used is called linear probing. Basically you take the variable name that you want to look up, you scramble it, like you square it or something like this, and that gives you a number between one and, well in those days it would have been between 1 and 1000, and then you look there. If you find it, good; if you don't find it, go to the next place and keep on going until you either get to an empty place, or you find the number you're looking for. It's called linear probing. There was a rumor that one of Professor Feller's students at Princeton had tried to figure out how fast linear probing works and was unable to succeed. This was a new thing for me. It was a case where I was doing programming, but I also had a mathematical problem that would go into my other [job]. My winter job was being a math student, my summer job was writing compilers. There was no mix. These worlds did not intersect at all in my life at that point. So I spent one day during the summer while writing the compiler looking at the mathematics of how fast does linear probing work. I got lucky, and I solved the problem. I figured out some math, and I kept two or three sheets of paper with me and I typed it up. ["Notes on 'Open' Addressing', 7/22/63] I guess that's on the internet now, because this became really the genesis of my main research work, which developed not to be working on compilers, but to be working on what they call analysis of algorithms, which is, have a computer method and find out how good is it quantitatively. I can say, if I got so many things to look up in the table, how long is linear probing going to take. It dawned on me that this was just one of many algorithms that would be important, and each one would lead to a fascinating mathematical problem. This was easily a good lifetime source of rich problems to work on. Here I am then, in the middle of 1962, writing this FORTRAN compiler, and I had one day to do the research and mathematics that changed my life for my future research trends. But now I've gotten off the topic of what your original question was.

Feigenbaum: We were talking about sort of the.. You talked about the embryo of The Art of Computing. The compiler book morphed into The Art of Computer Programming, which became a seven-volume plan.

Knuth: Exactly. Anyway, I'm working on a compiler and I'm thinking about this. But now I'm starting, after I finish this summer job, then I began to do things that were going to be relating to the book. One of the things I knew I had to have in the book was an artificial machine, because I'm writing a compiler book but machines are changing faster than I can write books. I have to have a machine that I'm totally in control of. I invented this machine called MIX, which was typical of the computers of 1962.

In 1963 I wrote a simulator for MIX so that I could write sample programs for it, and I taught a class at Caltech on how to write programs in assembly language for this hypothetical computer. Then I started writing the parts that dealt with sorting problems and searching problems, like the linear probing idea. I began to write those parts, which are part of a compiler, of the book. I had several hundred pages of notes gathering for those chapters for The Art of Computer Programming. Before I graduated, I've already done quite a bit of writing on The Art of Computer Programming.

I met George Forsythe about this time. George was the man who inspired both of us [Knuth and Feigenbaum] to come to Stanford during the '60s. George came down to Southern California for a talk, and he said, "Come up to Stanford. How about joining our faculty?" I said "Oh no, I can't do that. I just got married, and I've got to finish this book first." I said, "I think I'll finish the book next year, and then I can come up [and] start thinking about the rest of my life, but I want to get my book done before my son is born." Well, John is now 40-some years old and I'm not done with the book. Part of my lack of expertise is any good estimation procedure as to how long projects are going to take. I way underestimated how much needed to be written about in this book. Anyway, I started writing the manuscript, and I went merrily along writing pages of things that I thought really needed to be said. Of course, it didn't take long before I had started to discover a few things of my own that weren't in any of the existing literature. I did have an axe to grind. The message that I was presenting was in fact not going to be unbiased at all. It was going to be based on my own particular slant on stuff, and that original reason for why I should write the book became impossible to sustain. But the fact that I had worked on linear probing and solved the problem gave me a new unifying theme for the book. I was going to base it around this idea of analyzing algorithms, and have some quantitative ideas about how good methods were. Not just that they worked, but that they worked well: this method worked 3 times better than this method, or 3.1 times better than this method. Also, at this time I was learning mathematical techniques that I had never been taught in school. I found they were out there, but they just hadn't been emphasized openly, about how to solve problems of this kind.

So my book would also present a different kind of mathematics than was common in the curriculum at the time, that was very relevant to analysis of algorithm. I went to the publishers, I went to Addison Wesley, and said "How about changing the title of the book from 'The Art of Computer Programming' to 'The Analysis of Algorithms'." They said that will never sell; their focus group couldn't buy that one. I'm glad they stuck to the original title, although I'm also glad to see that several books have now come out called "The Analysis of Algorithms", 20 years down the line.

But in those days, The Art of Computer Programming was very important because I'm thinking of the aesthetical: the whole question of writing programs as something that has artistic aspects in all senses of the word. The one idea is "art" which means artificial, and the other "art" means fine art. All these are long stories, but I've got to cover it fairly quickly.

I've got The Art of Computer Programming started out, and I'm working on my 12 chapters. I finish a rough draft of all 12 chapters by, I think it was like 1965. I've got 3,000 pages of notes, including a very good example of what you mentioned about seeing holes in the fabric. One of the most important chapters in the book is parsing: going from somebody's algebraic formula and figuring out the structure of the formula. Just the way I had done in seventh grade finding the structure of English sentences, I had to do this with mathematical sentences.

Chapter ten is all about parsing of context-free language, [which] is what we called it at the time. I covered what people had published about context-free languages and parsing. I got to the end of the chapter and I said, well, you can combine these ideas and these ideas, and all of a sudden you get a unifying thing which goes all the way to the limit. These other ideas had sort of gone partway there. They would say "Oh, if a grammar satisfies this condition, I can do it efficiently." "If a grammar satisfies this condition, I can do it efficiently." But now, all of a sudden, I saw there was a way to say I can find the most general condition that can be done efficiently without looking ahead to the end of the sentence. That you could make a decision on the fly, reading from left to right, about the structure of the thing. That was just a natural outgrowth of seeing the different pieces of the fabric that other people had put together, and writing it into a chapter for the first time. But I felt that this general concept, well, I didn't feel that I had surrounded the concept. I knew that I had it, and I could prove it, and I could check it, but I couldn't really intuit it all in my head. I knew it was right, but it was too hard for me, really, to explain it well.

So I didn't put in The Art of Computer Programming. I thought it was beyond the scope of my book. Textbooks don't have to cover everything when you get to the harder things; then you have to go to the literature. My idea at that time [is] I'm writing this book and I'm thinking it's going to be published very soon, so any little things I discover and put in the book I didn't bother to write a paper and publish in the journal because I figure it'll be in my book pretty soon anyway. Computer science is changing so fast, my book is bound to be obsolete.

It takes a year for it to go through editing, and people drawing the illustrations, and then they have to print it and bind it and so on. I have to be a little bit ahead of the state-of-the-art if my book isn't going to be obsolete when it comes out. So I kept most of the stuff to myself that I had, these little ideas I had been coming up with. But when I got to this idea of left-to-right parsing, I said "Well here's something I don't really understand very well. I'll publish this, let other people figure out what it is, and then they can tell me what I should have said." I published that paper I believe in 1965, at the end of finishing my draft of the chapter, which didn't get as far as that story, LR(k). Well now, textbooks of computer science start with LR(k) and take off from there. But I want to give you an idea of

[Sep 05, 2019] linux - Directory bookmarking for bash - Stack Overflow

Notable quotes:
"... May you wan't to change this alias to something which fits your needs ..."
Jul 29, 2017 | stackoverflow.com

getmizanur , asked Sep 10 '11 at 20:35

Is there any directory bookmarking utility for bash to allow moving around faster on the command line?

UPDATE

Thanks guys for the feedback; however, I created my own simple shell script (feel free to modify/expand it):

function cdb() {
    USAGE="Usage: cdb [-c|-g|-d|-l] [bookmark]" ;
    if  [ ! -e ~/.cd_bookmarks ] ; then
        mkdir ~/.cd_bookmarks
    fi

    case $1 in
        # create bookmark
        -c) shift
            if [ ! -f ~/.cd_bookmarks/$1 ] ; then
                echo "cd `pwd`" > ~/.cd_bookmarks/"$1" ;
            else
                echo "Try again! Looks like there is already a bookmark '$1'"
            fi
            ;;
        # goto bookmark
        -g) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then 
                source ~/.cd_bookmarks/"$1"
            else
                echo "Mmm...looks like your bookmark has spontaneously combusted. What I mean to say is that your bookmark does not exist." ;
            fi
            ;;
        # delete bookmark
        -d) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then 
                rm ~/.cd_bookmarks/"$1" ;
            else
                echo "Oops, forgot to specify the bookmark" ;
            fi    
            ;;
        # list bookmarks
        -l) shift
            ls -l ~/.cd_bookmarks/ ;
            ;;
         *) echo "$USAGE" ;
            ;;
    esac
}

INSTALL

1./ create a file ~/.cdb and copy the above script into it.

2./ in your ~/.bashrc add the following

if [ -f ~/.cdb ]; then
    source ~/.cdb
fi

3./ restart your bash session

USAGE

1./ to create a bookmark

$cd my_project
$cdb -c project1

2./ to goto a bookmark

$cdb -g project1

3./ to list bookmarks

$cdb -l

4./ to delete a bookmark

$cdb -d project1

5./ where are all my bookmarks stored?

$cd ~/.cd_bookmarks

Fredrik Pihl , answered Sep 10 '11 at 20:47

Also, have a look at CDPATH

A colon-separated list of search paths available to the cd command, similar in function to the $PATH variable for binaries. The $CDPATH variable may be set in the local ~/.bashrc file.

bash$ cd bash-doc
bash: cd: bash-doc: No such file or directory

bash$ CDPATH=/usr/share/doc
bash$ cd bash-doc
/usr/share/doc/bash-doc

bash$ echo $PWD
/usr/share/doc/bash-doc

and

cd -

It's the command-line equivalent of the back button (takes you to the previous directory you were in).
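For example, cd - also prints the directory it switches back to:

bash$ cd /usr/share/doc
bash$ cd /tmp
bash$ cd -
/usr/share/doc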

ajreal , answered Sep 10 '11 at 20:41

In bash script/command,
you can use pushd and popd

pushd

Save and then change the current directory. With no arguments, pushd exchanges the top two directories.

Usage

cd /abc
pushd /xxx    <-- push /abc onto the directory stack and cd to /xxx
pushd /zzz
pushd +1      <-- cd /xxx

popd removes the top entry from the directory stack and changes back to the directory that is then on top (the reverse operation).
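A short illustration of the stack in action:

cd /abc
pushd /xxx    <-- stack is now: /xxx /abc
pushd /zzz    <-- stack is now: /zzz /xxx /abc
popd          <-- removes /zzz from the stack and returns to /xxx
dirs -v       <-- list what is left on the stack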

fgm , answered Sep 11 '11 at 8:28

bookmarks.sh provides a bookmark management system for Bash version 4.0+. It can also use a Midnight Commander hotlist.

Dmitry Frank , answered Jun 16 '15 at 10:22

Thanks for sharing your solution, and I'd like to share mine as well, which I find more useful than anything else I've come across before.

The engine is a great, universal tool: fzf, the command-line fuzzy finder by Junegunn.

It primarily allows you to "fuzzy-find" files in a number of ways, but it also allows you to feed arbitrary text data to it and filter this data. So, the shortcuts idea is simple: all we need is to maintain a file with paths (which are shortcuts), and fuzzy-filter this file. Here's how it looks: we type the cdg command (from "cd global", if you like), get a list of our bookmarks, pick the needed one in just a few keystrokes, and press Enter. The working directory is changed to the picked item:

It is extremely fast and convenient: usually I just type 3-4 letters of the needed item, and all others are already filtered out. Additionally, of course, we can move through the list with arrow keys or with vim-like keybindings Ctrl+j / Ctrl+k .

Article with details: Fuzzy shortcuts for your shell .
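A minimal sketch of such a cdg function, assuming fzf is installed and the bookmarks are kept one absolute path per line in a file (~/.cdg_paths is just an illustrative name):

# Hypothetical bookmark file: one absolute path per line.
cdg() {
    local dest
    dest=$(fzf < ~/.cdg_paths) && cd "$dest"
}

Adding the current directory to the list is then just pwd >> ~/.cdg_paths.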

It is possible to use it for GUI applications as well (via xterm): I use that for my GUI file manager Double Commander . I have plans to write an article about this use case, too.

return42 , answered Feb 6 '15 at 11:56

Inspired by the question and answers here, I added the lines below to my ~/.bashrc file.

With this you have a favdir command (function) to manage your favorites and a autocompletion function to select an item from these favorites.

# ---------
# Favorites
# ---------

__favdirs_storage=~/.favdirs
__favdirs=( "$HOME" )

containsElement () {
    local e
    for e in "${@:2}"; do [[ "$e" == "$1" ]] && return 0; done
    return 1
}

function favdirs() {

    local cur
    local IFS
    local GLOBIGNORE

    case $1 in
        list)
            echo "favorite folders ..."
            printf -- ' - %s\n' "${__favdirs[@]}"
            ;;
        load)
            if [[ ! -e $__favdirs_storage ]] ; then
                favdirs save
            fi
            # mapfile requires bash 4 / my OS-X bash vers. is 3.2.53 (from 2007 !!?!).
            # mapfile -t __favdirs < $__favdirs_storage
            IFS=$'\r\n' GLOBIGNORE='*' __favdirs=($(< $__favdirs_storage))
            ;;
        save)
            printf -- '%s\n' "${__favdirs[@]}" > $__favdirs_storage
            ;;
        add)
            cur=${2-$(pwd)}
            favdirs load
            if containsElement "$cur" "${__favdirs[@]}" ; then
                echo "'$cur' allready exists in favorites"
            else
                __favdirs+=( "$cur" )
                favdirs save
                echo "'$cur' added to favorites"
            fi
            ;;
        del)
            cur=${2-$(pwd)}
            favdirs load
            local i=0
            for fav in "${__favdirs[@]}"; do
                if [ "$fav" = "$cur" ]; then
                    echo "delete '$cur' from favorites"
                    unset __favdirs[$i]
                    favdirs save
                    break
                fi
                let i++
            done
            ;;
        *)
            echo "Manage favorite folders."
            echo ""
            echo "usage: favdirs [ list | load | save | add | del ]"
            echo ""
            echo "  list : list favorite folders"
            echo "  load : load favorite folders from $__favdirs_storage"
            echo "  save : save favorite directories to $__favdirs_storage"
            echo "  add  : add directory to favorites [default pwd $(pwd)]."
            echo "  del  : delete directory from favorites [default pwd $(pwd)]."
    esac
} && favdirs load

function __favdirs_compl_command() {
    COMPREPLY=( $( compgen -W "list load save add del" -- ${COMP_WORDS[COMP_CWORD]}))
} && complete -o default -F __favdirs_compl_command favdirs

function __favdirs_compl() {
    local IFS=$'\n'
    COMPREPLY=( $( compgen -W "${__favdirs[*]}" -- ${COMP_WORDS[COMP_CWORD]}))
}

alias _cd='cd'
complete -F __favdirs_compl _cd

Within the last two lines, an alias to change the current directory (with autocompletion) is created. With this alias ( _cd ) you are able to change to one of your favorite directories. You may want to change this alias to something which fits your needs.

With the function favdirs you can manage your favorites (see usage).

$ favdirs 
Manage favorite folders.

usage: favdirs [ list | load | save | add | del ]

  list : list favorite folders
  load : load favorite folders from ~/.favdirs
  save : save favorite directories to ~/.favdirs
  add  : add directory to favorites [default pwd /tmp ].
  del  : delete directory from favorites [default pwd /tmp ].

Zied , answered Mar 12 '14 at 9:53

Yes, there is DirB: Directory Bookmarks for Bash, well explained in this Linux Journal article.

An example from the article:

% cd ~/Desktop
% s d       # save(bookmark) ~/Desktop as d
% cd /tmp   # go somewhere
% pwd
/tmp
% g d       # go to the desktop
% pwd
/home/Desktop

Al Conrad , answered Sep 4 '15 at 16:10

@getmizanur I used your cdb script. I enhanced it slightly by adding bookmarks tab completion. Here's my version of your cdb script.
_cdb()
{
    local _script_commands=$(ls -1 ~/.cd_bookmarks/)
    local cur=${COMP_WORDS[COMP_CWORD]}

    COMPREPLY=( $(compgen -W "${_script_commands}" -- $cur) )
}
complete -F _cdb cdb


function cdb() {

    local USAGE="Usage: cdb [-h|-c|-d|-g|-l|-s] [bookmark]\n
    \t[-h or no args] - prints usage help\n
    \t[-c bookmark] - create bookmark\n
    \t[-d bookmark] - delete bookmark\n
    \t[-g bookmark] - goto bookmark\n
    \t[-l] - list bookmarks\n
    \t[-s bookmark] - show bookmark location\n
    \t[bookmark] - same as [-g bookmark]\n
    Press tab for bookmark completion.\n"        

    if  [ ! -e ~/.cd_bookmarks ] ; then
        mkdir ~/.cd_bookmarks
    fi

    case $1 in
        # create bookmark
        -c) shift
            if [ ! -f ~/.cd_bookmarks/$1 ] ; then
                echo "cd `pwd`" > ~/.cd_bookmarks/"$1"
                complete -F _cdb cdb
            else
                echo "Try again! Looks like there is already a bookmark '$1'"
            fi
            ;;
        # goto bookmark
        -g) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then
                source ~/.cd_bookmarks/"$1"
            else
                echo "Mmm...looks like your bookmark has spontaneously combusted. What I mean to say is that your bookmark does not exist." ;
            fi
            ;;
        # show bookmark
        -s) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then
                cat ~/.cd_bookmarks/"$1"
            else
                echo "Mmm...looks like your bookmark has spontaneously combusted. What I mean to say is that your bookmark does not exist." ;
            fi
            ;;
        # delete bookmark
        -d) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then
                rm ~/.cd_bookmarks/"$1" ;
            else
                echo "Oops, forgot to specify the bookmark" ;
            fi
            ;;
        # list bookmarks
        -l) shift
            ls -1 ~/.cd_bookmarks/ ;
            ;;
        -h) echo -e $USAGE ;
            ;;
        # goto bookmark by default
        *)
            if [ -z "$1" ] ; then
                echo -e $USAGE
            elif [ -f ~/.cd_bookmarks/$1 ] ; then
                source ~/.cd_bookmarks/"$1"
            else
                echo "Mmm...looks like your bookmark has spontaneously combusted. What I mean to say is that your bookmark does not exist." ;
            fi
            ;;
    esac
}

tobimensch , answered Jun 5 '16 at 21:31

Yes, one that I have written, that is called anc.

https://github.com/tobimensch/anc

Anc stands for anchor, but anc's anchors are really just bookmarks.

It's designed for ease of use and there are multiple ways of navigating: by giving a text pattern, using numbers, interactively, by going back, or using [TAB] completion.

I'm actively working on it and open to input on how to make it better.

Allow me to paste the examples from anc's github page here:

# make the current directory the default anchor:
$ anc s

# go to /etc, then /, then /usr/local and then back to the default anchor:
$ cd /etc; cd ..; cd usr/local; anc

# go back to /usr/local :
$ anc b

# add another anchor:
$ anc a $HOME/test

# view the list of anchors (the default one has the asterisk):
$ anc l
(0) /path/to/first/anchor *
(1) /home/usr/test

# jump to the anchor we just added:
# by using its anchor number
$ anc 1
# or by jumping to the last anchor in the list
$ anc -1

# add multiple anchors:
$ anc a $HOME/projects/first $HOME/projects/second $HOME/documents/first

# use text matching to jump to $HOME/projects/first
$ anc pro fir

# use text matching to jump to $HOME/documents/first
$ anc doc fir

# add anchor and jump to it using an absolute path
$ anc /etc
# is the same as
$ anc a /etc; anc -1

# add anchor and jump to it using a relative path
$ anc ./X11 #note that "./" is required for relative paths
# is the same as
$ anc a X11; anc -1

# using wildcards you can add many anchors at once
$ anc a $HOME/projects/*

# use shell completion to see a list of matching anchors
# and select the one you want to jump to directly
$ anc pro[TAB]

Cảnh Toàn Nguyễn , answered Feb 20 at 5:41

Bashmarks is an amazingly simple and intuitive utility. In short, after installation, the usage is:
s <bookmark_name> - Saves the current directory as "bookmark_name"
g <bookmark_name> - Goes (cd) to the directory associated with "bookmark_name"
p <bookmark_name> - Prints the directory associated with "bookmark_name"
d <bookmark_name> - Deletes the bookmark
l                 - Lists all available bookmarks

,

For short term shortcuts, I have the following in my respective init script (Sorry. I can't find the source right now and didn't bother then):
function b() {
    alias $1="cd `pwd -P`"
}

Usage:

In any directory that you want to bookmark type

b THEDIR # <THEDIR> being the name of your 'bookmark'

It will create an alias to cd (back) to here.

To return to a 'bookmarked' dir type

THEDIR

It will run the stored alias and cd back there.

Caution: Use only if you understand that this might override existing shell aliases and what that means.

[Sep 04, 2019] Basic Trap for File Cleanup

Sep 04, 2019 | www.putorius.net

Basic Trap for File Cleanup

Using a trap to clean up is simple enough. Here is an example of using trap to clean up a temporary file on exit of the script.

#!/bin/bash
trap "rm -f /tmp/output.txt" EXIT
yum -y update > /tmp/output.txt
if grep -qi "kernel" /tmp/output.txt; then
     mail -s "KERNEL UPDATED" user@example.com < /tmp/output.txt
fi

NOTE: It is important that the trap statement be placed at the beginning of the script to function properly. Any commands above the trap can exit and not be caught in the trap.

Now if the script exits for any reason, it will still run the rm command to delete the file. Here is an example of me sending SIGINT (CTRL+C) while the script was running.

# ./test.sh
 ^Cremoved '/tmp/output.txt'

NOTE: I added verbose ( -v ) output to the rm command so it prints "removed". The ^C signifies where I hit CTRL+C to send SIGINT.

This is a much cleaner and safer way to ensure the cleanup occurs when the script exits. Using EXIT ( 0 ) instead of a single defined signal (e.g. SIGINT – 2) ensures the cleanup happens on any exit, even successful completion of the script.
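A common variation on the same idea, sketched here on the assumption that mktemp is available, creates the temporary file with a unique name and still removes it on EXIT:

#!/bin/bash
tmpfile=$(mktemp) || exit 1
trap 'rm -f "$tmpfile"' EXIT
yum -y update > "$tmpfile"
if grep -qi "kernel" "$tmpfile"; then
     mail -s "KERNEL UPDATED" user@example.com < "$tmpfile"
fi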

[Sep 04, 2019] Exec - Process Replacement Redirection in Bash by Steven Vona

Sep 02, 2019 | www.putorius.net

The Linux exec command is a bash builtin and a very interesting utility. It is not something most people who are new to Linux know. Most seasoned admins understand it but only use it occasionally. If you are a developer, programmer or DevOps engineer it is probably something you use more often. Let's take a deep dive into the builtin exec command, what it does and how to use it.

Table of Contents

Basics of the Sub-Shell

In order to understand the exec command, you need a fundamental understanding of how sub-shells work.

... ... ...

What the Exec Command Does

In its most basic function, the exec command changes the default behavior of creating a sub-shell to run a command. If you run exec followed by a command, that command will REPLACE the original process; it will NOT create a sub-shell.

An additional feature of the exec command is redirection and manipulation of file descriptors. Explaining redirection and file descriptors is outside the scope of this tutorial. If these are new to you please read " Linux IO, Standard Streams and Redirection " to get acquainted with these terms and functions.

In the following sections we will expand on both of these functions and try to demonstrate how to use them.

How to Use the Exec Command with Examples

Let's look at some examples of how to use the exec command and its options.

Basic Exec Command Usage – Replacement of Process

If you call exec and supply a command without any options, it simply replaces the shell with command .

Let's run an experiment. First, I ran the ps command to find the process id of my second terminal window. In this case it was 17524. I then ran "exec tail" in that second terminal and checked the ps command again. If you look at the screenshot below, you will see the tail process replaced the bash process (same process ID).

[Screenshot 3: the exec command replacing the parent process instead of creating a sub-shell]

Since the tail command replaced the bash shell process, the shell will close when the tail command terminates.
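A rough sketch of that experiment on the command line (17524 is the example process id from the screenshot):

# terminal 2: note the PID of this shell, then replace it with tail
echo $$                       # e.g. 17524
exec tail -f /etc/redhat-release

# terminal 1: the same PID now shows tail instead of bash
ps -p 17524 -o pid,cmd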

Exec Command Options

If the -l option is supplied, exec adds a dash at the beginning of the first (zeroth) argument given. So if we ran the following command:

exec -l tail -f /etc/redhat-release

In the process list, the command would then show up with a leading dash at the beginning of the CMD column (i.e. "-tail" rather than "tail").

The -c option causes the supplied command to run with an empty environment. Environmental variables like PATH are cleared before the command is run. Let's try an experiment. We know that the printenv command prints all the settings for a user's environment. So here we will open a new bash process, run the printenv command to show we have some variables set. We will then run printenv again but this time with the exec -c option.
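A rough sketch of that experiment (using a throwaway shell, since exec replaces the current one):

bash                    # start a new shell that exec can replace
printenv | head -3      # the usual variables (PATH, HOME, ...) are present
exec -c printenv        # cleared environment: printenv prints nothing and the shell exits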

[Animation: exec command output with the -c option supplied]

In the example above you can see that an empty environment is used when using exec with the -c option. This is why there was no output from the printenv command when run with exec.

The last option, -a [name], will pass name as the first argument to command . The command will still run as expected, but the name of the process will change. In this next example we opened a second terminal and ran the following command:

exec -a PUTORIUS tail -f /etc/redhat-release

Here is the process list showing the results of the above command:

[Screenshot 5: the exec command using the -a option to change the process name]

As you can see, exec passed PUTORIUS as first argument to command , therefore it shows in the process list with that name.

Using the Exec Command for Redirection & File Descriptor Manipulation

The exec command is often used for redirection. When a file descriptor is redirected with exec it affects the current shell. It will exist for the life of the shell or until it is explicitly stopped.

If no command is specified, redirections may be used to affect the current shell environment.

– Bash Manual

Here are some examples of how to use exec for redirection and manipulating file descriptors. As we stated above, a deep dive into redirection and file descriptors is outside the scope of this tutorial. Please read " Linux IO, Standard Streams and Redirection " for a good primer and see the resources section for more information.

Redirect all standard output (STDOUT) to a file:
exec >file

In the example animation below, we use exec to redirect all standard output to a file. We then enter some commands that should generate some output. We then use exec to redirect STDOUT to /dev/tty to restore standard output to the terminal. This effectively stops the redirection. Using the cat command we can see that the file contains all the redirected output.
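The sequence it shows looks roughly like this (/tmp/capture.txt is just an illustrative file name):

exec > /tmp/capture.txt   # from now on, this shell's STDOUT goes to the file
date
uname -r
exec > /dev/tty           # restore STDOUT to the terminal
cat /tmp/capture.txt      # the output of date and uname is in the file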

[Screenshot: using exec to redirect all standard output to a file]
Open a file as file descriptor 6 for writing:
exec 6> file2write
Open file as file descriptor 8 for reading:
exec 8< file2read
Copy file descriptor 5 to file descriptor 7:
exec 7<&5
Close file descriptor 8:
exec 8<&-
Conclusion

In this article we covered the basics of the exec command. We discussed how to use it for process replacement, redirection and file descriptor manipulation.

In the past I have seen exec used in some interesting ways. It is often used as a wrapper script for starting other binaries. Using process replacement you can call a binary and when it takes over there is no trace of the original wrapper script in the process table or memory. I have also seen many System Administrators use exec when transferring work from one script to another. If you call a script inside of another script the original process stays open as a parent. You can use exec to replace that original script.

I am sure there are people out there using exec in some interesting ways. I would love to hear your experiences with exec. Please feel free to leave a comment below with anything on your mind.

Resources

[Sep 03, 2019] bash - How to convert strings like 19-FEB-12 to epoch date in UNIX - Stack Overflow

Feb 11, 2013 | stackoverflow.com

hellish ,Feb 11, 2013 at 3:45

In UNIX how to convert to epoch milliseconds date strings like:
19-FEB-12
16-FEB-12
05-AUG-09

I need this to compare these dates with the current time on the server.

To convert a date to seconds since the epoch:
date --date="19-FEB-12" +%s

Current epoch:

date +%s

So, since your dates are in the past:

NOW=`date +%s`
THEN=`date --date="19-FEB-12" +%s`

let DIFF=$NOW-$THEN
echo "The difference is: $DIFF"

Using BSD's date command, you would need

$ date -j -f "%d-%B-%y" 19-FEB-12 +%s

Differences from GNU date :

  1. -j prevents date from trying to set the clock
  2. The input format must be explicitly set with -f
  3. The input date is a regular argument, not an option (viz. -d )
  4. When no time is specified with the date, use the current time instead of midnight.

[Sep 03, 2019] Linux - UNIX Convert Epoch Seconds To the Current Time - nixCraft

Sep 03, 2019 | www.cyberciti.biz

Print Current UNIX Time

Type the following command to display the seconds since the epoch:

date +%s

Sample outputs:
1268727836

Convert Epoch To Current Time

Type the command:

date -d @Epoch
date -d @1268727836
date -d "1970-01-01 1268727836 sec GMT"

Sample outputs:

Tue Mar 16 13:53:56 IST 2010

Please note that the @ feature only works with a recent version of date (GNU coreutils v5.3.0+). To convert the number of seconds back to a more readable form, use a command like this:

date -d @1268727836 +"%d-%m-%Y %T %z"

Sample outputs:

16-03-2010 13:53:56 +0530

[Sep 03, 2019] command line - How do I convert an epoch timestamp to a human readable format on the cli - Unix Linux Stack Exchange

Sep 03, 2019 | unix.stackexchange.com

Gilles ,Oct 11, 2010 at 18:14

date -d @1190000000 Replace 1190000000 with your epoch

Stefan Lasiewski ,Oct 11, 2010 at 18:04

$ echo 1190000000 | perl -pe 's/(\d+)/localtime($1)/e' 
Sun Sep 16 20:33:20 2007

This can come in handy for those applications which use epoch time in the logfiles:

$ tail -f /var/log/nagios/nagios.log | perl -pe 's/(\d+)/localtime($1)/e'
[Thu May 13 10:15:46 2010] EXTERNAL COMMAND: PROCESS_SERVICE_CHECK_RESULT;HOSTA;check_raid;0;check_raid.pl: OK (Unit 0 on Controller 0 is OK)

Stéphane Chazelas ,Jul 31, 2015 at 20:24

With bash-4.2 or above:
printf '%(%F %T)T\n' 1234567890

(where %F %T is the strftime() -type format)

That syntax is inspired from ksh93 .

In ksh93 however, the argument is taken as a date expression where various and hardly documented formats are supported.

For a Unix epoch time, the syntax in ksh93 is:

printf '%(%F %T)T\n' '#1234567890'

ksh93 however seems to use its own algorithm for the timezone and can get it wrong. For instance, in Britain, it was summer time all year in 1970, but:

$ TZ=Europe/London bash -c 'printf "%(%c)T\n" 0'
Thu 01 Jan 1970 01:00:00 BST
$ TZ=Europe/London ksh93 -c 'printf "%(%c)T\n" "#0"'
Thu Jan  1 00:00:00 1970

DarkHeart ,Jul 28, 2014 at 3:56

Custom format with GNU date :
date -d @1234567890 +'%Y-%m-%d %H:%M:%S'

Or with GNU awk :

awk 'BEGIN { print strftime("%Y-%m-%d %H:%M:%S", 1234567890); }'

Linked SO question: https://stackoverflow.com/questions/3249827/convert-from-unixtime-at-command-line

The two I frequently use are:
$ perl -leprint\ scalar\ localtime\ 1234567890
Sat Feb 14 00:31:30 2009

[Sep 03, 2019] Time conversion using Bash Vanstechelman.eu

Sep 03, 2019 | www.vanstechelman.eu

This article shows how you can obtain the UNIX epoch time (the number of seconds since 1970-01-01 00:00:00 UTC) using the Linux bash "date" command. It also shows how you can convert a UNIX epoch time to a human readable time.

Obtain UNIX epoch time using bash
Obtaining the UNIX epoch time using bash is easy. Use the built-in date command and instruct it to output the number of seconds since 1970-01-01 00:00:00 UTC. You can do this by passing a format string as a parameter to the date command. The format string for UNIX epoch time is '%s'.

lode@srv-debian6:~$ date "+%s"
1234567890

To convert a specific date and time into UNIX epoch time, use the -d parameter. The next example shows how to convert the timestamp "February 20th, 2013 at 08:41:15" into UNIX epoch time.

lode@srv-debian6:~$ date "+%s" -d "02/20/2013 08:41:15"
1361346075

Converting UNIX epoch time to human readable time
Even though I didn't find it in the date manual, it is possible to use the date command to reformat a UNIX epoch time into a human readable time. The syntax is the following:

lode@srv-debian6:~$ date -d @1234567890
Sat Feb 14 00:31:30 CET 2009

The same thing can also be achieved using a bit of perl programming:

lode@srv-debian6:~$ perl -e 'print scalar(localtime(1234567890)), "\n"'
Sat Feb 14 00:31:30 2009

Please note that the printed time is formatted in the timezone for which your Linux system is configured. My system is configured for UTC+2, so you may get different output for the same command.

[Sep 03, 2019] Run PerlTidy to beautify the code

Notable quotes:
"... Once I installed Code::TidyAll and placed those files in the root directory of the project, I could run tidyall -a . ..."
Sep 03, 2019 | perlmaven.com

The Code-TidyAll distribution provides a command line script called tidyall that will use Perl::Tidy to change the layout of the code.

This tandem needs two configuration files.

The .perltidyrc file contains the instructions to Perl::Tidy that describes the layout of a Perl-file. We used the following file copied from the source code of the Perl Maven project.

-pbp
-nst
-et=4
--maximum-line-length=120

# Break a line after opening/before closing token.
-vt=0
-vtc=0

The tidyall command uses a separate file called .tidyallrc that describes which files need to be beautified.

[PerlTidy]
select = {lib,t}/**/*.{pl,pm,t}
select = Makefile.PL
select = {mod2html,podtree2html,pods2html,perl2html}
argv = --profile=$ROOT/.perltidyrc

[SortLines]
select = .gitignore
Once I installed Code::TidyAll and placed those files in the root directory of the project, I could run tidyall -a .

That created a directory called .tidyall.d/ where it stores cached versions of the files, and changed all the files that were matches by the select statements in the .tidyallrc file.

Then, I added .tidyall.d/ to the .gitignore file to avoid adding that subdirectory to the repository and ran tidyall -a again to make sure the .gitignore file is sorted.
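tidyall does not have to process the whole tree every time; it can also be pointed at individual files, and it has a git mode (the file paths here are hypothetical):

tidyall lib/MyModule.pm t/basic.t    # tidy just these two files
tidyall -g                           # tidy only files added or modified in git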

[Sep 02, 2019] bash - Pretty-print for shell script

Oct 21, 2010 | stackoverflow.com

Benoit ,Oct 21, 2010 at 13:19

I'm looking for something similiar to indent but for (bash) scripts. Console only, no colorizing, etc.

Do you know of one ?

Jamie ,Sep 11, 2012 at 3:00

Vim can indent bash scripts. But not reformat them before indenting.
Backup your bash script, open it with vim, type gg=GZZ and indent will be corrected. (Note for the impatient: this overwrites the file, so be sure to do that backup!)

There are some bugs with << heredocs, though (the closing EOF is expected to be the first character on a line), e.g.

EDIT: ZZ not ZQ

Daniel Martí ,Apr 8, 2018 at 13:52

A bit late to the party, but it looks like shfmt could do the trick for you.

Brian Chrisman ,Aug 11 at 4:08

In bash I do this:
reindent() {
source <(echo "Zibri () {";cat "$1"; echo "}")
declare -f Zibri|head --lines=-1|tail --lines=+3 | sed -e "s/^\s\s\s\s//"
}

this eliminates comments and re-indents the script "the bash way".

If you have HEREDOCS in your script, they get ruined by the sed in the previous function.

So use:

reindent() {
source <(echo "Zibri () {";cat "$1"; echo "}")
declare -f Zibri|head --lines=-1|tail --lines=+3
}

But all your script will have a 4 spaces indentation.

Or you can do:

reindent () 
{ 
    rstr=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 16 | head -n 1);
    source <(echo "Zibri () {";cat "$1"|sed -e "s/^\s\s\s\s/$rstr/"; echo "}");
    echo '#!/bin/bash';
    declare -f Zibri | head --lines=-1 | tail --lines=+3 | sed -e "s/^\s\s\s\s//;s/$rstr/    /"
}

which takes care also of heredocs.
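A cautious way to use any of these functions is to write to a new file and syntax-check the result before replacing the original (file names are arbitrary):

reindent myscript.sh > myscript.formatted.sh
bash -n myscript.formatted.sh    # make sure the reformatted copy still parses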

Pius Raeder ,Jan 10, 2017 at 8:35

Found this http://www.linux-kheops.com/doc/perl/perl-aubert/fmt.script .

Very nice, only one thing i took out is the [...]->test substitution.

[Sep 02, 2019] mvdan-sh A shell parser, formatter, and interpreter (POSIX-Bash-mksh)

Written in Go language
Sep 02, 2019 | github.com

sh

A shell parser, formatter and interpreter. Supports POSIX Shell , Bash and mksh . Requires Go 1.11 or later.

Quick start

To parse shell scripts, inspect them, and print them out, see the syntax examples .

For high-level operations like performing shell expansions on strings, see the shell examples .

shfmt

Go 1.11 and later can download the latest v2 stable release:

cd $(mktemp -d); go mod init tmp; go get mvdan.cc/sh/cmd/shfmt

The latest v3 pre-release can be downloaded in a similar manner, using the /v3 module:

cd $(mktemp -d); go mod init tmp; go get mvdan.cc/sh/v3/cmd/shfmt

Finally, any older release can be built with their respective older Go versions by manually cloning, checking out a tag, and running go build ./cmd/shfmt .

shfmt formats shell programs. It can use tabs or any number of spaces to indent. See canonical.sh for a quick look at its default style.

You can feed it standard input, any number of files or any number of directories to recurse into. When recursing, it will operate on .sh and .bash files and ignore files starting with a period. It will also operate on files with no extension and a shell shebang.

shfmt -l -w script.sh

Typically, CI builds should use the command below, to error if any shell scripts in a project don't adhere to the format:

shfmt -d .

Use -i N to indent with a number of spaces instead of tabs. There are other formatting options - see shfmt -h . For example, to get the formatting appropriate for Google's Style guide, use shfmt -i 2 -ci .
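shfmt also reads standard input, which makes it easy to see its effect on a one-liner; with -i 2 the result should look roughly like this:

$ echo 'if [ -f /etc/redhat-release ];then cat /etc/redhat-release;fi' | shfmt -i 2
if [ -f /etc/redhat-release ]; then
  cat /etc/redhat-release
fi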

Packages are available on Arch , CRUX , Docker , FreeBSD , Homebrew , NixOS , Scoop , Snapcraft , and Void .

Replacing bash -n

bash -n can be useful to check for syntax errors in shell scripts. However, shfmt >/dev/null can do a better job as it checks for invalid UTF-8 and does all parsing statically, including checking POSIX Shell validity:

$ echo '${foo:1 2}' | bash -n
$ echo '${foo:1 2}' | shfmt
1:9: not a valid arithmetic operator: 2
$ echo 'foo=(1 2)' | bash --posix -n
$ echo 'foo=(1 2)' | shfmt -p
1:5: arrays are a bash feature

gosh

cd $(mktemp -d); go mod init tmp; go get mvdan.cc/sh/v3/cmd/gosh

Experimental shell that uses interp . Work in progress, so don't expect stability just yet.

Fuzzing

This project makes use of go-fuzz to find crashes and hangs in both the parser and the printer. To get started, run:

git checkout fuzz
./fuzz

Caveats

$ echo '${array[spaced string]}' | shfmt
1:16: not a valid arithmetic operator: string
$ echo '${array[dash-string]}' | shfmt
${array[dash - string]}
$ echo '$((foo); (bar))' | shfmt
1:1: reached ) without matching $(( with ))

JavaScript

A subset of the Go packages are available as an npm package called mvdan-sh . See the _js directory for more information.

Docker

To build a Docker image, checkout a specific version of the repository and run:

docker build -t my:tag -f cmd/shfmt/Dockerfile .

Related projects

[Sep 01, 2019] Three Ways to Exclude Specific-Certain Packages from Yum Update by Magesh Maruthamuthu

Sep 01, 2019 | www.2daygeek.com

Three Ways to Exclude Specific Packages from Yum Update

· Published : August 28, 2019 || Last Updated: August 31, 2019

Method 1 : Exclude Packages with yum Command Manually or Temporarily

We can use --exclude or -x switch with yum command to exclude specific packages from getting updated through yum command.

This is a temporary, on-demand method. If you want to exclude a specific package only once, you can use this method.

The below command will update all packages except kernel.

To exclude a single package:

# yum update --exclude=kernel

or

# yum update -x 'kernel'

To exclude multiple packages, use the command below, which will update all packages except kernel and php:

# yum update --exclude=kernel* --exclude=php*

or

# yum update --exclude httpd,php
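To preview what would be updated while an exclusion is in effect, the same switch can be combined with check-update, which only reports and changes nothing:

# yum check-update --exclude=kernel* --exclude=php*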
Method-2: Exclude Packages with yum Command Permanently

If you perform patch updates frequently, you can use this permanent method.

To do so, add the required packages to /etc/yum.conf to disable their updates permanently.

Once you add an entry, you don't need to specify these packages each time you run the yum update command. This also protects the packages from any accidental update.

# vi /etc/yum.conf

[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=3
exclude=kernel* php*
Method-3: Exclude Packages Using Yum versionlock plugin

This is also a permanent method, similar to the one above. The yum versionlock plugin allows users to lock specified packages so that they are not updated by the yum command.

To do so, run the following command. The below command will exclude the freetype package from yum update.

You can also add the package entry directly to the "/etc/yum/pluginconf.d/versionlock.list" file.

# yum versionlock add freetype

Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock
Adding versionlock on: 0:freetype-2.8-12.el7
versionlock added: 1

Use the below command to check the list of packages locked by versionlock plugin.

# yum versionlock list

Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock
0:freetype-2.8-12.el7.*
versionlock list done

Run the following command to discard the list.

# yum versionlock clear
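If you only want to release the lock on one package instead of clearing the whole list, the plugin also has a delete sub-command; the argument is matched against the entries shown by versionlock list, so a wildcard form is the safest:

# yum versionlock delete freetype*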

[Aug 31, 2019] Linux on your laptop A closer look at EFI boot options

Aug 31, 2019 | www.zdnet.com
Before EFI, the standard boot process for virtually all PC systems was called "MBR", for Master Boot Record; today you are likely to hear it referred to as "Legacy Boot". This process depended on using the first physical block on a disk to hold some information needed to boot the computer (thus the name Master Boot Record); specifically, it held the disk address at which the actual bootloader could be found, and the partition table that defined the layout of the disk. Using this information, the PC firmware could find and execute the bootloader, which would then bring up the computer and run the operating system.

This system had a number of rather obvious weaknesses and shortcomings. One of the biggest was that you could only have one bootable object on each physical disk drive (at least as far as the firmware boot was concerned). Another was that if that first sector on the disk became corrupted somehow, you were in deep trouble.

Over time, as part of the Extensible Firmware Interface, a new approach to boot configuration was developed. Rather than storing critical boot configuration information in a single "magic" location, EFI uses a dedicated "EFI boot partition" on the disk. This is a completely normal, standard disk partition, of the same kind as may be used to hold the operating system or system recovery data.

The only requirement is that it be FAT formatted, and it should have the boot and esp partition flags set (esp stands for EFI System Partition). The specific data and programs necessary for booting are then kept in directories on this partition, typically in directories named to indicate what they are for. So if you have a Windows system, you would typically find directories called 'Boot' and 'Microsoft', and perhaps one named for the manufacturer of the hardware, such as HP. If you have a Linux system, you would find directories called opensuse, debian, ubuntu, or any number of others depending on what particular Linux distribution you are using.

It should be obvious from the description so far that it is perfectly possible with the EFI boot configuration to have multiple boot objects on a single disk drive.

Before going any further, I should make it clear that if you install Linux as the only operating system on a PC, it is not necessary to know all of this configuration information in detail. The installer should take care of setting all of this up, including creating the EFI boot partition (or using an existing EFI boot partition), and further configuring the system boot list so that whatever system you install becomes the default boot target.

If you were to take a brand new computer with UEFI firmware, and load it from scratch with any of the current major Linux distributions, it would all be set up, configured, and working just as it is when you purchase a new computer preloaded with Windows (or when you load a computer from scratch with Windows). It is only when you want to have more than one bootable operating system – especially when you want to have both Linux and Windows on the same computer – that things may become more complicated.

The problems that arise with such "multiboot" systems are generally related to getting the boot priority list defined correctly.

When you buy a new computer with Windows, this list typically includes the Windows bootloader on the primary disk, and then perhaps some other peripheral devices such as USB, network interfaces and such. When you install Linux alongside Windows on such a computer, the installer will add the necessary information to the EFI boot partition, but if the boot priority list is not changed, then when the system is rebooted after installation it will simply boot Windows again, and you are likely to think that the installation didn't work.

There are several ways to modify this boot priority list, but exactly which ones are available and whether or how they work depends on the firmware of the system you are using, and this is where things can get really messy. There are just about as many different UEFI firmware implementations as there are PC manufacturers, and the manufacturers have shown a great deal of creativity in the details of this firmware.

First, in the simplest case, there is a software utility included with Linux called efibootmgr that can be used to modify, add or delete the boot priority list. If this utility works properly, and the changes it makes are permanent on the system, then you would have no other problems to deal with, and after installing it would boot Linux and you would be happy. Unfortunately, while this is sometimes the case it is frequently not. The most common reason for this is that changes made by software utilities are not actually permanently stored by the system BIOS, so when the computer is rebooted the boot priority list is restored to whatever it was before, which generally means that Windows gets booted again.
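A minimal efibootmgr session looks like this (the entry numbers below are placeholders; yours will differ):

efibootmgr                      # list the boot entries and the current BootOrder
efibootmgr -v                   # the same, with full device paths
efibootmgr -o 0003,0000,0001    # move entry 0003 (e.g. the Linux loader) to the front of the boot order

Whether the new order survives a reboot is exactly the firmware-dependent question discussed above.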

The other common way of modifying the boot priority list is via the computer BIOS configuration program. The details of how to do this are different for every manufacturer, but the general procedure is approximately the same. First you have to press the BIOS configuration key (usually F2, but not always, unfortunately) during system power-on (POST). Then choose the Boot item from the BIOS configuration menu, which should get you to a list of boot targets presented in priority order. Then you need to modify that list; sometimes this can be done directly in that screen, via the usual F5/F6 up/down key process, and sometimes you need to proceed one level deeper to be able to do that. I wish I could give more specific and detailed information about this, but it really is different on every system (sometimes even on different systems produced by the same manufacturer), so you just need to proceed carefully and figure out the steps as you go.

I have seen a few rare cases of systems where neither of these methods works, or at least they don't seem to be permanent, and the system keeps reverting to booting Windows. Again, there are two ways to proceed in this case. The first is by simply pressing the "boot selection" key during POST (power-on). Exactly which key this is varies, I have seen it be F12, F9, Esc, and probably one or two others. Whichever key it turns out to be, when you hit it during POST you should get a list of bootable objects defined in the EFI boot priority list, so assuming your Linux installation worked you should see it listed there. I have known of people who were satisfied with this solution, and would just use the computer this way and have to press boot select each time they wanted to boot Linux.

The alternative is to actually modify the files in the EFI boot partition, so that the (unchangeable) Windows boot procedure would actually boot Linux. This involves overwriting the Windows file bootmgfw.efi with the Linux file grubx64.efi. I have done this, especially in the early days of EFI boot, and it works, but I strongly advise you to be extremely careful if you try it, and make sure that you keep a copy of the original bootmgfw.efi file. Finally, just as a final (depressing) warning, I have also seen systems where this seemed to work, at least for a while, but then at some unpredictable point the boot process seemed to notice that something had changed and it restored bootmgfw.efi to its original state – thus losing the Linux boot configuration again. Sigh.

So, that's the basics of EFI boot, and how it can be configured. But there are some important variations possible, and some caveats to be aware of.

[Aug 31, 2019] Programming is about Effective Communication

Aug 31, 2019 | developers.slashdot.org

Anonymous Coward , Friday February 22, 2019 @02:42PM ( #58165060 )

Algorithms, not code ( Score: 4 , Insightful)

Sad to see these are all books about coding and coding style. Nothing at all here about algorithms, or data structures.

My vote goes for Algorithms by Sedgewick

Seven Spirals ( 4924941 ) , Friday February 22, 2019 @02:57PM ( #58165150 )
MOTIF Programming by Marshall Brain ( Score: 3 )

Amazing how little memory and CPU MOTIF applications take. Once you get over the callbacks, it's actually not bad!

Seven Spirals ( 4924941 ) writes:
Re: ( Score: 2 )

Interesting. Sorry you had that experience. I'm not sure what you mean by a "multi-line text widget". I can tell you that early versions of OpenMOTIF were very very buggy in my experience. You probably know this, but after OpenMOTIF was completed and revved a few times the original MOTIF code was released as open-source. Many of the bugs I'd been seeing (and some just strange visual artifacts) disappeared. I know a lot of people love QT and it's produced real apps and real results - I won't poo-poo it. How

SuperKendall ( 25149 ) writes:
Design and Evolution of C++ ( Score: 2 )

Even if you don't like C++ much, The Design and Evolution of C++ [amazon.com] is a great book for understanding why pretty much any language ends up the way it does, seeing the tradeoffs and how a language comes to grow and expand from simple roots. It's way more interesting to read than you might expect (not very dry, and more about human interaction than you would expect).

Other than that reading through back posts in a lot of coding blogs that have been around a long time is probably a really good idea.

Also a side re

shanen ( 462549 ) writes:
What about books that hadn't been written yet? ( Score: 2 )

You young whippersnappers don't 'preciate how good you have it!

Back in my day, the only book about programming was the 1401 assembly language manual!

But seriously, folks, it's pretty clear we still don't know shite about how to program properly. We have some fairly clear success criteria for improving the hardware, but the criteria for good software are clear as mud, and the criteria for ways to produce good software are much muddier than that.

Having said that, I will now peruse the thread rather carefully

shanen ( 462549 ) writes:
TMI, especially PII ( Score: 2 )

Couldn't find any mention of Guy Steele, so I'll throw in The New Hacker's Dictionary , which I once owned in dead tree form. Not sure if Version 4.4.7 http://catb.org/jargon/html/ [catb.org] is the latest online... Also remember a couple of his language manuals. Probably used the Common Lisp one the most...

Didn't find any mention of a lot of books that I consider highly relevant, but that may reflect my personal bias towards history. Not really relevant for most programmers.

TMI, but if I open up my database on all t

UnknownSoldier ( 67820 ) , Friday February 22, 2019 @03:52PM ( #58165532 )
Programming is about **Effective Communication** ( Score: 5 , Insightful)

I've been programming for the past ~40 years and I'll try to summarize what I believe are the most important bits about programming (pardon the pun.) Think of this as a META: " HOWTO: Be A Great Programmer " summary. (I'll get to the books section in a bit.)

1. All code can be summarized as a trinity of 3 fundamental concepts:

* Linear ; that is, sequence: A, B, C
* Cyclic ; that is, unconditional jumps: A-B-C-goto B
* Choice ; that is, conditional jumps: if A then B

2. ~80% of programming is NOT about code; it is about Effective Communication. Whether that be:

* with your compiler / interpreter / REPL
* with other code (levels of abstraction, level of coupling, separation of concerns, etc.)
* with your boss(es) / manager(s)
* with your colleagues
* with your legal team
* with your QA dept
* with your customer(s)
* with the general public

The other ~20% is effective time management and design. A good programmer knows how to budget their time. Programming is about balancing the three conflicting goals of the Project Management Triangle [wikipedia.org]: You can have it on time, on budget, on quality. Pick two.

3. Stages of a Programmer

There are two old jokes:

In Lisp all code is data. In Haskell all data is code.

And:

Progression of a (Lisp) Programmer:

* The newbie realizes that the difference between code and data is trivial.
* The expert realizes that all code is data.
* The true master realizes that all data is code.

(Attributed to Aristotle Pagaltzis)

The point of these jokes is that as you work with systems you start to realize that a data-driven process can often greatly simplify things.

4. Know Thy Data

Fred Brooks once wrote

"Show me your flowcharts (source code), and conceal your tables (domain model), and I shall continue to be mystified; show me your tables (domain model) and I won't usually need your flowcharts (source code): they'll be obvious."

A more modern version would read like this:

Show me your code and I'll have to see your data,
Show me your data and I won't have to see your code.

The importance of data can't be understated:

* Optimization STARTS with understanding HOW the data is being generated and used, NOT the code as has been traditionally taught.
* Post 2000 "Big Data" has been called the new oil. We are generating upwards to millions of GB of data every second. Analyzing that data is import to spot trends and potential problems.

5. There are three levels of optimizations. From slowest to fastest run-time:

a) Bit-twiddling hacks [stanford.edu]
b) Algorithmic -- Algorithmic complexity or Analysis of algorithms [wikipedia.org] (such as Big-O notation)
c) Data-Orientated Design [dataorienteddesign.com] -- Understanding how hardware caches such as instruction and data caches matter. Optimize for the common case, NOT the single case that OOP tends to favor.

Optimizing is understanding Bang-for-the-Buck. 80% of the execution time is spent in 20% of the code. Speeding up hot-spots with bit twiddling won't be as effective as using a more efficient algorithm which, in turn, won't be as efficient as understanding HOW the data is manipulated in the first place.

6. Fundamental Reading

Since the OP specifically asked about books -- there are lots of great ones. The ones that have impressed me that I would mark as "required" reading:

* The Mythical Man-Month
* Godel, Escher, Bach
* Knuth: The Art of Computer Programming
* The Pragmatic Programmer
* Zero Bugs and Program Faster
* Writing Solid Code / Code Complete by Steve McConnell
* Game Programming Patterns [gameprogra...tterns.com] (*)
* Game Engine Design
* Thinking in Java by Bruce Eckel
* Puzzles for Hackers by Ivan Sklyarov

(*) I did NOT list Design Patterns: Elements of Reusable Object-Oriented Software as that leads to typical, bloated, over-engineered crap. The main problem with "Design Patterns" is that a programmer will often get locked into a mindset of seeing everything as a pattern -- even when a simple few lines of code would solve the problem. For example, here is 1,100+ lines of Crap++ code such as Boost's over-engineered CRC code [boost.org] when a mere ~25 lines of SIMPLE C code would have done the trick. When was the last time you ACTUALLY needed to _modify_ a CRC function? The BIG picture is that you are probably looking for a BETTER HASHING function with fewer collisions. You probably would be better off using a DIFFERENT algorithm such as SHA-2, etc.

7. Do NOT copy-pasta

Roughly 80% of bugs creep in because someone blindly copied-pasted without thinking. Type out ALL code so you actually THINK about what you are writing.

8. K.I.S.S.

Over-engineering and aka technical debt, will be your Achilles' heel. Keep It Simple, Silly.

9. Use DESCRIPTIVE variable names

You spend ~80% of your time READING code, and only ~20% writing it. Use good, descriptive variable names. Far too many programmers write useless comments and don't understand the difference between code and comments:

Code says HOW, Comments say WHY

A crap comment will say something like: // increment i

No, Shit Sherlock! Don't comment the obvious!

A good comment will say something like: // BUGFIX: 1234: Work-around issues caused by A, B, and C.

10. Ignoring Memory Management doesn't make it go away -- now you have two problems. (With apologies to JWZ)

TINSTAAFL.

11. Learn Multi-Paradigm programming [wikipedia.org].

If you don't understand both the pros and cons of these programming paradigms ...

* Procedural
* Object-Orientated
* Functional, and
* Data-Orientated Design

... then you will never really understand programming, nor abstraction, at a deep level, along with how and when it should and shouldn't be used.

12. Multi-disciplinary POV

ALL non-trivial code has bugs. If you aren't using static code analysis [wikipedia.org] then you are not catching as many bugs as the people who are.

Also, a good programmer looks at his code from many different angles. As a programmer you must put on many different hats to find them:

* Architect -- design the code
* Engineer / Construction Worker -- implement the code
* Tester -- test the code
* Consumer -- doesn't see the code, only sees the results. Does it even work?? Did you VERIFY it did BEFORE you checked your code into version control?

13. Learn multiple Programming Languages

Each language was designed to solve certain problems. Learning different languages, even ones you hate, will expose you to different concepts. e.g. If you don't know how to read assembly language AND your high level language, then you will never be as good as the programmer who does both.

14. Respect your Colleagues' and Consumers Time, Space, and Money.

Mobile games are the WORST at respecting people's time, space and money, turning "players into payers." They treat customers as whales. Don't do this. A practical example: If you are in a Slack channel with 50+ people, do NOT use @here. YOUR fire is not their emergency!

15. Be Passionate

If you aren't passionate about programming, that is, you are only doing it for the money, it will show. Take some pride in doing a GOOD job.

16. Perfect Practice Makes Perfect.

If you aren't programming every day you will never be as good as someone who is. Programming is about solving interesting problems. Practice solving puzzles to develop your intuition and lateral thinking. The more you practice the better you get.

"Sorry" for the book but I felt it was important to summarize the "essentials" of programming.

--
Hey Slashdot. Fix your shitty filter so long lists can be posted.: "Your comment has too few characters per line (currently 37.0)."

raymorris ( 2726007 ) , Friday February 22, 2019 @05:39PM ( #58166230 ) Journal
Shared this with my team ( Score: 4 , Insightful)

You crammed a lot of good ideas into a short post.
I'm sending my team at work a link to your post.

You mentioned that code can be data. Linus Torvalds had this to say:

"I'm a huge proponent of designing your code around the data, rather than the other way around, and I think it's one of the reasons git has been fairly successful [â¦] I will, in fact, claim that the difference between a bad programmer and a good one is whether he considers his code or his data structures more important."

"Bad programmers worry about the code. Good programmers worry about data structures and their relationships."

I'm inclined to agree. Once the data structure is right, the code often almost writes itself. It'll be easy to write and easy to read because it's obvious how one would handle data structured in that elegant way.

Writing the code necessary to transform the data from the input format into the right structure can be non-obvious, but it's normally worth it.

[Aug 31, 2019] Slashdot Asks How Did You Learn How To Code - Slashdot

Aug 31, 2019 | ask.slashdot.org

GreatDrok ( 684119 ) , Saturday June 04, 2016 @10:03PM ( #52250917 ) Journal

Programming, not coding ( Score: 5 , Interesting)

I learnt to program at school from a Ph.D. computer scientist. We never even had computers in the class. We learnt to break the problem down into sections using flowcharts or pseudo-code, and then we would translate that program into whatever coding language we were using. I still do this, usually in my notebook, where I figure out all the things I need to do, write the skeleton of the code as a series of comments for each section of my program, and then fill in the code for each section. It is a combination of top-down and bottom-up programming, writing routines that can be independently tested and validated.

[Aug 29, 2019] Parsing bash script options with getopts by Kevin Sookocheff

Mar 30, 2018 | sookocheff.com

Posted on January 4, 2015 | Kevin Sookocheff

A common task in shell scripting is to parse command line arguments to your script. Bash provides the getopts built-in function to do just that. This tutorial explains how to use the getopts built-in function to parse arguments and options to a bash script.

The getopts function takes three parameters. The first is a specification of which options are valid, listed as a sequence of letters. For example, the string 'ht' signifies that the options -h and -t are valid.

The second argument to getopts is a variable that will be populated with the option or argument to be processed next. In the following loop, opt will hold the value of the current option that has been parsed by getopts .

while getopts ":ht" opt; do
  case ${opt} in
    h ) # process option a
      ;;
    t ) # process option t
      ;;
    \? ) echo "Usage: cmd [-h] [-t]"
      ;;
  esac
done

This example shows a few additional features of getopts . First, if an invalid option is provided, the option variable is assigned the value ? . You can catch this case and provide an appropriate usage message to the user. Second, this behaviour is only true when you prepend the list of valid options with : to disable the default error handling of invalid options. It is recommended to always disable the default error handling in your scripts.

The third argument to getopts is the list of arguments and options to be processed. When not provided, this defaults to the arguments and options provided to the application ( $@ ). You can provide this third argument to use getopts to parse any list of arguments and options you provide.
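For example, getopts can be pointed at a hand-built list instead of the script's own "$@" (option letters reused from above):

while getopts ":ht" opt -h -t; do
    echo "processed option: -$opt"
done
# prints: processed option: -h
#         processed option: -t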

Shifting processed options

The variable OPTIND holds the number of options parsed by the last call to getopts . It is common practice to call the shift command at the end of your processing loop to remove options that have already been handled from $@ .

shift $((OPTIND -1))
Parsing options with arguments

Options that themselves have arguments are signified with a : . The argument to an option is placed in the variable OPTARG . In the following example, the option t takes an argument. When the argument is provided, we copy its value to the variable target . If no argument is provided getopts will set opt to : . We can recognize this error condition by catching the : case and printing an appropriate error message.

while getopts ":t:" opt; do
  case ${opt} in
    t )
      target=$OPTARG
      ;;
    \? )
      echo "Invalid option: $OPTARG" 1>&2
      ;;
    : )
      echo "Invalid option: $OPTARG requires an argument" 1>&2
      ;;
  esac
done
shift $((OPTIND -1))
An extended example – parsing nested arguments and options

Let's walk through an extended example of processing a command that takes options, has a sub-command, and whose sub-command takes an additional option that has an argument. This is a mouthful so let's break it down using an example. Let's say we are writing our own version of the pip command . In this version you can call pip with the -h option to display a help message.

> pip -h
Usage:
    pip -h                      Display this help message.
    pip install                 Install a Python package.

We can use getopts to parse the -h option with the following while loop. In it we catch invalid options with \? and shift all arguments that have been processed with shift $((OPTIND -1)) .

while getopts ":h" opt; do
  case ${opt} in
    h )
      echo "Usage:"
      echo "    pip -h                      Display this help message."
      echo "    pip install                 Install a Python package."
      exit 0
      ;;
    \? )
      echo "Invalid Option: -$OPTARG" 1>&2
      exit 1
      ;;
  esac
done
shift $((OPTIND -1))

Now let's add the sub-command install to our script. install takes as an argument the Python package to install.

> pip install urllib3

install also takes an option, -t . -t takes as an argument the location to install the package to relative to the current directory.

> pip install urllib3 -t ./src/lib

To process this line we must find the sub-command to execute. This value is the first argument to our script.

subcommand=$1
shift # Remove `pip` from the argument list

Now we can process the sub-command install . In our example, the option -t is actually an option that follows the package argument so we begin by removing install from the argument list and processing the remainder of the line.

case "$subcommand" in
  install)
    package=$1
    shift # Remove `install` from the argument list
    ;;
esac

After shifting the argument list we can process the remaining arguments as if they are of the form package -t src/lib . The -t option takes an argument itself. This argument will be stored in the variable OPTARG and we save it to the variable target for further work.

case "$subcommand" in
  install)
    package=$1
    shift # Remove `install` from the argument list

  while getopts ":t:" opt; do
    case ${opt} in
      t )
        target=$OPTARG
        ;;
      \? )
        echo "Invalid Option: -$OPTARG" 1>&2
        exit 1
        ;;
      : )
        echo "Invalid Option: -$OPTARG requires an argument" 1>&2
        exit 1
        ;;
    esac
  done
  shift $((OPTIND -1))
  ;;
esac

Putting this all together, we end up with the following script that parses arguments to our version of pip and its sub-command install .

package=""  # Default to empty package
target=""  # Default to empty target

# Parse options to the `pip` command
while getopts ":h" opt; do
  case ${opt} in
    h )
      echo "Usage:"
      echo "    pip -h                      Display this help message."
      echo "    pip install <package>       Install <package>."
      exit 0
      ;;
   \? )
     echo "Invalid Option: -$OPTARG" 1>&2
     exit 1
     ;;
  esac
done
shift $((OPTIND -1))

subcommand=$1; shift  # Remove 'pip' from the argument list
case "$subcommand" in
  # Parse options to the install sub command
  install)
    package=$1; shift  # Remove 'install' from the argument list

    # Process package options
    while getopts ":t:" opt; do
      case ${opt} in
        t )
          target=$OPTARG
          ;;
        \? )
          echo "Invalid Option: -$OPTARG" 1>&2
          exit 1
          ;;
        : )
          echo "Invalid Option: -$OPTARG requires an argument" 1>&2
          exit 1
          ;;
      esac
    done
    shift $((OPTIND -1))
    ;;
esac

After processing the above sequence of commands, the variable package will hold the package to install and the variable target will hold the target to install the package to. You can use this as a template for processing any set of arguments and options to your scripts.
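As a quick sanity check, assume the template is saved as pip.sh and that a line such as echo "package=$package target=$target" is appended at the end (that echo is not part of the original template); then:

$ bash pip.sh install urllib3 -t ./src/lib
package=urllib3 target=./src/lib
$ bash pip.sh -h
Usage:
    pip -h                      Display this help message.
    pip install <package>       Install <package>.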

bash getopts

[Aug 29, 2019] How do I parse command line arguments in Bash - Stack Overflow

Jul 10, 2017 | stackoverflow.com

Livven, Jul 10, 2017 at 8:11

Update: It's been more than 5 years since I started this answer. Thank you for LOTS of great edits/comments/suggestions. In order to save maintenance time, I've modified the code block to be 100% copy-paste ready. Please do not post comments like "What if you changed X to Y". Instead, copy-paste the code block, see the output, make the change, rerun the script, and comment "I changed X to Y and ". I don't have time to test your ideas and tell you if they work.
Method #1: Using bash without getopt[s]

Two common ways to pass key-value-pair arguments are:

Bash Space-Separated (e.g., --option argument ) (without getopt[s])

Usage demo-space-separated.sh -e conf -s /etc -l /usr/lib /etc/hosts

cat >/tmp/demo-space-separated.sh <<'EOF'
#!/bin/bash

POSITIONAL=()
while [[ $# -gt 0 ]]
do
key="$1"

case $key in
    -e|--extension)
    EXTENSION="$2"
    shift # past argument
    shift # past value
    ;;
    -s|--searchpath)
    SEARCHPATH="$2"
    shift # past argument
    shift # past value
    ;;
    -l|--lib)
    LIBPATH="$2"
    shift # past argument
    shift # past value
    ;;
    --default)
    DEFAULT=YES
    shift # past argument
    ;;
    *)    # unknown option
    POSITIONAL+=("$1") # save it in an array for later
    shift # past argument
    ;;
esac
done
set -- "${POSITIONAL[@]}" # restore positional parameters

echo "FILE EXTENSION  = ${EXTENSION}"
echo "SEARCH PATH     = ${SEARCHPATH}"
echo "LIBRARY PATH    = ${LIBPATH}"
echo "DEFAULT         = ${DEFAULT}"
echo "Number files in SEARCH PATH with EXTENSION:" $(ls -1 "${SEARCHPATH}"/*."${EXTENSION}" | wc -l)
if [[ -n $1 ]]; then
    echo "Last line of file specified as non-opt/last argument:"
    tail -1 "$1"
fi
EOF

chmod +x /tmp/demo-space-separated.sh

/tmp/demo-space-separated.sh -e conf -s /etc -l /usr/lib /etc/hosts

output from copy-pasting the block above:

FILE EXTENSION  = conf
SEARCH PATH     = /etc
LIBRARY PATH    = /usr/lib
DEFAULT         =
Number files in SEARCH PATH with EXTENSION: 14
Last line of file specified as non-opt/last argument:
#93.184.216.34    example.com
Bash Equals-Separated (e.g., --option=argument ) (without getopt[s])

Usage demo-equals-separated.sh -e=conf -s=/etc -l=/usr/lib /etc/hosts

cat >/tmp/demo-equals-separated.sh <<'EOF'
#!/bin/bash

for i in "$@"
do
case $i in
    -e=*|--extension=*)
    EXTENSION="${i#*=}"
    shift # past argument=value
    ;;
    -s=*|--searchpath=*)
    SEARCHPATH="${i#*=}"
    shift # past argument=value
    ;;
    -l=*|--lib=*)
    LIBPATH="${i#*=}"
    shift # past argument=value
    ;;
    --default)
    DEFAULT=YES
    shift # past argument with no value
    ;;
    *)
          # unknown option
    ;;
esac
done
echo "FILE EXTENSION  = ${EXTENSION}"
echo "SEARCH PATH     = ${SEARCHPATH}"
echo "LIBRARY PATH    = ${LIBPATH}"
echo "DEFAULT         = ${DEFAULT}"
echo "Number files in SEARCH PATH with EXTENSION:" $(ls -1 "${SEARCHPATH}"/*."${EXTENSION}" | wc -l)
if [[ -n $1 ]]; then
    echo "Last line of file specified as non-opt/last argument:"
    tail -1 $1
fi
EOF

chmod +x /tmp/demo-equals-separated.sh

/tmp/demo-equals-separated.sh -e=conf -s=/etc -l=/usr/lib /etc/hosts

output from copy-pasting the block above:

FILE EXTENSION  = conf
SEARCH PATH     = /etc
LIBRARY PATH    = /usr/lib
DEFAULT         =
Number files in SEARCH PATH with EXTENSION: 14
Last line of file specified as non-opt/last argument:
#93.184.216.34    example.com

To better understand ${i#*=} search for "Substring Removal" in this guide . It is functionally equivalent to `sed 's/[^=]*=//' <<< "$i"` which calls a needless subprocess or `echo "$i" | sed 's/[^=]*=//'` which calls two needless subprocesses.

Method #2: Using bash with getopt[s]

from: http://mywiki.wooledge.org/BashFAQ/035#getopts

getopt(1) limitations (older, relatively-recent getopt versions): they cannot handle empty argument strings or arguments with embedded whitespace.

More recent getopt versions don't have these limitations.

Additionally, the POSIX shell (and others) offer getopts which doesn't have these limitations. I've included a simplistic getopts example.

Usage demo-getopts.sh -vf /etc/hosts foo bar

cat >/tmp/demo-getopts.sh <<'EOF'
#!/bin/sh

# A POSIX variable
OPTIND=1         # Reset in case getopts has been used previously in the shell.

# Initialize our own variables:
output_file=""
verbose=0

while getopts "h?vf:" opt; do
    case "$opt" in
    h|\?)
        show_help
        exit 0
        ;;
    v)  verbose=1
        ;;
    f)  output_file=$OPTARG
        ;;
    esac
done

shift $((OPTIND-1))

[ "${1:-}" = "--" ] && shift

echo "verbose=$verbose, output_file='$output_file', Leftovers: $@"
EOF

chmod +x /tmp/demo-getopts.sh

/tmp/demo-getopts.sh -vf /etc/hosts foo bar

output from copy-pasting the block above:

verbose=1, output_file='/etc/hosts', Leftovers: foo bar

The advantages of getopts are:

  1. It's more portable, and will work in other shells like dash .
  2. It can handle multiple single options like -vf filename in the typical Unix way, automatically.

The disadvantage of getopts is that it can only handle short options ( -h , not --help ) without additional code.

There is a getopts tutorial which explains what all of the syntax and variables mean. In bash, there is also help getopts , which might be informative.

johncip ,Jul 23, 2018 at 15:15

No answer mentions enhanced getopt. And the top-voted answer is misleading: it either ignores -vfd style short options (requested by the OP) or options after positional arguments (also requested by the OP); and it ignores parsing errors. Instead:

The following calls

myscript -vfd ./foo/bar/someFile -o /fizz/someOtherFile
myscript -v -f -d -o/fizz/someOtherFile -- ./foo/bar/someFile
myscript --verbose --force --debug ./foo/bar/someFile -o/fizz/someOtherFile
myscript --output=/fizz/someOtherFile ./foo/bar/someFile -vfd
myscript ./foo/bar/someFile -df -v --output /fizz/someOtherFile

all return

verbose: y, force: y, debug: y, in: ./foo/bar/someFile, out: /fizz/someOtherFile

with the following myscript

#!/bin/bash
# saner programming env: these switches turn some bugs into errors
set -o errexit -o pipefail -o noclobber -o nounset

# -allow a command to fail with !'s side effect on errexit
# -use return value from ${PIPESTATUS[0]}, because ! hosed $?
! getopt --test > /dev/null 
if [[ ${PIPESTATUS[0]} -ne 4 ]]; then
    echo "I'm sorry, \`getopt --test\` failed in this environment."
    exit 1
fi

OPTIONS=dfo:v
LONGOPTS=debug,force,output:,verbose

# -regarding ! and PIPESTATUS see above
# -temporarily store output to be able to check for errors
# -activate quoting/enhanced mode (e.g. by writing out "--options")
# -pass arguments only via   -- "$@"   to separate them correctly
! PARSED=$(getopt --options=$OPTIONS --longoptions=$LONGOPTS --name "$0" -- "$@")
if [[ ${PIPESTATUS[0]} -ne 0 ]]; then
    # e.g. return value is 1
    #  then getopt has complained about wrong arguments to stdout
    exit 2
fi
# read getopt's output this way to handle the quoting right:
eval set -- "$PARSED"

d=n f=n v=n outFile=-
# now enjoy the options in order and nicely split until we see --
while true; do
    case "$1" in
        -d|--debug)
            d=y
            shift
            ;;
        -f|--force)
            f=y
            shift
            ;;
        -v|--verbose)
            v=y
            shift
            ;;
        -o|--output)
            outFile="$2"
            shift 2
            ;;
        --)
            shift
            break
            ;;
        *)
            echo "Programming error"
            exit 3
            ;;
    esac
done

# handle non-option arguments
if [[ $# -ne 1 ]]; then
    echo "$0: A single input file is required."
    exit 4
fi

echo "verbose: $v, force: $f, debug: $d, in: $1, out: $outFile"

1 enhanced getopt is available on most "bash-systems", including Cygwin; on OS X try brew install gnu-getopt or sudo port install getopt
2 the POSIX exec() conventions have no reliable way to pass binary NULL in command line arguments; those bytes prematurely end the argument
3 first version released in 1997 or before (I only tracked it back to 1997)

Tobias Kienzler ,Mar 19, 2016 at 15:23

from : digitalpeer.com with minor modifications

Usage myscript.sh -p=my_prefix -s=dirname -l=libname

#!/bin/bash
for i in "$@"
do
case $i in
    -p=*|--prefix=*)
    PREFIX="${i#*=}"

    ;;
    -s=*|--searchpath=*)
    SEARCHPATH="${i#*=}"
    ;;
    -l=*|--lib=*)
    DIR="${i#*=}"
    ;;
    --default)
    DEFAULT=YES
    ;;
    *)
            # unknown option
    ;;
esac
done
echo PREFIX = ${PREFIX}
echo SEARCH PATH = ${SEARCHPATH}
echo DIRS = ${DIR}
echo DEFAULT = ${DEFAULT}

To better understand ${i#*=} search for "Substring Removal" in this guide . It is functionally equivalent to `sed 's/[^=]*=//' <<< "$i"` which calls a needless subprocess or `echo "$i" | sed 's/[^=]*=//'` which calls two needless subprocesses.

Robert Siemer ,Jun 1, 2018 at 1:57

getopt() / getopts() is a good option. Stolen from here :

The simple use of "getopt" is shown in this mini-script:

#!/bin/bash
echo "Before getopt"
for i
do
  echo $i
done
args=`getopt abc:d $*`
set -- $args
echo "After getopt"
for i
do
  echo "-->$i"
done

What we have said is that any of -a, -b, -c or -d will be allowed, but that -c is followed by an argument (the "c:" says that).

If we call this "g" and try it out:

bash-2.05a$ ./g -abc foo
Before getopt
-abc
foo
After getopt
-->-a
-->-b
-->-c
-->foo
-->--

We start with two arguments, and "getopt" breaks apart the options and puts each in its own argument. It also added "--".

hfossli ,Jan 31 at 20:05

More succinct way

script.sh

#!/bin/bash

while [[ "$#" -gt 0 ]]; do case $1 in
  -d|--deploy) deploy="$2"; shift;;
  -u|--uglify) uglify=1;;
  *) echo "Unknown parameter passed: $1"; exit 1;;
esac; shift; done

echo "Should deploy? $deploy"
echo "Should uglify? $uglify"

Usage:

./script.sh -d dev -u

# OR:

./script.sh --deploy dev --uglify

bronson ,Apr 27 at 23:22

At the risk of adding another example to ignore, here's my scheme.

Hope it's useful to someone.

while [ "$#" -gt 0 ]; do
  case "$1" in
    -n) name="$2"; shift 2;;
    -p) pidfile="$2"; shift 2;;
    -l) logfile="$2"; shift 2;;

    --name=*) name="${1#*=}"; shift 1;;
    --pidfile=*) pidfile="${1#*=}"; shift 1;;
    --logfile=*) logfile="${1#*=}"; shift 1;;
    --name|--pidfile|--logfile) echo "$1 requires an argument" >&2; exit 1;;

    -*) echo "unknown option: $1" >&2; exit 1;;
    *) handle_argument "$1"; shift 1;;
  esac
done

Robert Siemer ,Jun 6, 2016 at 19:28

I'm about 4 years late to this question, but want to give back. I used the earlier answers as a starting point to tidy up my old adhoc param parsing. I then refactored out the following template code. It handles both long and short params, using = or space separated arguments, as well as multiple short params grouped together. Finally it re-inserts any non-param arguments back into the $1,$2.. variables. I hope it's useful.
#!/usr/bin/env bash

# NOTICE: Uncomment if your script depends on bashisms.
#if [ -z "$BASH_VERSION" ]; then bash $0 $@ ; exit $? ; fi

echo "Before"
for i ; do echo - $i ; done


# Code template for parsing command line parameters using only portable shell
# code, while handling both long and short params, handling '-f file' and
# '-f=file' style param data and also capturing non-parameters to be inserted
# back into the shell positional parameters.

while [ -n "$1" ]; do
        # Copy so we can modify it (can't modify $1)
        OPT="$1"
        # Detect argument termination
        if [ x"$OPT" = x"--" ]; then
                shift
                for OPT ; do
                        REMAINS="$REMAINS \"$OPT\""
                done
                break
        fi
        # Parse current opt
        while [ x"$OPT" != x"-" ] ; do
                case "$OPT" in
                        # Handle --flag=value opts like this
                        -c=* | --config=* )
                                CONFIGFILE="${OPT#*=}"
                                shift
                                ;;
                        # and --flag value opts like this
                        -c* | --config )
                                CONFIGFILE="$2"
                                shift
                                ;;
                        -f* | --force )
                                FORCE=true
                                ;;
                        -r* | --retry )
                                RETRY=true
                                ;;
                        # Anything unknown is recorded for later
                        * )
                                REMAINS="$REMAINS \"$OPT\""
                                break
                                ;;
                esac
                # Check for multiple short options
                # NOTICE: be sure to update this pattern to match valid options
                NEXTOPT="${OPT#-[cfr]}" # try removing single short opt
                if [ x"$OPT" != x"$NEXTOPT" ] ; then
                        OPT="-$NEXTOPT"  # multiple short opts, keep going
                else
                        break  # long form, exit inner loop
                fi
        done
        # Done with that param. move to next
        shift
done
# Set the non-parameters back into the positional parameters ($1 $2 ..)
eval set -- $REMAINS


echo -e "After: \n configfile='$CONFIGFILE' \n force='$FORCE' \n retry='$RETRY' \n remains='$REMAINS'"
for i ; do echo - $i ; done

I have found writing portable argument parsing in scripts so frustrating that I have written Argbash - a FOSS code generator that can generate the argument-parsing code for your script, plus it has some nice features:

https://argbash.io

[Aug 29, 2019] shell - An example of how to use getopts in bash - Stack Overflow

The key thing to understand is that getopts just parses options. You need to shift them off as a separate operation:
shift $((OPTIND-1))
May 10, 2013 | stackoverflow.com

chepner ,May 10, 2013 at 13:42

I want to call myscript file in this way:
$ ./myscript -s 45 -p any_string

or

$ ./myscript -h >>> should display help
$ ./myscript    >>> should display help

My requirements are:

I tried so far this code:

#!/bin/bash
while getopts "h:s:" arg; do
  case $arg in
    h)
      echo "usage" 
      ;;
    s)
      strength=$OPTARG
      echo $strength
      ;;
  esac
done

But with that code I get errors. How to do it with Bash and getopt ?


#!/bin/bash

usage() { echo "Usage: $0 [-s <45|90>] [-p <string>]" 1>&2; exit 1; }

while getopts ":s:p:" o; do
    case "${o}" in
        s)
            s=${OPTARG}
            ((s == 45 || s == 90)) || usage
            ;;
        p)
            p=${OPTARG}
            ;;
        *)
            usage
            ;;
    esac
done
shift $((OPTIND-1))

if [ -z "${s}" ] || [ -z "${p}" ]; then
    usage
fi

echo "s = ${s}"
echo "p = ${p}"

Example runs:

$ ./myscript.sh
Usage: ./myscript.sh [-s <45|90>] [-p <string>]

$ ./myscript.sh -h
Usage: ./myscript.sh [-s <45|90>] [-p <string>]

$ ./myscript.sh -s "" -p ""
Usage: ./myscript.sh [-s <45|90>] [-p <string>]

$ ./myscript.sh -s 10 -p foo
Usage: ./myscript.sh [-s <45|90>] [-p <string>]

$ ./myscript.sh -s 45 -p foo
s = 45
p = foo

$ ./myscript.sh -s 90 -p bar
s = 90
p = bar
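
Note that with the optstring ":s:p:" above, -h is not a recognized option, so it falls into the *) branch and prints the usage text anyway. A minimal variant (a sketch, not part of the original answer) that handles -h explicitly looks like this:

#!/bin/bash

usage() { echo "Usage: $0 [-s <45|90>] [-p <string>]" 1>&2; exit 1; }

while getopts ":hs:p:" o; do
    case "${o}" in
        h)  # explicit help flag
            usage
            ;;
        s)
            s=${OPTARG}
            ((s == 45 || s == 90)) || usage
            ;;
        p)
            p=${OPTARG}
            ;;
        *)  # unknown option or missing option argument
            usage
            ;;
    esac
done
shift $((OPTIND-1))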

[Aug 28, 2019] How to Replace Spaces in Filenames with Underscores on the Linux Shell

You probably would be better off with -nv options for mv
Aug 28, 2019 | vitux.com
$ for file in *; do mv "$file" `echo $file | tr ' ' '_'` ; done
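
Following the note above about mv -nv, a slightly safer variant (a sketch, assuming bash) quotes the names, only touches files whose names actually contain a space, never overwrites an existing target (-n) and reports what it renamed (-v):

for file in *\ *; do
    [ -e "$file" ] || continue            # no matching files: the glob stays literal, skip it
    mv -nv -- "$file" "${file// /_}"      # bash parameter expansion replaces all spaces
done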

[Aug 28, 2019] 9 Quick 'mv' Command Practical Examples in Linux

Aug 28, 2019 | www.linuxbuzz.com

Example:5) Do not overwrite existing file at destination (mv -n)

Use '-n' option in mv command in case if we don't want to overwrite an existing file at destination,

[linuxbuzz@web ~]$ ls -l tools.txt /tmp/sysadmin/tools.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 0 Aug 24 09:59 /tmp/sysadmin/tools.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 0 Aug 24 10:10 tools.txt
[linuxbuzz@web ~]$

As we can see tools.txt is present in our current working directory and in /tmp/sysadmin, use below mv command to avoid overwriting at destination,

[linuxbuzz@web ~]$ mv -n tools.txt /tmp/sysadmin/tools.txt
[linuxbuzz@web ~]$
Example:6) Forcefully overwrite write protected file at destination (mv -f)

Use the '-f' option in the mv command to forcefully overwrite a write-protected file at the destination. Let's assume we have a file named "bands.txt" in our present working directory and in /tmp/sysadmin.

[linuxbuzz@web ~]$ ls -l bands.txt /tmp/sysadmin/bands.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 0 Aug 25 00:24 bands.txt
-r--r--r--. 1 linuxbuzz linuxbuzz 0 Aug 25 00:24 /tmp/sysadmin/bands.txt
[linuxbuzz@web ~]$

As we can see under /tmp/sysadmin, bands.txt is write protected file,

Without -f option

[linuxbuzz@web ~]$ mv bands.txt /tmp/sysadmin/bands.txt

mv: try to overwrite '/tmp/sysadmin/bands.txt', overriding mode 0444 (r--r--r--)?

To forcefully overwrite, use below mv command,

[linuxbuzz@web ~]$ mv -f bands.txt /tmp/sysadmin/bands.txt
[linuxbuzz@web ~]$
Example:7) Verbose output of mv command (mv -v)

Use '-v' option in mv command to print the verbose output, example is shown below

[linuxbuzz@web ~]$ mv -v  buzz51.txt buzz52.txt buzz53.txt buzz54.txt /tmp/sysadmin/
'buzz51.txt' -> '/tmp/sysadmin/buzz51.txt'
'buzz52.txt' -> '/tmp/sysadmin/buzz52.txt'
'buzz53.txt' -> '/tmp/sysadmin/buzz53.txt'
'buzz54.txt' -> '/tmp/sysadmin/buzz54.txt'
[linuxbuzz@web ~]$
Example:8) Create backup at destination while using mv command (mv -b)

Use '-b' option to take backup of a file at destination while performing mv command, at destination backup file will be created with tilde character appended to it, example is shown below,

[linuxbuzz@web ~]$ mv -b buzz55.txt /tmp/sysadmin/buzz55.txt
[linuxbuzz@web ~]$ ls -l /tmp/sysadmin/buzz55.txt*
-rw-rw-r--. 1 linuxbuzz linuxbuzz 0 Aug 25 00:47 /tmp/sysadmin/buzz55.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 0 Aug 25 00:37 /tmp/sysadmin/buzz55.txt~
[linuxbuzz@web ~]$
Example:9) Move file only when its newer than destination (mv -u)

There are some scenarios where we have the same file at the source and the destination and we want to move the file only when the file at the source is newer than the one at the destination; to accomplish this, use the -u option in the mv command. Example is shown below

[linuxbuzz@web ~]$ ls -l tools.txt /tmp/sysadmin/tools.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 55 Aug 25 00:55 /tmp/sysadmin/tools.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 87 Aug 25 00:57 tools.txt
[linuxbuzz@web ~]$

Execute below mv command to mv file only when its newer than destination,

[linuxbuzz@web ~]$ mv -u tools.txt /tmp/sysadmin/tools.txt
[linuxbuzz@web ~]$

That's all from this article, we have covered all important and basic examples of mv command.

Hopefully above examples will help you to learn more about mv command. Write your feedback and suggestions to us.

[Aug 28, 2019] Echo Command in Linux with Examples

Notable quotes:
"... The -e parameter is used for the interpretation of backslashes ..."
"... The -n option is used for omitting trailing newline. ..."
Aug 28, 2019 | linoxide.com

The -e parameter is used for the interpretation of backslashes

... ... ...

To create a new line after each word in a string, use the -e option together with the \n escape sequence, as shown
$ echo -e "Linux \nis \nan \nopensource \noperating \nsystem"

... ... ...

Omit echoing trailing newline

The -n option is used for omitting trailing newline. This is shown in the example below

$ echo -n "Linux is an opensource operating system"

Sample Output

Linux is an opensource operating systemjames@buster:/$

[Aug 28, 2019] How to navigate Ansible documentation Enable Sysadmin

Aug 28, 2019 | www.redhat.com

We take our first glimpse at the Ansible documentation on the official website. While Ansible can be overwhelming with so many immediate options, let's break down what is presented to us here. Putting our attention on the page's main pane, we are given five offerings from Ansible. This pane is a central location, or one-stop-shop, to maneuver through the documentation for products like Ansible Tower, Ansible Galaxy, and Ansible Lint.

We can even dive into Ansible Network for specific module documentation that extends the power and ease of Ansible automation to network administrators. The focal point of the rest of this article will be around Ansible Project, to give us a great starting point into our automation journey.


Once we click the Ansible Documentation tile under the Ansible Project section, the first action we should take is to ensure we are viewing the documentation's correct version. We can get our current version of Ansible from our control node's command line by running ansible --version . Armed with the version information provided by the output, we can select the matching version in the site's upper-left-hand corner using the drop-down menu, which by default says latest.


[Aug 27, 2019] Bash Variables - Bash Reference Manual

Aug 27, 2019 | bash.cyberciti.biz
BASH_LINENO
An array variable whose members are the line numbers in source files corresponding to each member of FUNCNAME . ${BASH_LINENO[$i]} is the line number in the source file where ${FUNCNAME[$i]} was called. The corresponding source file name is ${BASH_SOURCE[$i]} . Use LINENO to obtain the current line number.
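
A small sketch (not from the manual) showing how FUNCNAME and BASH_LINENO line up to print a call stack:

#!/usr/bin/env bash
# Walk FUNCNAME and BASH_LINENO together: BASH_LINENO[i] is the line
# from which FUNCNAME[i] was called.
print_stack() {
    local i
    for ((i = 0; i < ${#FUNCNAME[@]}; i++)); do
        echo "frame $i: ${FUNCNAME[$i]} called at line ${BASH_LINENO[$i]}"
    done
}

outer() { print_stack; }
outer   # prints frames for print_stack, outer and the main script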

[Aug 27, 2019] linux - How to show line number when executing bash script - Stack Overflow

Aug 27, 2019 | stackoverflow.com



dspjm ,Jul 23, 2013 at 7:31

I have a test script which has a lot of commands and generates lots of output. I use set -x or set -v and set -e , so the script stops when an error occurs. However, it's still rather difficult for me to locate on which line execution stopped in order to find the problem. Is there a method which can output the line number of the script before each line is executed? Or output the line number before the command trace generated by set -x ? Any method which can deal with my script line location problem would be a great help. Thanks.

Suvarna Pattayil ,Jul 28, 2017 at 17:25

You mention that you're already using -x . The variable PS4 holds the prompt that is printed before each command line is echoed when the -x option is set; it defaults to + followed by a space.

You can change PS4 to emit the LINENO (The line number in the script or shell function currently executing).

For example, if your script reads:

$ cat script
foo=10
echo ${foo}
echo $((2 + 2))

Executing it thus would print line numbers:

$ PS4='Line ${LINENO}: ' bash -x script
Line 1: foo=10
Line 2: echo 10
10
Line 3: echo 4
4

http://wiki.bash-hackers.org/scripting/debuggingtips gives the ultimate PS4 that would output everything you will possibly need for tracing:

export PS4='+(${BASH_SOURCE}:${LINENO}): ${FUNCNAME[0]:+${FUNCNAME[0]}(): }'

Deqing ,Jul 23, 2013 at 8:16

In Bash, $LINENO contains the line number of the script line currently being executed.

If you need to know the line number where the function was called, try $BASH_LINENO . Note that this variable is an array.

For example:

#!/bin/bash       

function log() {
    echo "LINENO: ${LINENO}"
    echo "BASH_LINENO: ${BASH_LINENO[*]}"
}

function foo() {
    log "$@"
}

foo "$@"

See here for details of Bash variables.

Eliran Malka ,Apr 25, 2017 at 10:14

Simple (but powerful) solution: place echo statements around the code you think causes the problem and move the echo line by line until the messages no longer appear on screen, because the script has stopped due to an earlier error.

Even more powerful solution: install bashdb, the bash debugger, and debug the script line by line.

kklepper ,Apr 2, 2018 at 22:44

Workaround for shells without LINENO

In a fairly sophisticated script I wouldn't like to see all line numbers; rather I would like to be in control of the output.

Define a function

echo_line_no () {
    grep -n "$1" $0 |  sed "s/echo_line_no//" 
    # grep the line(s) containing input $1 with line numbers
    # replace the function name with nothing 
} # echo_line_no

Use it with quotes like

echo_line_no "this is a simple comment with a line number"

Output is

16   "this is a simple comment with a line number"

if the number of this line in the source file is 16.

This basically answers the question How to show line number when executing bash script for users of ash or other shells without LINENO .

Anything more to add?

Sure. Why do you need this? How do you work with this? What can you do with this? Is this simple approach really sufficient or useful? Why do you want to tinker with this at all?

Want to know more? Read reflections on debugging

[Aug 27, 2019] Gogo - Create Shortcuts to Long and Complicated Paths in Linux

Looks like second rate utility. No new worthwhile ideas. Not recommended.
Aug 27, 2019 | www.tecmint.com
~/.config/gogo/gogo.conf file (which should be auto created if it doesn't exist) and has the following syntax.
# Comments are lines that start from '#' character.
default = ~/something
alias = /desired/path
alias2 = /desired/path with space
alias3 = "/this/also/works"
zażółć = "unicode/is/also/supported/zażółć gęślą jaźń"

If you run gogo without any arguments, it will go to the directory specified in default; this alias is always available, even if it's not in the configuration file, and points to the $HOME directory.

To display the current aliases, use the -l switch. From the following screenshot, you can see that default points to /home/tecmint, which is user tecmint's home directory on the system.

$ gogo -l

List Gogo Aliases

Below is an example of running gogo without any arguments.

$ cd Documents/Phone-Backup/Linux-Docs/
$ gogo
$ pwd

Running Gogo Without Options

To create a shortcut to a long path, move into the directory you want and use the -a flag to add an alias for that directory in gogo , as shown.

$ cd Documents/Phone-Backup/Linux-Docs/Ubuntu/
$ gogo -a Ubuntu
$ gogo
$ gogo -l
$ gogo -a Ubuntu
$ pwd

Create Long Directory Shortcut

You can also create aliases for connecting directly into directories on remote Linux servers. To do this, simply add the following lines to the gogo configuration file, which can be opened with the -e flag; this uses the editor specified in the $EDITOR environment variable.

$ gogo -e

Once the configuration file opens, add the following lines to it.

sshroot = ssh://root@192.168.56.5:/bin/bash  /root/
sshtdocs = ssh://tecmint@server3  ~/tecmint/docs/
  1. sitaram says: August 25, 2019 at 7:46 am

    The bulk of what this tool does can be replaced with a shell function that does ` cd $(grep -w ^$1 ~/.config/gogo.conf | cut -f2 -d' ') `, where `$1` is the argument supplied to the function.

    If you've already installed fzf (and you really should), then you can get a far better experience than even zsh's excellent "completion" facilities. I use something like ` cd $(fzf -1 +m -q "$1" < ~/.cache/to) ` (My equivalent of gogo.conf is ` ~/.cache/to `).
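
A minimal sketch of the shell function the commenter describes, assuming the "alias = /path" format of ~/.config/gogo/gogo.conf shown above (the function name to is arbitrary):

to() {
    # Look up the alias in gogo.conf, strip the "= " part and any quotes, then cd.
    local dest
    dest=$(grep -w "^$1" ~/.config/gogo/gogo.conf | cut -d'=' -f2- | sed 's/^ *//; s/"//g')
    if [ -n "$dest" ]; then
        cd "${dest/#\~/$HOME}" || return   # expand a leading ~ to $HOME
    else
        echo "to: no such alias: $1" >&2
        return 1
    fi
}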

[Aug 26, 2019] Linux and Unix exit code tutorial with examples by George Ornbo

Aug 07, 2016 | shapeshed.com
Tutorial on using exit codes from Linux or UNIX commands. Examples of how to get the exit code of a command, how to set the exit code and how to suppress exit codes.

Estimated reading time: 3 minutes


UNIX exit code

What is an exit code in the UNIX or Linux shell?

An exit code, or sometimes known as a return code, is the code returned to a parent process by an executable. On POSIX systems the standard exit code is 0 for success and any number from 1 to 255 for anything else.

Exit codes can be interpreted by scripts to adapt in the event of successes or failures. If an exit code is not set, the exit code will be the exit code of the last run command.

How to get the exit code of a command

To get the exit code of a command type echo $? at the command prompt. In the following example a file is printed to the terminal using the cat command.

cat file.txt
hello world
echo $?
0

The command was successful. The file exists and there are no errors in reading the file or writing it to the terminal. The exit code is therefore 0 .

In the following example the file does not exist.

cat doesnotexist.txt
cat: doesnotexist.txt: No such file or directory
echo $?
1

The exit code is 1 as the operation was not successful.

How to use exit codes in scripts

To use exit codes in scripts an if statement can be used to see if an operation was successful.

#!/bin/bash

cat file.txt 

if [ $? -eq 0 ]
then
  echo "The script ran ok"
  exit 0
else
  echo "The script failed" >&2
  exit 1
fi

If the command was successful the exit code will be 0 and 'The script ran ok' will be printed to the terminal; otherwise the exit code will be 1 and 'The script failed' will be printed to standard error.
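
The same check can be written without referring to $? at all, because if tests the command's exit status directly; a minimal equivalent sketch:

#!/bin/bash

# Equivalent logic, testing the command itself instead of $?
if cat file.txt; then
    echo "The script ran ok"
    exit 0
else
    echo "The script failed" >&2
    exit 1
fi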

How to set an exit code

To set an exit code in a script use exit 0 where 0 is the number you want to return. In the following example a shell script exits with a 1 . This file is saved as exit.sh .

#!/bin/bash

exit 1

Executing this script shows that the exit code is correctly set.

bash exit.sh
echo $?
1
What exit code should I use?

The Linux Documentation Project has a list of reserved codes that also offers advice on what code to use for specific scenarios. These are the standard error codes in Linux or UNIX.

How to suppress exit statuses

Sometimes there may be a requirement to suppress an exit status. It may be that a command is being run within another script and that anything other than a 0 status is undesirable.

In the following example a file is printed to the terminal using cat . This file does not exist so will cause an exit status of 1 .

To suppress the error message any output to standard error is sent to /dev/null using 2>/dev/null .

If the cat command fails an OR operation can be used to provide a fallback - cat file.txt || exit 0 . In this case an exit code of 0 is returned even if there is an error.

Combining both the suppression of error output and the OR operation the following script returns a status code of 0 with no output even though the file does not exist.

#!/bin/bash

cat 'doesnotexist.txt' 2>/dev/null || exit 0

[Aug 26, 2019] Exit Codes - Shell Scripting Tutorial

Aug 26, 2019 | www.shellscript.sh

Exit codes are a number between 0 and 255, which is returned by any Unix command when it returns control to its parent process.
Other numbers can be used, but these are treated modulo 256, so exit -10 is equivalent to exit 246 , and exit 257 is equivalent to exit 1 .

These can be used within a shell script to change the flow of execution depending on the success or failure of commands executed. This was briefly introduced in Variables - Part II . Here we shall look in more detail in the available interpretations of exit codes.

Success is traditionally represented with exit 0 ; failure is normally indicated with a non-zero exit-code. This value can indicate different reasons for failure.
For example, GNU grep returns 0 on success, 1 if no matches were found, and 2 for other errors (syntax errors, non-existent input files, etc).
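
As a small sketch (not part of the original tutorial), a script can distinguish grep's three outcomes like this:

#!/bin/sh
# 0 = match found, 1 = no match, 2 (or higher) = grep itself failed
grep -q "^root:" /etc/passwd
case $? in
    0) echo "match found" ;;
    1) echo "no match" ;;
    *) echo "grep failed (bad pattern, unreadable file, ...)" >&2 ;;
esac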

We shall look at three different methods for checking error status, and discuss the pros and cons of each approach.

Firstly, the simple approach:


#!/bin/sh
# First attempt at checking return codes
USERNAME=`grep "^${1}:" /etc/passwd|cut -d":" -f1`
if [ "$?" -ne "0" ]; then
  echo "Sorry, cannot find user ${1} in /etc/passwd"
  exit 1
fi
NAME=`grep "^${1}:" /etc/passwd|cut -d":" -f5`
HOMEDIR=`grep "^${1}:" /etc/passwd|cut -d":" -f6`

echo "USERNAME: $USERNAME"
echo "NAME: $NAME"
echo "HOMEDIR: $HOMEDIR"

This script works fine if you supply a valid username in /etc/passwd . However, if you enter an invalid username, it does not do what you might at first expect - it keeps running, and just shows:
USERNAME: 
NAME: 
HOMEDIR:
Why is this? As mentioned, the $? variable is set to the return code of the last executed command . In this case, that is cut . cut had no problems which it feels like reporting - as far as I can tell from testing it, and reading the documentation, cut returns zero whatever happens! It was fed an empty string, and did its job - returned the first field of its input, which just happened to be the empty string.

So what do we do? If we have an error here, grep will report it, not cut . Therefore, we have to test grep 's return code, not cut 's.


#!/bin/sh
# Second attempt at checking return codes
grep "^${1}:" /etc/passwd > /dev/null 2>&1
if [ "$?" -ne "0" ]; then
  echo "Sorry, cannot find user ${1} in /etc/passwd"
  exit 1
fi
USERNAME=`grep "^${1}:" /etc/passwd|cut -d":" -f1`
NAME=`grep "^${1}:" /etc/passwd|cut -d":" -f5`
HOMEDIR=`grep "^${1}:" /etc/passwd|cut -d":" -f6`

echo "USERNAME: $USERNAME"
echo "NAME: $NAME"
echo "HOMEDIR: $HOMEDIR"

This fixes the problem for us, though at the expense of slightly longer code.
That is the basic way which textbooks might show you, but it is far from being all there is to know about error-checking in shell scripts. This method may not be the most suitable to your particular command-sequence, or may be unmaintainable. Below, we shall investigate two alternative approaches.

As a second approach, we can tidy this somewhat by putting the test into a separate function, instead of littering the code with lots of 4-line tests:


#!/bin/sh
# A Tidier approach

check_errs()
{
  # Function. Parameter 1 is the return code
  # Para. 2 is text to display on failure.
  if [ "${1}" -ne "0" ]; then
    echo "ERROR # ${1} : ${2}"
    # as a bonus, make our script exit with the right error code.
    exit ${1}
  fi
}

### main script starts here ###

grep "^${1}:" /etc/passwd > /dev/null 2>&1
check_errs $? "User ${1} not found in /etc/passwd"
USERNAME=`grep "^${1}:" /etc/passwd|cut -d":" -f1`
check_errs $? "Cut returned an error"
echo "USERNAME: $USERNAME"
check_errs $? "echo returned an error - very strange!"

This allows us to test for errors 3 times, with customised error messages, without having to write 3 individual tests. By writing the test routine once, we can call it as many times as we wish, creating a more intelligent script at very little expense to the programmer. Perl programmers will recognise this as being similar to the die command in Perl.

As a third approach, we shall look at a simpler and cruder method. I tend to use this for building Linux kernels - simple automations which, if they go well, should just get on with it, but when things go wrong, tend to require the operator to do something intelligent (ie, that which a script cannot do!):


#!/bin/sh
cd /usr/src/linux && \
make dep && make bzImage && make modules && make modules_install && \
cp arch/i386/boot/bzImage /boot/my-new-kernel && cp System.map /boot && \
echo "Your new kernel awaits, m'lord."
This script runs through the various tasks involved in building a Linux kernel (which can take quite a while), and uses the && operator to check for success. To do this with if would involve:
#!/bin/sh
cd /usr/src/linux
if [ "$?" -eq "0" ]; then
  make dep 
    if [ "$?" -eq "0" ]; then
      make bzImage 
      if [ "$?" -eq "0" ]; then
        make modules 
        if [ "$?" -eq "0" ]; then
          make modules_install
          if [ "$?" -eq "0" ]; then
            cp arch/i386/boot/bzImage /boot/my-new-kernel
            if [ "$?" -eq "0" ]; then
              cp System.map /boot/
              if [ "$?" -eq "0" ]; then
                echo "Your new kernel awaits, m'lord."
              fi
            fi
          fi
        fi
      fi
    fi
  fi
fi

... which I, personally, find pretty difficult to follow.


The && and || operators are the shell's equivalent of AND and OR tests. These can be thrown together as above, or:


#!/bin/sh
cp /foo /bar && echo Success || echo Failed

This code will either echo

Success

or

Failed

depending on whether or not the cp command was successful. Look carefully at this; the construct is

command && command-to-execute-on-success || command-to-execute-on-failure

Only one command can be in each part. This method is handy for simple success / fail scenarios, but if you want to check on the status of the echo commands themselves, it is easy to quickly become confused about which && and || applies to which command. It is also very difficult to maintain. Therefore this construct is only recommended for simple sequencing of commands.

In earlier versions, I had suggested that you can use a subshell to execute multiple commands depending on whether the cp command succeeded or failed:

cp /foo /bar && ( echo Success ; echo Success part II; ) || ( echo Failed ; echo Failed part II )

But in fact, Marcel found that this does not work properly. The syntax for a subshell is:

( command1 ; command2; command3 )

The return code of the subshell is the return code of the final command ( command3 in this example). That return code will affect the overall command. So the output of this script:

cp /foo /bar && ( echo Success ; echo Success part II; /bin/false ) || ( echo Failed ; echo Failed part II )

Is that it runs the Success part (because cp succeeded), and then, because /bin/false returns failure, it also executes the Failure part:

Success
Success part II
Failed
Failed part II

So if you need to execute multiple commands as a result of the status of some other condition, it is better (and much clearer) to use the standard if , then , else syntax.
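
For comparison, a minimal if/then/else version of the earlier cp example (a sketch) runs several commands per branch without any ambiguity:

#!/bin/sh
if cp /foo /bar; then
    echo "Success"
    echo "Success part II"
else
    echo "Failed"
    echo "Failed part II"
fi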

[Aug 26, 2019] linux - Avoiding accidental 'rm' disasters - Super User

Aug 26, 2019 | superuser.com


Mr_Spock ,May 26, 2013 at 11:30

Today, using sudo -s , I wanted to rm -R ./lib/ , but I actually rm -R /lib/ .

I had to reinstall my OS (Mint 15) and re-download and re-configure all my packages. Not fun.

How can I avoid similar mistakes in the future?

Vittorio Romeo ,May 26, 2013 at 11:55

First of all, stop executing everything as root . You never really need to do this. Only run individual commands with sudo if you need to. If a normal command doesn't work without sudo, just call sudo !! to execute it again.

If you're paranoid about rm , mv and other operations while running as root, you can add the following aliases to your shell's configuration file:

[ $UID = 0 ] && \
  alias rm='rm -i' && \
  alias mv='mv -i' && \
  alias cp='cp -i'

These will all prompt you for confirmation ( -i ) before removing a file or overwriting an existing file, respectively, but only if you're root (the user with ID 0).

Don't get too used to that though. If you ever find yourself working on a system that doesn't prompt you for everything, you might end up deleting stuff without noticing it. The best way to avoid mistakes is to never run as root and think about what exactly you're doing when you use sudo .

[Aug 26, 2019] bash - How to prevent rm from reporting that a file was not found

Aug 26, 2019 | stackoverflow.com



pizza ,Apr 20, 2012 at 21:29

I am using rm within a BASH script to delete many files. Sometimes the files are not present, so it reports many errors. I do not need this message. I have searched the man page for a command to make rm quiet, but the only option I found is -f , which from the description, "ignore nonexistent files, never prompt", seems to be the right choice, but the name does not seem to fit, so I am concerned it might have unintended consequences.

Keith Thompson ,Dec 19, 2018 at 13:05

The main use of -f is to force the removal of files that would not be removed using rm by itself (as a special case, it "removes" non-existent files, thus suppressing the error message).

You can also just redirect the error message using

$ rm file.txt 2> /dev/null

(or your operating system's equivalent). You can check the value of $? immediately after calling rm to see if a file was actually removed or not.

vimdude ,May 28, 2014 at 18:10

Yes, -f is the most suitable option for this.

tripleee ,Jan 11 at 4:50

-f is the correct flag, but for the test operator, not rm
[ -f "$THEFILE" ] && rm "$THEFILE"

this ensures that the file exists and is a regular file (not a directory, device node etc...)

mahemoff ,Jan 11 at 4:41

\rm -f file will never report not found.

Idelic ,Apr 20, 2012 at 16:51

As far as rm -f doing "anything else", it does force ( -f is shorthand for --force ) silent removal in situations where rm would otherwise ask you for confirmation. For example, when trying to remove a file not writable by you from a directory that is writable by you.

Keith Thompson ,May 28, 2014 at 18:09

I had the same issue with csh. The only solution I had was to create a dummy file that matched the pattern before running "rm" in my script.

[Aug 26, 2019] shell - rm -rf return codes

Aug 26, 2019 | superuser.com



SheetJS ,Aug 15, 2013 at 2:50

Can anyone let me know the possible return codes for the command rm -rf other than zero, i.e., the possible return codes for failure cases? I want to know a more detailed reason for the failure of the command rather than just that the command failed (returned non-zero).

Adrian Frühwirth ,Aug 14, 2013 at 7:00

To see the return code, you can use echo $? in bash.

To see the actual meaning, some platforms (like Debian Linux) have the perror binary available, which can be used as follows:

$ rm -rf something/; perror $?
rm: cannot remove `something/': Permission denied
OS error code   1:  Operation not permitted

rm -rf automatically suppresses most errors; the -f flag does this intentionally. The most likely error you will see is 1 (Operation not permitted), which will happen if you don't have permission to remove the file.

Adrian Frühwirth ,Aug 14, 2013 at 7:21

grabbed coreutils from git....

looking at exit we see...

openfly@linux-host:~/coreutils/src $ cat rm.c | grep -i exit
  if (status != EXIT_SUCCESS)
  exit (status);
  /* Since this program exits immediately after calling 'rm', rm need not
  atexit (close_stdin);
          usage (EXIT_FAILURE);
        exit (EXIT_SUCCESS);
          usage (EXIT_FAILURE);
        error (EXIT_FAILURE, errno, _("failed to get attributes of %s"),
        exit (EXIT_SUCCESS);
  exit (status == RM_ERROR ? EXIT_FAILURE : EXIT_SUCCESS);

Now looking at the status variable....

openfly@linux-host:~/coreutils/src $ cat rm.c | grep -i status
usage (int status)
  if (status != EXIT_SUCCESS)
  exit (status);
  enum RM_status status = rm (file, &x);
  assert (VALID_STATUS (status));
  exit (status == RM_ERROR ? EXIT_FAILURE : EXIT_SUCCESS);

looks like there isn't much going on there with the exit status.

I see EXIT_FAILURE and EXIT_SUCCESS and not anything else.

so basically 0 and 1 / -1

To see specific exit() syscalls and how they occur in a process flow try this

openfly@linux-host:~/ $ strace rm -rf $whatever

fairly simple.

ref:

http://www.unix.com/man-page/Linux/EXIT_FAILURE/exit/

[Aug 22, 2019] How To Display Bash History Without Line Numbers - OSTechNix

Aug 22, 2019 | www.ostechnix.com

Method 2 – Using history command

We can use the history command's write option to print the history without numbers like below.

$ history -w /dev/stdout
Method 3 – Using history and cut commands

One such way is to use history and cut commands like below.

$ history | cut -c 8-

[Aug 22, 2019] Why Micro Data Centers Deliver Good Things in Small Packages by Calvin Hennick

Aug 22, 2019 | solutions.cdw.com

Enterprises are deploying self-contained micro data centers to power computing at the network edge.

Calvin Hennick is a freelance journalist who specializes in business and technology writing. He is a contributor to the CDW family of technology magazines.

The location for data processing has changed significantly throughout the history of computing. During the mainframe era, data was processed centrally, but client/server architectures later decentralized computing. In recent years, cloud computing centralized many processing workloads, but digital transformation and the Internet of Things are poised to move computing to new places, such as the network edge .

"There's a big transformation happening," says Thomas Humphrey, segment director for edge computing at APC . "Technologies like IoT have started to require that some local computing and storage happen out in that distributed IT architecture."

For example, some IoT systems require processing of data at remote locations rather than a centralized data center , such as at a retail store instead of a corporate headquarters.

To meet regulatory requirements and business needs, IoT solutions often need low latency, high bandwidth, robust security and superior reliability . To meet these demands, many organizations are deploying micro data centers: self-contained solutions that provide not only essential infrastructure, but also physical security, power and cooling and remote management capabilities.

"Digital transformation happens at the network edge, and edge computing will happen inside micro data centers ," says Bruce A. Taylor, executive vice president at Datacenter Dynamics . "This will probably be one of the fastest growing segments -- if not the fastest growing segment -- in data centers for the foreseeable future."

What Is a Micro Data Center?

Delivering the IT capabilities needed for edge computing represents a significant challenge for many organizations, which need manageable and secure solutions that can be deployed easily, consistently and close to the source of computing . Vendors such as APC have begun to create comprehensive solutions that provide these necessary capabilities in a single, standardized package.

"From our perspective at APC, the micro data center was a response to what was happening in the market," says Humphrey. "We were seeing that enterprises needed more robust solutions at the edge."

Most micro data center solutions rely on hyperconverged infrastructure to integrate computing, networking and storage technologies within a compact footprint . A typical micro data center also incorporates physical infrastructure (including racks), fire suppression, power, cooling and remote management capabilities. In effect, the micro data center represents a sweet spot between traditional IT closets and larger modular data centers -- giving organizations the ability to deploy professional, powerful IT resources practically anywhere .

Standardized Deployments Across the Country

Having robust IT resources at the network edge helps to improve reliability and reduce latency, both of which are becoming more and more important as analytics programs require that data from IoT deployments be processed in real time .

"There's always been edge computing," says Taylor. "What's new is the need to process hundreds of thousands of data points for analytics at once."

Standardization, redundant deployment and remote management are also attractive features, especially for large organizations that may need to deploy tens, hundreds or even thousands of micro data centers. "We spoke to customers who said, 'I've got to roll out and install 3,500 of these around the country,'" says Humphrey. "And many of these companies don't have IT staff at all of these sites." To address this scenario, APC designed standardized, plug-and-play micro data centers that can be rolled out seamlessly. Additionally, remote management capabilities allow central IT departments to monitor and troubleshoot the edge infrastructure without costly and time-intensive site visits.

In part because micro data centers operate in far-flung environments, security is of paramount concern. The self-contained nature of micro data centers ensures that only authorized personnel will have access to infrastructure equipment , and security tools such as video surveillance provide organizations with forensic evidence in the event that someone attempts to infiltrate the infrastructure.

How Micro Data Centers Can Help in Retail, Healthcare

Micro data centers make business sense for any organization that needs secure IT infrastructure at the network edge. But the solution is particularly appealing to organizations in fields such as retail, healthcare and finance , where IT environments are widely distributed and processing speeds are often a priority.

In retail, for example, edge computing will become more important as stores find success with IoT technologies such as mobile beacons, interactive mirrors and real-time tools for customer experience, behavior monitoring and marketing .

"It will be leading-edge companies driving micro data center adoption, but that doesn't necessarily mean they'll be technology companies," says Taylor. "A micro data center can power real-time analytics for inventory control and dynamic pricing in a supermarket."

In healthcare, digital transformation is beginning to touch processes and systems ranging from medication carts to patient records, and data often needs to be available locally; for example, in case of a data center outage during surgery. In finance, the real-time transmission of data can have immediate and significant financial consequences. And in both of these fields, regulations governing data privacy make the monitoring and security features of micro data centers even more important.

Micro data centers also have enormous potential to power smart city initiatives and to give energy companies a cost-effective way of deploying resources in remote locations , among other use cases.

"The proliferation of edge computing will be greater than anything we've seen in the past," Taylor says. "I almost can't think of a field where this won't matter."

Learn more about how solutions and services from CDW and APC can help your organization overcome its data center challenges.

Micro Data Centers Versus IT Closets

Think the micro data center is just a glorified update on the traditional IT closet? Think again.

"There are demonstrable differences," says Bruce A. Taylor, executive vice president at Datacenter Dynamics. "With micro data centers, there's a tremendous amount of computing capacity in a very small, contained space, and we just didn't have that capability previously ."

APC identifies three key differences between IT closets and micro data centers:

Difference #1: Uptime Expectations. APC notes that, of the nearly 3 million IT closets in the U.S., over 70 percent report outages directly related to human error. In an unprotected IT closet, problems can result from something as preventable as cleaning staff unwittingly disconnecting a cable. Micro data centers, by contrast, utilize remote monitoring, video surveillance and sensors to reduce downtime related to human error.

Difference #2: Cooling Configurations. The cooling of IT wiring closets is often approached both reactively and haphazardly, resulting in premature equipment failure. Micro data centers are specifically designed to assure cooling compatibility with anticipated loads.

Difference #3: Power Infrastructure. Unlike many IT closets, micro data centers incorporate uninterruptible power supplies, ensuring that infrastructure equipment has the power it needs to help avoid downtime.

[Aug 20, 2019] Is it possible to insert separator in midnight commander menu?

Jun 07, 2010 | superuser.com


okutane ,Jun 7, 2010 at 3:36

I want to insert some items into mc menu (which is opened by F2) grouped together. Is it possible to insert some sort of separator before them or put them into some submenu?
Probably not.
The format of the menu file is very simple. Lines that start with anything but
space or tab are considered entries for the menu (in order to be able to use
it like a hot key, the first character should be a letter). All the lines that
start with a space or a tab are the commands that will be executed when the
entry is selected.

But MC allows you to make multiple menu entries with same shortcut and title, so you can make a menu entry that looks like separator and does nothing, like:

a hello
  echo world
- --------
b world
  echo hello
- --------
c superuser
  ls /

This will look like:

[Aug 20, 2019] Midnight Commander, using date in User menu

Dec 31, 2013 | unix.stackexchange.com

user2013619 ,Dec 31, 2013 at 0:43

I would like to use MC (midnight commander) to compress the selected dir with date in its name, e.g: dirname_20131231.tar.gz

The command in the User menu is :

tar -czf dirname_`date '+%Y%m%d'`.tar.gz %d

The archive is missing because %m and %d have another meaning in MC. I made an alias for the date, but it also doesn't work.

Does anybody solved this problem ever?

John1024 ,Dec 31, 2013 at 1:06

To escape the percent signs, double them:
tar -czf dirname_$(date '+%%Y%%m%%d').tar.gz %d

The above would compress the current directory (%d) to a file also in the current directory. If you want to compress the directory pointed to by the cursor rather than the current directory, use %f instead:

tar -czf %f_$(date '+%%Y%%m%%d').tar.gz %f

mc handles escaping of special characters so there is no need to put %f in quotes.

By the way, midnight commander's special treatment of percent signs occurs not just in the user menu file but also at the command line. This is an issue when using shell commands with constructs like ${var%.c} . At the command line, the same as in the user menu file, percent signs can be escaped by doubling them.
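
Putting this together, a hypothetical entry for the F2 user menu (edited via F9, Command, Edit menu file) that archives the entry under the cursor with a date stamp might look like the following; the hotkey letter and description are arbitrary:

z       Archive the entry under the cursor with a date stamp
        tar -czf %f_$(date '+%%Y%%m%%d').tar.gz %f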

[Aug 20, 2019] How to exclude file when using scp command recursively

Aug 12, 2019 | www.cyberciti.biz

I need to copy all the *.c files from a local laptop named hostA to hostB, including all directories. I am using the following scp command but do not know how to exclude specific files (such as *.out):

$ scp -r ~/projects/ user@hostB:/home/delta/projects/

How do I tell the scp command to exclude a particular file or directory at the Linux/Unix command line? One can use the scp command to securely copy files between hosts on a network. It uses ssh for data transfer and authentication purposes. Typical scp command syntax is as follows:

scp file1 user@host:/path/to/dest/
scp -r /path/to/source/ user@host:/path/to/dest/
scp [options] /dir/to/source/ user@host:/dir/to/dest/

Scp exclude files

I don't think you can filter or exclude files when using the scp command. However, there is a great workaround to exclude files and copy them securely using ssh. This page explains how to filter or exclude files when using scp to copy a directory recursively.

How to use rsync command to exclude files

The syntax is:

rsync -av -e ssh --exclude='*.out' /path/to/source/ user@hostB:/path/to/dest/

Where,

  1. -a : Recurse into directories i.e. copy all files and subdirectories. Also, turn on archive mode and all other options (-rlptgoD)
  2. -v : Verbose output
  3. -e ssh : Use ssh for remote shell so everything gets encrypted
  4. --exclude='*.out' : exclude files matching PATTERN e.g. *.out or *.c and so on.
Example of rsync command

In this example, copy all files recursively from the ~/virt/ directory but exclude all *.new files:
$ rsync -av -e ssh --exclude='*.new' ~/virt/ root@centos7:/tmp
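
Multiple patterns can be excluded by repeating --exclude, or collected in a file passed via --exclude-from. Applied to the original question it might look like this (a sketch using the host and paths from the question; *.tmp is just an illustrative second pattern):

$ rsync -av -e ssh --exclude='*.out' --exclude='*.tmp' ~/projects/ user@hostB:/home/delta/projects/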

[Aug 19, 2019] Moreutils - A Collection Of More Useful Unix Utilities - OSTechNix

Parallel is a really useful utility. RPM is installable from EPEL.
Aug 19, 2019 | www.ostechnix.com

... ... ...

On RHEL , CentOS , Scientific Linux :
$ sudo yum install epel-release
$ sudo yum install moreutils

[Aug 19, 2019] mc - Is there are any documentation about user-defined menu in midnight-commander - Unix Linux Stack Exchange

Aug 19, 2019 | unix.stackexchange.com



login ,Jun 11, 2014 at 13:13

I'd like to create my own user-defined menu for mc ( menu file). I see some lines like
+ t r & ! t t

or

+ t t

What does it mean?

goldilocks ,Jun 11, 2014 at 13:35

It is documented in the help, the node is "Edit Menu File" under "Command Menu"; if you scroll down you should find "Addition Conditions":

If the condition begins with '+' (or '+?') instead of '=' (or '=?') it is an addition condition. If the condition is true the menu entry will be included in the menu. If the condition is false the menu entry will not be included in the menu.

This is preceded by "Default conditions" (the = condition), which determine which entry will be highlighted as the default choice when the menu appears. Anyway, by way of example:

+ t r & ! t t

t r means if this is a regular file ("t(ype) r"), and ! t t means if the file has not been tagged in the interface.

Jarek

On top of what has been written above, this man page can also be browsed on the Internet, e.g.: https://www.systutorials.com/docs/linux/man/1-mc/

Search for "Menu File Edit" .

Best regards, Jarek

[Aug 14, 2019] bash - PID background process - Unix Linux Stack Exchange

Aug 14, 2019 | unix.stackexchange.com



Raul ,Nov 27, 2016 at 18:21

As I understand pipes and commands, bash takes each command, spawns a process for each one and connects stdout of the previous one with the stdin of the next one.

For example, in "ls -lsa | grep feb", bash will create two processes, and connect the output of "ls -lsa" to the input of "grep feb".

When you execute a background command like "sleep 30 &" in bash, you get the pid of the background process running your command. Surprisingly for me, when I wrote "ls -lsa | grep feb &" bash returned only one PID.

How should this be interpreted? A process runs both "ls -lsa" and "grep feb"? Several process are created but I only get the pid of one of them?

Raul ,Nov 27, 2016 at 19:21

Spawns 2 processes. The & displays the PID of the second process. Example below.
$ echo $$
13358
$ sleep 100 | sleep 200 &
[1] 13405
$ ps -ef|grep 13358
ec2-user 13358 13357  0 19:02 pts/0    00:00:00 -bash
ec2-user 13404 13358  0 19:04 pts/0    00:00:00 sleep 100
ec2-user 13405 13358  0 19:04 pts/0    00:00:00 sleep 200
ec2-user 13406 13358  0 19:04 pts/0    00:00:00 ps -ef
ec2-user 13407 13358  0 19:04 pts/0    00:00:00 grep --color=auto 13358
$

> ,

When you run a job in the background, bash prints the process ID of its subprocess, the one that runs the command in that job. If that job happens to create more subprocesses, that's none of the parent shell's business.

When the background job is a pipeline (i.e. the command is of the form something1 | something2 & , and not e.g. { something1 | something2; } & ), there's an optimization which is strongly suggested by POSIX and performed by most shells including bash: each of the elements of the pipeline are executed directly as subprocesses of the original shell. What POSIX mandates is that the variable $! is set to the last command in the pipeline in this case. In most shells, that last command is a subprocess of the original process, and so are the other commands in the pipeline.

When you run ls -lsa | grep feb , there are three processes involved: the one that runs the left-hand side of the pipe (a subshell that finishes setting up the pipe then executes ls ), the one that runs the right-hand side of the pipe (a subshell that finishes setting up the pipe then executes grep ), and the original process that waits for the pipe to finish.

You can watch what happens by tracing the processes:

$ strace -f -e clone,wait4,pipe,execve,setpgid bash --norc
execve("/usr/local/bin/bash", ["bash", "--norc"], [/* 82 vars */]) = 0
setpgid(0, 24084)                       = 0
bash-4.3$ sleep 10 | sleep 20 &

Note how the second sleep is reported and stored as $! , but the process group ID is the first sleep . Dash has the same oddity, ksh and mksh don't.

[Aug 14, 2019] unix - How to get PID of process by specifying process name and store it in a variable to use further - Stack Overflow

Aug 14, 2019 | stackoverflow.com

Nidhi ,Nov 28, 2014 at 0:54

pids=$(pgrep <name>)

will get you the pids of all processes with the given name. To kill them all, use

kill -9 $pids

To refrain from using a variable and directly kill all processes with a given name issue

pkill -9 <name>

panticz.de ,Nov 11, 2016 at 10:11

On a single line...
pgrep -f process_name | xargs kill -9

flazzarini ,Jun 13, 2014 at 9:54

Another possibility would be to use pidof it usually comes with most distributions. It will return you the PID of a given process by using it's name.
pidof process_name

This way you could store that information in a variable and execute kill -9 on it.

#!/bin/bash
pid=`pidof process_name`
kill -9 $pid

Pawel K ,Dec 20, 2017 at 10:27

Use grep [n]ame so that the grep process does not match itself; that removes the need for the extra grep -v grep. Also, using xargs the way it is shown above is risky for running whatever is piped to it; you have to use -i (interactive mode) or you may have issues with the command.

Instead of ps axf | grep name | grep -v grep | awk '{print "kill -9 " $1}', isn't ps aux | grep [n]ame | awk '{print "kill -9 " $2}' better?

[Aug 14, 2019] linux - How to get PID of background process - Stack Overflow

Highly recommended!
Aug 14, 2019 | stackoverflow.com



pixelbeat ,Mar 20, 2013 at 9:11

I start a background process from my shell script, and I would like to kill this process when my script finishes.

How to get the PID of this process from my shell script? As far as I can see variable $! contains the PID of the current script, not the background process.

WiSaGaN ,Jun 2, 2015 at 14:40

You need to save the PID of the background process at the time you start it:
foo &
FOO_PID=$!
# do other stuff
kill $FOO_PID

You cannot use job control, since that is an interactive feature and tied to a controlling terminal. A script will not necessarily have a terminal attached at all so job control will not necessarily be available.

Phil ,Dec 2, 2017 at 8:01

You can use the jobs -l command to get details of a particular job:
^Z
[1]+  Stopped                 guard

my_mac:workspace r$ jobs -l
[1]+ 46841 Suspended: 18           guard

In this case, 46841 is the PID.

From help jobs :

-l Report the process group ID and working directory of the jobs.

jobs -p is another option which shows just the PIDs.

Timo ,Dec 2, 2017 at 8:03

Here's a sample transcript from a bash session ( %1 refers to the ordinal number of background process as seen from jobs ):

$ echo $$
3748

$ sleep 100 &
[1] 192

$ echo $!
192

$ kill %1

[1]+  Terminated              sleep 100

lepe ,Dec 2, 2017 at 8:29

An even simpler way to kill all child process of a bash script:
pkill -P $$

The -P flag works the same way with pkill and pgrep - it gets child processes, only with pkill the child processes get killed and with pgrep child PIDs are printed to stdout.

Luis Ramirez ,Feb 20, 2013 at 23:11

this is what I have done. Check it out, hope it can help.
#!/bin/bash
#
# So something to show.
echo "UNO" >  UNO.txt
echo "DOS" >  DOS.txt
#
# Initialize Pid List
dPidLst=""
#
# Generate background processes
tail -f UNO.txt&
dPidLst="$dPidLst $!"
tail -f DOS.txt&
dPidLst="$dPidLst $!"
#
# Report process IDs
echo PID=$$
echo dPidLst=$dPidLst
#
# Show process on current shell
ps -f
#
# Start killing background processes from list
for dPid in $dPidLst
do
        echo killing $dPid. Process is still there.
        ps | grep $dPid
        kill $dPid
        ps | grep $dPid
        echo Just ran "'"ps"'" command, $dPid must not show again.
done

Then just run it as: ./bgkill.sh with proper permissions of course

root@umsstd22 [P]:~# ./bgkill.sh
PID=23757
dPidLst= 23758 23759
UNO
DOS
UID        PID  PPID  C STIME TTY          TIME CMD
root      3937  3935  0 11:07 pts/5    00:00:00 -bash
root     23757  3937  0 11:55 pts/5    00:00:00 /bin/bash ./bgkill.sh
root     23758 23757  0 11:55 pts/5    00:00:00 tail -f UNO.txt
root     23759 23757  0 11:55 pts/5    00:00:00 tail -f DOS.txt
root     23760 23757  0 11:55 pts/5    00:00:00 ps -f
killing 23758. Process is still there.
23758 pts/5    00:00:00 tail
./bgkill.sh: line 24: 23758 Terminated              tail -f UNO.txt
Just ran 'ps' command, 23758 must not show again.
killing 23759. Process is still there.
23759 pts/5    00:00:00 tail
./bgkill.sh: line 24: 23759 Terminated              tail -f DOS.txt
Just ran 'ps' command, 23759 must not show again.
root@umsstd22 [P]:~# ps -f
UID        PID  PPID  C STIME TTY          TIME CMD
root      3937  3935  0 11:07 pts/5    00:00:00 -bash
root     24200  3937  0 11:56 pts/5    00:00:00 ps -f

Phil ,Oct 15, 2013 at 18:22

You might also be able to use pstree:
pstree -p user

This typically gives a text representation of all the processes for the "user" and the -p option gives the process-id. It does not depend, as far as I understand, on having the processes be owned by the current shell. It also shows forks.

Phil ,Dec 4, 2018 at 9:46

pgrep can get you all of the child PIDs of a parent process. As mentioned earlier $$ is the current scripts PID. So, if you want a script that cleans up after itself, this should do the trick:
trap 'kill $( pgrep -P $$ | tr "\n" " " )' SIGINT SIGTERM EXIT
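
A minimal self-contained sketch of that idea: start a couple of background workers and let the trap reap whatever is still running when the script exits:

#!/usr/bin/env bash
# On exit (or Ctrl-C / TERM), kill any remaining children of this script.
trap 'kill $(pgrep -P $$) 2>/dev/null' INT TERM EXIT

sleep 300 &     # stand-ins for real background workers
sleep 300 &

echo "children of $$: $(pgrep -P $$ | tr '\n' ' ')"
sleep 2         # the script's real work would go here
# falling off the end fires the EXIT trap, which kills both sleeps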

[Aug 10, 2019] Midnight Commander (mc) convenient hard links creation from user menu "

Notable quotes:
"... You can create hard links and symbolic links using C-x l and C-x s keyboard shortcuts. However, these two shortcuts invoke two completely different dialogs. ..."
"... he had also uploaded a sample mc user menu script ( local copy ), which works wonderfully! ..."
Dec 03, 2015 | bogdan.org.ua

Midnight Commander (mc): convenient hard links creation from user menu

3rd December 2015

Midnight Commander is a convenient two-panel file manager with tons of features.

You can create hard links and symbolic links using C-x l and C-x s keyboard shortcuts. However, these two shortcuts invoke two completely different dialogs.

While for C-x s you get 2 pre-populated fields (path to the existing file, and path to the link - which is pre-populated with your opposite file panel path plus the name of the file under cursor; simply try it to see what I mean), for C-x l you only get 1 empty field: the path of the hard link to create for the file under cursor. Symlink's behaviour would be much more convenient here.

Fortunately, a good man called Wiseman1024 created a feature request in the MC's bug tracker 6 years ago. Not only had he done so, but he had also uploaded a sample mc user menu script ( local copy ), which works wonderfully! You can select multiple files, then F2 l (lower-case L), and hard-links to your selected files (or a file under cursor) will be created in the opposite file panel. Great, thank you Wiseman1024 !

Word of warning: you must know what hard links are and what their limitations are before using this menu script. You also must check and understand the user menu code before adding it to your mc (by F9 C m u , and then pasting the script from the file).

Word of hope: 4 years ago Wiseman's feature request was assigned to the Future Releases version, so a more convenient C-x l will (sooner or later) become part of mc. Hopefully.

[Aug 10, 2019] How to check the file size in Linux-Unix bash shell scripting by Vivek Gite

Aug 10, 2019 | www.cyberciti.biz

The stat command shows information about the file. The syntax is as follows to get the file size on GNU/Linux stat:

stat -c %s "/etc/passwd"

OR

stat --format=%s "/etc/passwd"
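
In a script the size is typically captured in a variable, for example (GNU stat, as above):

size=$(stat -c %s "/etc/passwd")
echo "/etc/passwd is ${size} bytes"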

[Aug 10, 2019] bash - How to check size of a file - Stack Overflow

Aug 10, 2019 | stackoverflow.com

[ -n file.txt ] doesn't check its size , it checks that the string file.txt is non-zero length, so it will always succeed.

If you want to say " size is non-zero", you need [ -s file.txt ] .

To get a file's size , you can use wc -c to get the size ( file length) in bytes:

file=file.txt
minimumsize=90000
actualsize=$(wc -c <"$file")
if [ $actualsize -ge $minimumsize ]; then
    echo size is over $minimumsize bytes
else
    echo size is under $minimumsize bytes
fi

In this case, it sounds like that's what you want.

But FYI, if you want to know how much disk space the file is using, you could use du -k to get the size (disk space used) in kilobytes:

file=file.txt
minimumsize=90
actualsize=$(du -k "$file" | cut -f 1)
if [ $actualsize -ge $minimumsize ]; then
    echo size is over $minimumsize kilobytes
else
    echo size is under $minimumsize kilobytes
fi

If you need more control over the output format, you can also look at stat . On Linux, you'd start with something like stat -c '%s' file.txt , and on BSD/Mac OS X, something like stat -f '%z' file.txt .

--Mikel


Oz Solomon ,Jun 13, 2014 at 21:44

It surprises me that no one mentioned stat to check file size. Some methods are definitely better: using -s to find out whether the file is empty or not is easier than anything else if that's all you want. And if you want to find files of a size, then find is certainly the way to go.

I also like du a lot to get file size in kb, but, for bytes, I'd use stat :

size=$(stat -f%z $filename) # BSD stat

size=$(stat -c%s $filename) # GNU stat?

An alternative solution with awk and double parentheses:
FILENAME=file.txt
SIZE=$(du -sb $FILENAME | awk '{ print $1 }')

if ((SIZE<90000)) ; then 
    echo "less"; 
else 
    echo "not less"; 
fi

[Aug 07, 2019] Find files and tar them (with spaces)

Aug 07, 2019 | stackoverflow.com



porges ,Sep 6, 2012 at 17:43

Alright, so simple problem here. I'm working on a simple back up code. It works fine except if the files have spaces in them. This is how I'm finding files and adding them to a tar archive:
find . -type f | xargs tar -czvf backup.tar.gz

The problem is when the file has a space in the name because tar thinks that it's a folder. Basically is there a way I can add quotes around the results from find? Or a different way to fix this?

Brad Parks ,Mar 2, 2017 at 18:35

Use this:
find . -type f -print0 | tar -czvf backup.tar.gz --null -T -

It will print the found file names separated by NUL characters ( -print0 ) and have tar read that NUL-delimited list from stdin ( --null -T - ), so names containing spaces or newlines are handled correctly and everything ends up in a single archive.

czubehead ,Mar 19, 2018 at 11:51

There could be another way to achieve what you want. Basically,
  1. Use the find command to output path to whatever files you're looking for. Redirect stdout to a filename of your choosing.
  2. Then tar with the -T option which allows it to take a list of file locations (the one you just created with find!)
    find . -name "*.whatever" > yourListOfFiles
    tar -cvf yourfile.tar -T yourListOfFiles
    

gsteff ,May 5, 2011 at 2:05

Try running:
    find . -type f | xargs -d "\n" tar -czvf backup.tar.gz

Caleb Kester ,Oct 12, 2013 at 20:41

Why not:
tar czvf backup.tar.gz *

Sure it's clever to use find and then xargs, but you're doing it the hard way.

Update: Porges has commented with a find-option that I think is a better answer than my answer, or the other one: find -print0 ... | xargs -0 ....

Kalibur x ,May 19, 2016 at 13:54

If you have multiple files or directories and you want to compress each of them into an independent *.gz file, you can do this (the -type f and -mtime tests are optional):
find -name "httpd-log*.txt" -type f -mtime +1 -exec tar -vzcf {}.gz {} \;

This will compress

httpd-log01.txt
httpd-log02.txt

to

httpd-log01.txt.gz
httpd-log02.txt.gz

Frank Eggink ,Apr 26, 2017 at 8:28

Why not give something like this a try: tar cvf scala.tar `find src -name '*.scala'`

tommy.carstensen ,Dec 10, 2017 at 14:55

Another solution as seen here :
find var/log/ -iname "anaconda.*" -exec tar -cvzf file.tar.gz {} +

Robino ,Sep 22, 2016 at 14:26

The best solution seems to be to create a file list and then archive the files, because you can feed the list from other sources and do something else with it.

For example, this allows using the list to calculate the size of the files being archived:

#!/bin/sh

backupFileName="backup-big-$(date +"%Y%m%d-%H%M")"
backupRoot="/var/www"
backupOutPath=""

archivePath=$backupOutPath$backupFileName.tar.gz
listOfFilesPath=$backupOutPath$backupFileName.filelist

#
# Make a list of files/directories to archive
#
echo "" > $listOfFilesPath
echo "${backupRoot}/uploads" >> $listOfFilesPath
echo "${backupRoot}/extra/user/data" >> $listOfFilesPath
find "${backupRoot}/drupal_root/sites/" -name "files" -type d >> $listOfFilesPath

#
# Size calculation
#
sizeForProgress=`
cat $listOfFilesPath | while read nextFile;do
    if [ ! -z "$nextFile" ]; then
        du -sb "$nextFile"
    fi
done | awk '{size+=$1} END {print size}'
`

#
# Archive with progress
#
## simple with dump of all files currently archived
#tar -czvf $archivePath -T $listOfFilesPath
## progress bar
sizeForShow=$(($sizeForProgress/1024/1024))
echo -e "\nRunning backup [source files are $sizeForShow MiB]\n"
tar -cPp -T $listOfFilesPath | pv -s $sizeForProgress | gzip > $archivePath

user3472383 ,Jun 27 at 1:11

Would add a comment to @Steve Kehlet post but need 50 rep (RIP).

For anyone that has found this post through numerous googling, I found a way to not only find specific files given a time range, but also NOT include the relative paths OR whitespaces that would cause tarring errors. (THANK YOU SO MUCH STEVE.)

find . -name "*.pdf" -type f -mtime 0 -printf "%f\0" | tar -czvf /dir/zip.tar.gz --null -T -
  1. . relative directory
  2. -name "*.pdf" look for pdfs (or any file type)
  3. -type f type to look for is a file
  4. -mtime 0 look for files created in last 24 hours
  5. -printf "%f\0" Regular -print0 OR -printf "%f" did NOT work for me. From man pages:

This quoting is performed in the same way as for GNU ls. This is not the same quoting mechanism as the one used for -ls and -fls. If you are able to decide what format to use for the output of find then it is normally better to use '\0' as a terminator than to use newline, as file names can contain white space and newline characters.

  6. -czvf create archive, filter the archive through gzip , verbosely list files processed, archive name

[Aug 06, 2019] Tar archiving that takes input from a list of files

Aug 06, 2019 | stackoverflow.com



Kurt McKee ,Apr 29 at 10:22

I have a file that contains a list of the files I want to archive with tar. Let's call it mylist.txt

It contains:

file1.txt
file2.txt
...
file10.txt

Is there a way I can issue TAR command that takes mylist.txt as input? Something like

tar -cvf allfiles.tar -[someoption?] mylist.txt

So that it is similar as if I issue this command:

tar -cvf allfiles.tar file1.txt file2.txt file10.txt

Stphane ,May 25 at 0:11

Yes:
tar -cvf allfiles.tar -T mylist.txt

drue ,Jun 23, 2014 at 14:56

Assuming GNU tar (as this is Linux), the -T or --files-from option is what you want.

Stphane ,Mar 1, 2016 at 20:28

You can also pipe in the file names which might be useful:
find /path/to/files -name \*.txt | tar -cvf allfiles.tar -T -

David C. Rankin ,May 31, 2018 at 18:27

Some versions of tar, for example, the default versions on HP-UX (I tested 11.11 and 11.31), do not include a command line option to specify a file list, so a decent work-around is to do this:
tar cvf allfiles.tar $(cat mylist.txt)

Jan ,Sep 25, 2015 at 20:18

On Solaris, you can use the option -I to read the filenames that you would normally state on the command line from a file. In contrast to the command line, this can create tar archives with hundreds of thousands of files (just did that).

So the example would read

tar -cvf allfiles.tar -I mylist.txt

,

For me on AIX, it worked as follows:
tar -L List.txt -cvf BKP.tar
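
Putting the platform-specific answers above together, a minimal dispatch sketch could look like the following. The flag choices are taken from the answers (GNU -T, Solaris -I, AIX -L, plain expansion elsewhere); verify them against your local tar man page before relying on this:

#!/bin/sh
list=mylist.txt
archive=allfiles.tar

case "$(uname -s)" in
    Linux) tar -cvf "$archive" -T "$list" ;;      # GNU tar: -T / --files-from
    SunOS) tar -cvf "$archive" -I "$list" ;;      # Solaris tar: -I
    AIX)   tar -L "$list" -cvf "$archive" ;;      # AIX tar: -L
    *)     tar -cvf "$archive" $(cat "$list") ;;  # e.g. HP-UX: expand the list (breaks on names with spaces)
esac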

[Aug 06, 2019] Shell command to tar directory excluding certain files-folders

Aug 06, 2019 | stackoverflow.com



Rekhyt ,Jun 24, 2014 at 16:06

Is there a simple shell command/script that supports excluding certain files/folders from being archived?

I have a directory that need to be archived with a sub directory that has a number of very large files I do not need to backup.

Not quite solutions:

The tar --exclude=PATTERN command matches the given pattern and excludes those files, but I need specific files & folders to be ignored (full file path), otherwise valid files might be excluded.

I could also use the find command to create a list of files and exclude the ones I don't want to archive and pass the list to tar, but that only works for a small number of files. I have tens of thousands.

I'm beginning to think the only solution is to create a file with a list of files/folders to be excluded, then use rsync with --exclude-from=file to copy all the files to a tmp directory, and then use tar to archive that directory.

Can anybody think of a better/more efficient solution?

EDIT: Charles Ma 's solution works well. The big gotcha is that the --exclude='./folder' MUST be at the beginning of the tar command. Full command (cd first, so backup is relative to that directory):

cd /folder_to_backup
tar --exclude='./folder' --exclude='./upload/folder2' -zcvf /backup/filename.tgz .

James O'Brien ,Nov 24, 2016 at 9:55

You can have multiple exclude options for tar so
$ tar --exclude='./folder' --exclude='./upload/folder2' -zcvf /backup/filename.tgz .

etc will work. Make sure to put --exclude before the source and destination items.

Johan Soderberg ,Jun 11, 2009 at 23:10

You can exclude directories with --exclude for tar.

If you want to archive everything except /usr you can use:

tar -zcvf /all.tgz / --exclude=/usr

In your case perhaps something like

tar -zcvf archive.tgz arc_dir --exclude=dir/ignore_this_dir

cstamas ,Oct 8, 2018 at 18:02

Possible options to exclude files/directories from backup using tar:

Exclude files using multiple patterns

tar -czf backup.tar.gz --exclude=PATTERN1 --exclude=PATTERN2 ... /path/to/backup

Exclude files using an exclude file filled with a list of patterns

tar -czf backup.tar.gz -X /path/to/exclude.txt /path/to/backup

Exclude files using tags by placing a tag file in any directory that should be skipped

tar -czf backup.tar.gz --exclude-tag-all=exclude.tag /path/to/backup
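
As an illustration of the -X variant above, a hypothetical exclude file and invocation might look like this (patterns and paths are placeholders):

# Build a throwaway exclude list, then pass it to tar with -X.
cat > /tmp/exclude.txt <<'EOF'
*.log
./cache
./tmp/*
EOF

tar -czf backup.tar.gz -X /tmp/exclude.txt /path/to/backup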

Anish Ramaswamy ,Apr 1 at 16:18

Old question with many answers, but I found that none were quite clear enough for me, so I would like to add my try.

if you have the following structure

/home/ftp/mysite/

with following file/folders

/home/ftp/mysite/file1
/home/ftp/mysite/file2
/home/ftp/mysite/file3
/home/ftp/mysite/folder1
/home/ftp/mysite/folder2
/home/ftp/mysite/folder3

So, you want to make a tar file that contains everything inside /home/ftp/mysite (to move the site to a new server), but file3 is just junk, and everything in folder3 is also not needed, so we will skip those two.

we use the format

tar -czvf <name of tar file> <what to tar> <any excludes>

where c = create, z = compress with gzip, v = verbose (you can see the files as they are added, useful to make sure none of the files you want excluded are being added), and f = file (the archive name follows).

so, my command would look like this

cd /home/ftp/
tar -czvf mysite.tar.gz mysite --exclude='file3' --exclude='folder3'

Note that the excluded files/folders are relative to the root of your tar (I have tried a full path here relative to / but could not make that work).

Hope this will help someone (and me, next time I google it).

not2qubit ,Apr 4, 2018 at 3:24

You can use standard "ant notation" to exclude directories using relative patterns.
This works for me and excludes any .git or node_modules directories.
tar -cvf myFile.tar --exclude=**/.git/* --exclude=**/node_modules/*  -T /data/txt/myInputFile.txt 2> /data/txt/myTarLogFile.txt

myInputFile.txt Contains:

/dev2/java
/dev2/javascript

GeertVc ,Feb 9, 2015 at 13:37

I've experienced that, at least with the Cygwin version of tar I'm using ("CYGWIN_NT-5.1 1.7.17(0.262/5/3) 2012-10-19 14:39 i686 Cygwin" on a Windows XP Home Edition SP3 machine), the order of options is important.

While this construction worked for me:

tar cfvz target.tgz --exclude='<dir1>' --exclude='<dir2>' target_dir

that one didn't work:

tar cfvz --exclude='<dir1>' --exclude='<dir2>' target.tgz target_dir

This, while tar --help reveals the following:

tar [OPTION...] [FILE]

So the second command should also work, but apparently that isn't the case...

Best rgds,

Scott Stensland ,Feb 12, 2015 at 20:55

This exclude pattern handles filename suffix like png or mp3 as well as directory names like .git and node_modules
tar --exclude={*.png,*.mp3,*.wav,.git,node_modules} -Jcf ${target_tarball}  ${source_dirname}

Michael ,May 18 at 23:29

I found this somewhere else so I won't take credit, but it worked better than any of the solutions above for my mac specific issues (even though this is closed):
tar zc --exclude __MACOSX --exclude .DS_Store -f <archive> <source(s)>

J. Lawson ,Apr 17, 2018 at 23:28

For those who have issues with it, some versions of tar would only work properly without the './' in the exclude value.
tar --version

tar (GNU tar) 1.27.1

Command syntax that work:

tar -czvf ../allfiles-butsome.tar.gz * --exclude=acme/foo

These will not work:

$ tar -czvf ../allfiles-butsome.tar.gz * --exclude=./acme/foo
$ tar -czvf ../allfiles-butsome.tar.gz * --exclude='./acme/foo'
$ tar --exclude=./acme/foo -czvf ../allfiles-butsome.tar.gz *
$ tar --exclude='./acme/foo' -czvf ../allfiles-butsome.tar.gz *
$ tar -czvf ../allfiles-butsome.tar.gz * --exclude=/full/path/acme/foo
$ tar -czvf ../allfiles-butsome.tar.gz * --exclude='/full/path/acme/foo'
$ tar --exclude=/full/path/acme/foo -czvf ../allfiles-butsome.tar.gz *
$ tar --exclude='/full/path/acme/foo' -czvf ../allfiles-butsome.tar.gz *

Jerinaw ,May 6, 2017 at 20:07

For Mac OSX I had to do

tar -zcv --exclude='folder' -f theOutputTarFile.tar folderToTar

Note the -f after the --exclude=

Aaron Votre ,Jul 15, 2016 at 15:56

I agree the --exclude flag is the right approach.
$ tar --exclude='./folder_or_file' --exclude='file_pattern' --exclude='fileA'

A word of warning for a side effect that I did not find immediately obvious: The exclusion of 'fileA' in this example will search for 'fileA' RECURSIVELY!

Example: a directory with a single subdirectory containing a file of the same name (data.txt):

data.txt
config.txt
--+dirA
  |  data.txt
  |  config.docx

Znik ,Nov 15, 2014 at 5:12

To avoid possible 'xargs: Argument list too long' errors due to the use of find ... | xargs ... when processing tens of thousands of files, you can pipe the output of find directly to tar using find ... -print0 | tar --null ... .
# archive a given directory, but exclude various files & directories 
# specified by their full file paths
find "$(pwd -P)" -type d \( -path '/path/to/dir1' -or -path '/path/to/dir2' \) -prune \
   -or -not \( -path '/path/to/file1' -or -path '/path/to/file2' \) -print0 | 
   gnutar --null --no-recursion -czf archive.tar.gz --files-from -
   #bsdtar --null -n -czf archive.tar.gz -T -

Mike ,May 9, 2014 at 21:29

After reading this thread, I did a little testing on RHEL 5 and here are my results for tarring up the abc directory:

This will exclude the directories error and logs and all files under the directories:

tar cvpzf abc.tgz abc/ --exclude='abc/error' --exclude='abc/logs'

Adding a wildcard after the excluded directory will exclude the files but preserve the directories:

tar cvpzf abc.tgz abc/ --exclude='abc/error/*' --exclude='abc/logs/*'

Alex B ,Jun 11, 2009 at 23:03

Use the find command in conjunction with the tar append (-r) option. This way you can add files to an existing tar in a single step, instead of a two pass solution (create list of files, create tar).
find /dir/dir -prune ... -o etc etc.... -exec tar rvf ~/tarfile.tar {} \;

frommelmak ,Sep 10, 2012 at 14:08

You can also use one of the --exclude-tag options ( --exclude-tag , --exclude-tag-under , --exclude-tag-all ), depending on your needs:

The folder hosting the specified FILE will be excluded.

camh ,Jun 12, 2009 at 5:53

You can use cpio(1) to create tar files. cpio takes the files to archive on stdin, so if you've already figured out the find command you want to use to select the files to archive, pipe it into cpio to create the tar file:
find ... | cpio -o -H ustar | gzip -c > archive.tar.gz

PicoutputCls ,Aug 21, 2018 at 14:13

With GNU tar 1.26, the --exclude needs to come after the archive file and backup directory arguments, should have no leading or trailing slashes, and prefers no quotes (single or double). So, relative to the PARENT directory to be backed up, it's:

tar cvfz /path_to/mytar.tgz ./dir_to_backup --exclude=some_path/to_exclude

user2553863 ,May 28 at 21:41

After reading all these good answers for different versions, and having solved the problem for myself, I think there are small details that are very important, and unusual in general GNU/Linux use, that aren't stressed enough and deserve more than comments.

So I'm not going to try to answer the question for every case, but instead try to register where to look when things don't work.

IT IS VERY IMPORTANT TO NOTICE:

  1. THE ORDER OF THE OPTIONS MATTERS: it is not the same to put --exclude before the archive file and the directories to back up as after them. This is unexpected, at least to me, because in my experience the order of options usually doesn't matter in GNU/Linux commands.
  2. Different tar versions expect these options in a different order: for instance, @Andrew's answer indicates that in GNU tar 1.26 and 1.28 the excludes come last, whereas in my case, with GNU tar 1.29, it's the other way around.
  3. THE TRAILING SLASHES MATTER: at least in GNU tar 1.29, there shouldn't be any.

In my case, for GNU tar 1.29 on Debian stretch, the command that worked was

tar --exclude="/home/user/.config/chromium" --exclude="/home/user/.cache" -cf file.tar  /dir1/ /home/ /dir3/

The quotes didn't matter, it worked with or without them.

I hope this will be useful to someone.

jørgensen ,Dec 19, 2015 at 11:10

Your best bet is to use find with tar, via xargs (to handle the large number of arguments). For example:
find / -print0 | xargs -0 tar cjf tarfile.tar.bz2

Ashwini Gupta ,Jan 12, 2018 at 10:30

tar -cvzf destination_folder source_folder -X /home/folder/excludes.txt

-X indicates a file which contains a list of filenames which must be excluded from the backup. For Instance, you can specify *~ in this file to not include any filenames ending with ~ in the backup.

George ,Sep 4, 2013 at 22:35

Possible redundant answer but since I found it useful, here it is:

While root on FreeBSD (i.e. using csh) I wanted to copy my whole root filesystem to /mnt, but without /usr and (obviously) /mnt. This is what worked (I am at /):

tar --exclude ./usr --exclude ./mnt --create --file - . | (cd /mnt && tar xvf -)

My whole point is that it was necessary (by putting the ./ ) to specify to tar that the excluded directories were part of the greater directory being copied.

My €0.02

t0r0X ,Sep 29, 2014 at 20:25

I had no luck getting tar to exclude a 5 Gigabyte subdirectory a few levels deep. In the end, I just used the unix Zip command. It worked a lot easier for me.

So for this particular example from the original post
(tar --exclude='./folder' --exclude='./upload/folder2' -zcvf /backup/filename.tgz . )

The equivalent would be:

zip -r /backup/filename.zip . -x upload/folder/**\* upload/folder2/**\*

(NOTE: Here is the post I originally used that helped me https://superuser.com/questions/312301/unix-zip-directory-but-excluded-specific-subdirectories-and-everything-within-t )

RohitPorwal ,Jul 21, 2016 at 9:56

Check it out
tar cvpzf zip_folder.tgz . --exclude=./public --exclude=./tmp --exclude=./log --exclude=fileName

tripleee ,Sep 14, 2017 at 4:38

The following bash script should do the trick. It uses the answer given here by Marcus Sundman.
#!/bin/bash

echo -n "Please enter the name of the tar file you wish to create with out extension "
read nam

echo -n "Please enter the path to the directories to tar "
read pathin

echo tar -czvf $nam.tar.gz
excludes=`find $pathin -iname "*.CC" -exec echo "--exclude \'{}\'" \;|xargs`
echo $pathin

echo tar -czvf $nam.tar.gz $excludes $pathin

This will print out the command you need and you can just copy and paste it back in. There is probably a more elegant way to provide it directly to the command line.

Just change *.CC for any other common extension, file name or regex you want to exclude and this should still work.

EDIT

Just to add a little explanation; find generates a list of files matching the chosen regex (in this case *.CC). This list is passed via xargs to the echo command. This prints --exclude 'one entry from the list'. The backslashes (\) are escape characters for the ' marks.

[Aug 06, 2019] bash - More efficient way to find tar millions of files - Stack Overflow

Aug 06, 2019 | stackoverflow.com



theomega ,Apr 29, 2010 at 13:51

I've got a job running on my server at the command line prompt for a two days now:
find data/ -name filepattern-*2009* -exec tar uf 2009.tar {} \;

It is taking forever , and then some. Yes, there are millions of files in the target directory. (Each file is a measly 8 bytes in a well hashed directory structure.) But just running...

find data/ -name filepattern-*2009* -print > filesOfInterest.txt

...takes only two hours or so. At the rate my job is running, it won't be finished for a couple of weeks. That seems unreasonable. Is there a more efficient way to do this? Maybe with a more complicated bash script?

A secondary questions is "why is my current approach so slow?"

Stu Thompson ,May 6, 2013 at 1:11

If you already did the second command that created the file list, just use the -T option to tell tar to read the file names from that saved file list. Running 1 tar command vs N tar commands will be a lot better.

Matthew Mott ,Jul 3, 2014 at 19:21

One option is to use cpio to generate a tar-format archive:
$ find data/ -name "filepattern-*2009*" | cpio -ov --format=ustar > 2009.tar

cpio works natively with a list of filenames from stdin, rather than a top-level directory, which makes it an ideal tool for this situation.

bashfu ,Apr 23, 2010 at 10:05

Here's a find-tar combination that can do what you want without the use of xargs or exec (which should result in a noticeable speed-up):
tar --version    # tar (GNU tar) 1.14 

# FreeBSD find (on Mac OS X)
find -x data -name "filepattern-*2009*" -print0 | tar --null --no-recursion -uf 2009.tar --files-from -

# for GNU find use -xdev instead of -x
gfind data -xdev -name "filepattern-*2009*" -print0 | tar --null --no-recursion -uf 2009.tar --files-from -

# added: set permissions via tar
find -x data -name "filepattern-*2009*" -print0 | \
    tar --null --no-recursion --owner=... --group=... --mode=... -uf 2009.tar --files-from -

Stu Thompson ,Apr 28, 2010 at 12:50

There is xargs for this:
find data/ -name filepattern-*2009* -print0 | xargs -0 tar uf 2009.tar

Guessing why it is slow is hard, as there is not much information: what is the structure of the directory, what filesystem do you use, how was it configured when created? Having millions of files in a single directory is quite a hard situation for most filesystems.

bashfu ,May 1, 2010 at 14:18

To correctly handle file names with weird (but legal) characters (such as newlines, ...) you should write your file list to filesOfInterest.txt using find's -print0:
find -x data -name "filepattern-*2009*" -print0 > filesOfInterest.txt
tar --null --no-recursion -uf 2009.tar --files-from filesOfInterest.txt

Michael Aaron Safyan ,Apr 23, 2010 at 8:47

The way you currently have things, you are invoking the tar command every single time it finds a file, which is not surprisingly slow. Instead of taking the two hours to print plus the amount of time it takes to open the tar archive, see if the files are out of date, and add them to the archive, you are actually multiplying those times together. You might have better success invoking the tar command once, after you have batched together all the names, possibly using xargs to achieve the invocation. By the way, I hope you are using 'filepattern-*2009*' and not filepattern-*2009* as the stars will be expanded by the shell without quotes.

ruffrey ,Nov 20, 2018 at 17:13

There is a utility for this called tarsplitter .
tarsplitter -m archive -i folder/*.json -o archive.tar -p 8

will use 8 threads to archive the files matching "folder/*.json" into an output archive of "archive.tar"

https://github.com/AQUAOSOTech/tarsplitter

syneticon-dj ,Jul 22, 2013 at 8:47

Simplest (also remove file after archive creation):
find *.1  -exec tar czf '{}.tgz' '{}' --remove-files \;

[Aug 06, 2019] backup - Fastest way combine many files into one (tar czf is too slow) - Unix Linux Stack Exchange

Aug 06, 2019 | unix.stackexchange.com



Gilles ,Nov 5, 2013 at 0:05

Currently I'm running tar czf to combine backup files. The files are in a specific directory.

But the number of files is growing. Using tar czf takes too much time (more than 20 minutes and counting).

I need to combine the files more quickly and in a scalable fashion.

I've found genisoimage , readom and mkisofs . But I don't know which is fastest and what the limitations are for each of them.

Rufo El Magufo ,Aug 24, 2017 at 7:56

You should check whether most of your time is being spent on CPU or on I/O. Either way, there are ways to improve it:

A: don't compress

You didn't mention "compression" in your list of requirements, so try dropping the "z" from your argument list and running plain tar cf instead. This might speed things up a bit.

There are other techniques to speed up the process, like using -N to skip files you have already backed up before.
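
A minimal sketch of the -N idea, assuming GNU tar (the date and path are placeholders):

# Only add files newer than the given date; everything older is skipped.
tar -cf incremental.tar -N '2019-08-01' /path/to/backup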

B: backup the whole partition with dd

Alternatively, if you're backing up an entire partition, take a copy of the whole disk image instead. This would save processing and a lot of disk head seek time. tar, and any other program working at a higher level, has the overhead of reading and processing directory entries and inodes to find where the file content is, and of doing more disk head seeks, since each file is read from a different place on the disk.

To backup the underlying data much faster, use:

dd bs=16M if=/dev/sda1 of=/another/filesystem

(This assumes you're not using RAID, which may change things a bit)

,

To repeat what others have said: we need to know more about the files that are being backed up. I'll go with some assumptions here.

Append to the tar file

If files are only being added to the directories (that is, no file is being deleted), make sure you are appending to the existing tar file rather than re-creating it every time. You can do this by specifying the existing archive filename in your tar command instead of a new one (or deleting the old one).
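
A small sketch of the append approach with GNU tar (file names are placeholders); note that appending only works on plain .tar archives, not on compressed .tar.gz files:

tar -rf backup.tar new-file.dat        # -r: unconditionally append a file
tar -uf backup.tar /path/to/backup     # -u: append only files newer than the archived copies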

Write to a different disk

Reading from the same disk you are writing to may be killing performance. Try writing to a different disk to spread the I/O load. If the archive file needs to be on the same disk as the original files, move it afterwards.

Don't compress

Just repeating what @Yves said. If your backup files are already compressed, there's not much need to compress again. You'll just be wasting CPU cycles.

[Aug 04, 2019] 10 YAML tips for people who hate YAML Enable SysAdmin

Aug 04, 2019 | www.redhat.com

10 YAML tips for people who hate YAML Do you hate YAML? These tips might ease your pain.

Posted June 10, 2019 | by Seth Kenlon (Red Hat)

There are lots of formats for configuration files: a list of values, key and value pairs, INI files, YAML, JSON, XML, and many more. Of these, YAML sometimes gets cited as a particularly difficult one to handle for a few different reasons. While its ability to reflect hierarchical values is significant and its minimalism can be refreshing to some, its Python-like reliance upon syntactic whitespace can be frustrating.

However, the open source world is diverse and flexible enough that no one has to suffer through abrasive technology, so if you hate YAML, here are 10 things you can (and should!) do to make it tolerable. Starting with zero, as any sensible index should.

0. Make your editor do the work

Whatever text editor you use probably has plugins to make dealing with syntax easier. If you're not using a YAML plugin for your editor, find one and install it. The effort you spend on finding a plugin and configuring it as needed will pay off tenfold the very next time you edit YAML.

For example, the Atom editor comes with a YAML mode by default, and while GNU Emacs ships with minimal support, you can add additional packages like yaml-mode to help.

Emacs in YAML and whitespace mode.

If your favorite text editor lacks a YAML mode, you can address some of your grievances with small configuration changes. For instance, the default text editor for the GNOME desktop, Gedit, doesn't have a YAML mode available, but it does provide YAML syntax highlighting by default and features configurable tab width:

Configuring tab width and type in Gedit.

With the drawspaces Gedit plugin package, you can make white space visible in the form of leading dots, removing any question about levels of indentation.

Take some time to research your favorite text editor. Find out what the editor, or its community, does to make YAML easier, and leverage those features in your work. You won't be sorry.

1. Use a linter

Ideally, programming languages and markup languages use predictable syntax. Computers tend to do well with predictability, so the concept of a linter was invented in 1978. If you're not using a linter for YAML, then it's time to adopt this 40-year-old tradition and use yamllint .

You can install yamllint on Linux using your distribution's package manager. For instance, on Red Hat Enterprise Linux 8 or Fedora :

$ sudo dnf install yamllint

Invoking yamllint is as simple as telling it to check a file. Here's an example of yamllint 's response to a YAML file containing an error:

$ yamllint errorprone.yaml
errorprone.yaml
23:10     error    syntax error: mapping values are not allowed here
23:11     error    trailing spaces  (trailing-spaces)

That's not a time stamp on the left. It's the error's line and column number. You may or may not understand what error it's talking about, but now you know the error's location. Taking a second look at the location often makes the error's nature obvious. Success is eerily silent, so if you want feedback based on the lint's success, you can add a conditional second command with a double-ampersand ( && ). In a POSIX shell, the command after && runs only if the preceding command returns 0, so upon success, your echo command makes that clear. This tactic is somewhat superficial, but some users prefer the assurance that the command did run correctly, rather than failing silently. Here's an example:

$ yamllint perfect.yaml && echo "OK"
OK

The reason yamllint is so silent when it succeeds is that it returns 0 errors when there are no errors.

2. Write in Python, not YAML

If you really hate YAML, stop writing in YAML, at least in the literal sense. You might be stuck with YAML because that's the only format an application accepts, but if the only requirement is to end up in YAML, then work in something else and then convert. Python, along with the excellent pyyaml library, makes this easy, and you have two methods to choose from: self-conversion or scripted.

Self-conversion

In the self-conversion method, your data files are also Python scripts that produce YAML. This works best for small data sets. Just write your JSON data into a Python variable, prepend an import statement, and end the file with a simple three-line output statement.

#!/usr/bin/python3	
import yaml 

d={
"glossary": {
  "title": "example glossary",
  "GlossDiv": {
	"title": "S",
	"GlossList": {
	  "GlossEntry": {
		"ID": "SGML",
		"SortAs": "SGML",
		"GlossTerm": "Standard Generalized Markup Language",
		"Acronym": "SGML",
		"Abbrev": "ISO 8879:1986",
		"GlossDef": {
		  "para": "A meta-markup language, used to create markup languages such as DocBook.",
		  "GlossSeeAlso": ["GML", "XML"]
		  },
		"GlossSee": "markup"
		}
	  }
	}
  }
}

f=open('output.yaml','w')
f.write(yaml.dump(d))
f.close()

Run the file with Python to produce a file called output.yaml .

$ python3 ./example.json
$ cat output.yaml
glossary:
  GlossDiv:
	GlossList:
	  GlossEntry:
		Abbrev: ISO 8879:1986
		Acronym: SGML
		GlossDef:
		  GlossSeeAlso: [GML, XML]
		  para: A meta-markup language, used to create markup languages such as DocBook.
		GlossSee: markup
		GlossTerm: Standard Generalized Markup Language
		ID: SGML
		SortAs: SGML
	title: S
  title: example glossary

This output is perfectly valid YAML, although yamllint does issue a warning that the file is not prefaced with --- , which is something you can adjust either in the Python script or manually.

Scripted conversion

In this method, you write in JSON and then run a Python conversion script to produce YAML. This scales better than self-conversion, because it keeps the converter separate from the data.

Create a JSON file and save it as example.json . Here is an example from json.org :

{
	"glossary": {
	  "title": "example glossary",
	  "GlossDiv": {
		"title": "S",
		"GlossList": {
		  "GlossEntry": {
			"ID": "SGML",
			"SortAs": "SGML",
			"GlossTerm": "Standard Generalized Markup Language",
			"Acronym": "SGML",
			"Abbrev": "ISO 8879:1986",
			"GlossDef": {
			  "para": "A meta-markup language, used to create markup languages such as DocBook.",
			  "GlossSeeAlso": ["GML", "XML"]
			  },
			"GlossSee": "markup"
			}
		  }
		}
	  }
	}

Create a simple converter and save it as json2yaml.py . This script imports both the YAML and JSON Python modules, loads a JSON file defined by the user, performs the conversion, and then writes the data to output.yaml .

#!/usr/bin/python3
import yaml
import sys
import json

OUT=open('output.yaml','w')
IN=open(sys.argv[1], 'r')

JSON = json.load(IN)
IN.close()
yaml.dump(JSON, OUT)
OUT.close()

Save this script in your system path, and execute as needed:

$ ~/bin/json2yaml.py example.json
3. Parse early, parse often

Sometimes it helps to look at a problem from a different angle. If your problem is YAML, and you're having a difficult time visualizing the data's relationships, you might find it useful to restructure that data, temporarily, into something you're more familiar with.

If you're more comfortable with dictionary-style lists or JSON, for instance, you can convert YAML to JSON in a few commands using an interactive Python shell. Assume your YAML file is called mydata.yaml .

$ python3
>>> import yaml
>>> f=open('mydata.yaml','r')
>>> yaml.load(f)
{'document': 34843, 'date': datetime.date(2019, 5, 23), 'bill-to': {'given': 'Seth', 'family': 'Kenlon', 'address': {'street': '51b Mornington Road\n', 'city': 'Brooklyn', 'state': 'Wellington', 'postal': 6021, 'country': 'NZ'}}, 'words': 938, 'comments': 'Good article. Could be better.'}

There are many other examples, and there are plenty of online converters and local parsers, so don't hesitate to reformat data when it starts to look more like a laundry list than markup.
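
If you prefer to stay on the command line, a one-liner sketch along the same lines (assuming python3 with pyyaml installed, and the same mydata.yaml as above; default=str keeps dates from tripping up the JSON encoder):

python3 -c 'import sys, json, yaml; print(json.dumps(yaml.safe_load(open(sys.argv[1])), indent=2, default=str))' mydata.yaml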

4. Read the spec

After I've been away from YAML for a while and find myself using it again, I go straight back to yaml.org to re-read the spec. If you've never read the specification for YAML and you find YAML confusing, a glance at the spec may provide the clarification you never knew you needed. The specification is surprisingly easy to read, with the requirements for valid YAML spelled out with lots of examples in chapter 6 .

5. Pseudo-config

Before I started writing my book, Developing Games on the Raspberry Pi , Apress, 2019, the publisher asked me for an outline. You'd think an outline would be easy. By definition, it's just the titles of chapters and sections, with no real content. And yet, out of the 300 pages published, the hardest part to write was that initial outline.

YAML can be the same way. You may have a notion of the data you need to record, but that doesn't mean you fully understand how it's all related. So before you sit down to write YAML, try doing a pseudo-config instead.

A pseudo-config is like pseudo-code. You don't have to worry about structure or indentation, parent-child relationships, inheritance, or nesting. You just create iterations of data in the way you currently understand it inside your head.

A pseudo-config.

Once you've got your pseudo-config down on paper, study it, and transform your results into valid YAML.

6. Resolve the spaces vs. tabs debate

OK, maybe you won't definitively resolve the spaces-vs-tabs debate , but you should at least resolve the debate within your project or organization. Whether you resolve this question with a post-process sed script, text editor configuration, or a blood-oath to respect your linter's results, anyone in your team who touches a YAML project must agree to use spaces (in accordance with the YAML spec).

Any good text editor allows you to define a number of spaces instead of a tab character, so the choice shouldn't negatively affect fans of the Tab key.

Tabs and spaces are, as you probably know all too well, essentially invisible. And when something is out of sight, it rarely comes to mind until the bitter end, when you've tested and eliminated all of the "obvious" problems. An hour wasted to an errant tab or group of spaces is your signal to create a policy to use one or the other, and then to develop a fail-safe check for compliance (such as a Git hook to enforce linting).
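
As a sketch of that "Git hook to enforce linting" idea (the hook body is ours, not from the article; it assumes yamllint is installed), a minimal .git/hooks/pre-commit could be:

#!/bin/sh
# Lint all staged YAML files with yamllint before allowing the commit.
files=$(git diff --cached --name-only --diff-filter=ACM | grep -E '\.ya?ml$')
[ -z "$files" ] && exit 0
if ! yamllint $files; then
    echo "yamllint failed; fix the YAML errors above before committing." >&2
    exit 1
fi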

7. Less is more (or more is less)

Some people like to write YAML to emphasize its structure. They indent vigorously to help themselves visualize chunks of data. It's a sort of cheat to mimic markup languages that have explicit delimiters.

Here's a good example from Ansible's documentation :

# Employee records
-  martin:
        name: Martin D'vloper
        job: Developer
        skills:
            - python
            - perl
            - pascal
-  tabitha:
        name: Tabitha Bitumen
        job: Developer
        skills:
            - lisp
            - fortran
            - erlang

For some users, this approach is a helpful way to lay out a YAML document, while other users miss the structure for the void of seemingly gratuitous white space.

If you own and maintain a YAML document, then you get to define what "indentation" means. If blocks of horizontal white space distract you, then use the minimal amount of white space required by the YAML spec. For example, the same YAML from the Ansible documentation can be represented with fewer indents without losing any of its validity or meaning:

---
- martin:
   name: Martin D'vloper
   job: Developer
   skills:
   - python
   - perl
   - pascal
- tabitha:
   name: Tabitha Bitumen
   job: Developer
   skills:
   - lisp
   - fortran
   - erlang
8. Make a recipe

I'm a big fan of repetition breeding familiarity, but sometimes repetition just breeds repeated stupid mistakes. Luckily, a clever peasant woman experienced this very phenomenon back in 396 AD (don't fact-check me), and invented the concept of the recipe .

If you find yourself making YAML document mistakes over and over, you can embed a recipe or template in the YAML file as a commented section. When you're adding a section, copy the commented recipe and overwrite the dummy data with your new real data. For example:

---
# - <common name>:
#   name: Given Surname
#   job: JOB
#   skills:
#   - LANG
- martin:
  name: Martin D'vloper
  job: Developer
  skills:
  - python
  - perl
  - pascal
- tabitha:
  name: Tabitha Bitumen
  job: Developer
  skills:
  - lisp
  - fortran
  - erlang
9. Use something else

I'm a fan of YAML, generally, but sometimes YAML isn't the answer. If you're not locked into YAML by the application you're using, then you might be better served by some other configuration format. Sometimes config files outgrow themselves and are better refactored into simple Lua or Python scripts.

YAML is a great tool and is popular among users for its minimalism and simplicity, but it's not the only tool in your kit. Sometimes it's best to part ways. One of the benefits of YAML is that parsing libraries are common, so as long as you provide migration options, your users should be able to adapt painlessly.

If YAML is a requirement, though, keep these tips in mind and conquer your YAML hatred once and for all!

[Aug 04, 2019] Ansible IT automation for everybody Enable SysAdmin

Aug 04, 2019 | www.redhat.com


Ansible: IT automation for everybody Kick the tires with Ansible and start automating with these simple tasks.

Posted July 31, 2019 | by Jörg Kastning


Ansible is an open source tool for software provisioning, application deployment, orchestration, configuration, and administration. Its purpose is to help you automate your configuration processes and simplify the administration of multiple systems. Thus, Ansible essentially pursues the same goals as Puppet, Chef, or Saltstack.

What I like about Ansible is that it's flexible, lean, and easy to start with. In most use cases, it keeps the job simple.

I chose to use Ansible back in 2016 because no agent has to be installed on the managed nodes -- a node is what Ansible calls a managed remote system. All you need to start managing a remote system with Ansible is SSH access to the system, and Python installed on it. Python is preinstalled on most Linux systems, and I was already used to managing my hosts via SSH, so I was ready to start right away. And if the day comes where I decide not to use Ansible anymore, I just have to delete my Ansible controller machine (control node) and I'm good to go. There are no agents left on the managed nodes that have to be removed.

Ansible offers two ways to control your nodes. The first one uses playbooks . These are simple ASCII files written in YAML ("YAML Ain't Markup Language") , which is easy to read and write. And second, there are the ad-hoc commands , which allow you to run a command or module without having to create a playbook first.

You organize the hosts you would like to manage and control in an inventory file, which offers flexible format options. For example, this could be an INI-like file that looks like:

mail.example.com

[webservers]
foo.example.com
bar.example.com

[dbservers]
one.example.com
two.example.com
three.example.com

[site1:children]
webservers
dbservers
Examples

I would like to give you two small examples of how to use Ansible. I started with these really simple tasks before I used Ansible to take control of more complex tasks in my infrastructure.

Ad-hoc: Check if Ansible can remote manage a system

As you might recall from the beginning of this article, all you need to manage a remote host is SSH access to it, and a working Python interpreter on it. To check if these requirements are fulfilled, run the following ad-hoc command against a host from your inventory:

[jkastning@ansible]$ ansible mail.example.com -m ping
mail.example.com | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
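
Ad-hoc commands are not limited to ping. As a small sketch (the group name comes from the inventory above; the uptime call is just an arbitrary example), you can run a one-off command across a whole group with the command module:

[jkastning@ansible]$ ansible webservers -m command -a "uptime"
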
Playbook: Keep installed packages up to date

This example shows how to use a playbook to keep installed packages up to date. The playbook is an ASCII text file which looks like this:

---
# Make sure all packages are up to date
- name: Update your system
  hosts: mail.example.com
  tasks:
  - name: Make sure all packages are up to date
    yum:
      name: "*"
      state: latest

Now, we are ready to run the playbook:

[jkastning@ansible]$ ansible-playbook yum_update.yml 

PLAY [Update your system] **************************************************************************

TASK [Gathering Facts] *****************************************************************************
ok: [mail.example.com]

TASK [Make sure all packages are up to date] *******************************************************
ok: [mail.example.com]

PLAY RECAP *****************************************************************************************
mail.example.com : ok=2    changed=0    unreachable=0    failed=0

Here everything is ok and there is nothing else to do. All installed packages are already the latest version.
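
If you want to see what a playbook would change before letting it touch anything, Ansible also has a dry-run mode; a minimal sketch with the same playbook:

[jkastning@ansible]$ ansible-playbook yum_update.yml --check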

It's simple: Try and use it

The examples above are quite simple and should only give you a first impression. But, from the start, it did not take me long to use Ansible for more complex tasks like the Poor Man's RHEL Mirror or the Ansible Role for RHEL Patch Management .

Today, Ansible saves me a lot of time and supports my day-to-day work tasks quite well. So what are you waiting for? Try it, use it, and feel a bit more comfortable at work.


[Aug 03, 2019] Creating Bootable Linux USB Drive with Etcher

Aug 03, 2019 | linuxize.com

There are several different applications available for free use which will allow you to flash ISO images to USB drives. In this example, we will use Etcher. It is a free and open-source utility for flashing images to SD cards & USB drives and supports Windows, macOS, and Linux.

Head over to the Etcher downloads page , and download the most recent Etcher version for your operating system. Once the file is downloaded, double-click on it and follow the installation wizard.

Creating Bootable Linux USB Drive using Etcher is a relatively straightforward process, just follow the steps outlined below:

  1. Connect the USB flash drive to your system and Launch Etcher.
  2. Click on the Select image button and locate the distribution .iso file.
  3. If only one USB drive is attached to your machine, Etcher will automatically select it. Otherwise, if more than one SD card or USB drive is connected, make sure you have selected the correct USB drive before flashing the image.
  4. Click the Flash button and wait for Etcher to finish writing and validating the image.

[Aug 02, 2019] linux - How to tar directory and then remove originals including the directory - Super User

Aug 02, 2019 | superuser.com



mit ,Dec 7, 2016 at 1:22

I'm trying to tar a collection of files in a directory called 'my_directory' and remove the originals by using the command:
tar -cvf files.tar my_directory --remove-files

However it is only removing the individual files inside the directory and not the directory itself (which is what I specified in the command). What am I missing here?

EDIT:

Yes, I suppose the 'remove-files' option is fairly literal. Although I too found the man page unclear on that point. (In Linux I tend not to distinguish much between directories and files, and sometimes forget that they are not the same thing.) It looks like the consensus is that it doesn't remove directories.

However, my main reason for asking this question stems from tar's handling of absolute paths. Because you must specify a relative path to the file(s) to be compressed, you must change to the parent directory to tar it properly. As I see it, using any kind of follow-on 'rm' command is potentially dangerous in that situation. Thus I was hoping to simplify things by making tar itself do the removal.

For example, imagine a backup script where the directory to backup (ie. tar) is included as a shell variable. If that shell variable value was badly entered, it is possible that the result could be deleted files from whatever directory you happened to be in last.

Arjan ,Feb 13, 2016 at 13:08

You are missing the part which says the --remove-files option removes files after adding them to the archive.

You could follow the archive and file-removal operation with a command like,

find /path/to/be/archived/ -depth -type d -empty -exec rmdir {} \;


Update: You may be interested in reading this short Debian discussion on,
Bug 424692: --remove-files complains that directories "changed as we read it" .

Kim ,Feb 13, 2016 at 13:08

Since the --remove-files option only removes files , you could try
tar -cvf files.tar my_directory && rm -R my_directory

so that the directory is removed only if the tar returns an exit status of 0
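
A slightly more cautious sketch along the same lines (not from the thread): list the archive back before deleting anything, so a truncated or unreadable tar never costs you the originals.

tar -cvf files.tar my_directory \
  && tar -tf files.tar > /dev/null \
  && rm -r my_directory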

redburn ,Feb 13, 2016 at 13:08

Have you tried to put --remove-files directive after archive name? It works for me.
tar -cvf files.tar --remove-files my_directory

shellking ,Oct 4, 2010 at 19:58

source={directory argument}

e.g.

source={FULL ABSOLUTE PATH}/my_directory
parent={parent directory of argument}

e.g.

parent={ABSOLUTE PATH of 'my_directory'}
logFile={path to a run log that captures status messages}

Then you could execute something along the lines of:

cd ${parent}

tar cvf Tar_File.`date +%Y%m%d_%H%M%S` ${source}

if [ $? != 0 ]
then
    echo "Backup FAILED for ${source} at `date`" >> ${logFile}
else
    echo "Backup SUCCESS for ${source} at `date`" >> ${logFile}
    rm -rf ${source}
fi

mit ,Nov 14, 2011 at 13:21

This was probably a bug.

Also, the word "file" is ambiguous in this case. But because this is a command-line switch, I would expect it to mean directories as well, because in Unix/Linux everything is a file, including a directory. (The other interpretation is of course also valid, but it makes no sense to keep the directories in such a case. I would consider it unexpected and confusing behavior.)

But I have found that GNU tar on some distributions actually removes the directory tree. Another indication that keeping the tree was a bug, or at least some workaround until they fixed it.

This is what I tried out on an ubuntu 10.04 console:

mit:/var/tmp$ mkdir tree1                                                                                               
mit:/var/tmp$ mkdir tree1/sub1                                                                                          
mit:/var/tmp$ > tree1/sub1/file1                                                                                        

mit:/var/tmp$ ls -la                                                                                                    
drwxrwxrwt  4 root root 4096 2011-11-14 15:40 .                                                                              
drwxr-xr-x 16 root root 4096 2011-02-25 03:15 ..
drwxr-xr-x  3 mit  mit  4096 2011-11-14 15:40 tree1

mit:/var/tmp$ tar -czf tree1.tar.gz tree1/ --remove-files

# AS YOU CAN SEE THE TREE IS GONE NOW:

mit:/var/tmp$ ls -la
drwxrwxrwt  3 root root 4096 2011-11-14 15:41 .
drwxr-xr-x 16 root root 4096 2011-02-25 03:15 ..
-rw-r--r--  1 mit   mit    159 2011-11-14 15:41 tree1.tar.gz                                                                   


mit:/var/tmp$ tar --version                                                                                             
tar (GNU tar) 1.22                                                                                                           
Copyright © 2009 Free Software Foundation, Inc.

If you want to see it on your machine, paste this into a console at your own risk:

tar --version                                                                                             
cd /var/tmp
mkdir -p tree1/sub1                                                                                          
> tree1/sub1/file1                                                                                        
tar -czf tree1.tar.gz tree1/ --remove-files
ls -la

[Jul 31, 2019] Mounting archives with FUSE and archivemount Linux.com The source for Linux information

Jul 31, 2019 | www.linux.com

Mounting archives with FUSE and archivemount Author: Ben Martin The archivemount FUSE filesystem lets you mount a possibly compressed tarball as a filesystem. Because FUSE exposes its filesystems through the Linux kernel, you can use any application to load and save files directly into such mounted archives. This lets you use your favourite text editor, image viewer, or music player on files that are still inside an archive file. Going one step further, because archivemount also supports write access for some archive formats, you can edit a text file directly from inside an archive too.

I couldn't find any packages that let you easily install archivemount for mainstream distributions. Its distribution includes a single source file and a Makefile.

archivemount depends on libarchive for the heavy lifting. Packages of libarchive exist for Ubuntu Gutsy and openSUSE, but not for Fedora. To compile libarchive you need to have uudecode installed; my version came with the sharutils package on Fedora 8. Once you have uudecode, you can build libarchive using the standard ./configure; make; sudo make install process.

With libarchive installed, either from source or from packages, simply invoke make to build archivemount itself. To install archivemount, copy its binary into /usr/local/bin and set permissions appropriately. A common setup on Linux distributions is to have a fuse group that a user must be a member of in order to mount a FUSE filesystem. It makes sense to have the archivemount command owned by this group as a reminder to users that they require that permission in order to use the tool. Setup is shown below:

# cp -av archivemount /usr/local/bin/
# chown root:fuse /usr/local/bin/archivemount
# chmod 550 /usr/local/bin/archivemount

To show how you can use archivemount I'll first create a trivial compressed tarball, then mount it with archivemount. You can then explore the directory structure of the contents of the tarball with the ls command, and access a file from the archive directly with cat.

$ mkdir -p /tmp/archivetest
$ cd /tmp/archivetest
$ date >datefile1
$ date >datefile2
$ mkdir subA
$ date >subA/foobar
$ cd /tmp
$ tar czvf archivetest.tar.gz archivetest
$ mkdir testing
$ archivemount archivetest.tar.gz testing
$ ls -l testing/archivetest/
-rw-r--r-- 0 root root 29 2008-04-02 21:04 datefile1
-rw-r--r-- 0 root root 29 2008-04-02 21:04 datefile2
drwxr-xr-x 0 root root 0 2008-04-02 21:04 subA
$ cat testing/archivetest/datefile2
Wed Apr 2 21:04:08 EST 2008

Next, I'll create a new file in the archive and read its contents back again. Notice that the first use of the tar command directly on the tarball does not show that the newly created file is in the archive. This is because archivemount delays all write operations until the archive is unmounted. After issuing the fusermount -u command, the new file is added to the archive itself.

$ date > testing/archivetest/new-file1
$ cat testing/archivetest/new-file1
Wed Apr 2 21:12:07 EST 2008
$ tar tzvf archivetest.tar.gz
drwxr-xr-x root/root 0 2008-04-02 21:04 archivetest/
-rw-r--r-- root/root 29 2008-04-02 21:04 archivetest/datefile2
-rw-r--r-- root/root 29 2008-04-02 21:04 archivetest/datefile1
drwxr-xr-x root/root 0 2008-04-02 21:04 archivetest/subA/
-rw-r--r-- root/root 29 2008-04-02 21:04 archivetest/subA/foobar

$ fusermount -u testing
$ tar tzvf archivetest.tar.gz
drwxr-xr-x root/root 0 2008-04-02 21:04 archivetest/
-rw-r--r-- root/root 29 2008-04-02 21:04 archivetest/datefile2
-rw-r--r-- root/root 29 2008-04-02 21:04 archivetest/datefile1
drwxr-xr-x root/root 0 2008-04-02 21:04 archivetest/subA/
-rw-r--r-- root/root 29 2008-04-02 21:04 archivetest/subA/foobar
-rw-rw-r-- ben/ben 29 2008-04-02 21:12 archivetest/new-file1

When you unmount a FUSE filesystem, the unmount command can return before the FUSE filesystem has fully exited. This can lead to a situation where the FUSE filesystem might run into an error in some processing but not have a good place to report that error. The archivemount documentation warns that if there is an error writing changes to an archive during unmount then archivemount cannot be blamed for a loss of data. Things are not quite as grim as they sound though. I mounted a tar.gz archive to which I had only read access and attempted to create new files and write to existing ones. The operations failed immediately with a "Read-only filesystem" message.

In an effort to trick archivemount into losing data, I created an archive in a format that libarchive has only read support for. I created archivetest.zip with the original contents of the archivetest directory and mounted it. Creating a new file worked, and reading it back was fine. As expected from the warnings in the README file for archivemount, I did not see any error message when I unmounted the zip file. However, attempting to view the manifest of the zip file with unzip -l failed. It turns out that my archivemount operations had rewritten archivetest.zip as a non-compressed POSIX tar archive. Using tar tvf I saw that the manifest of this archivetest.zip tar archive included all of the contents, including the new file that I created. There was also an archivetest.zip.orig, which was in zip format and contained the contents of the zip archive as they were when I mounted it with archivemount.

So it turns out to be fairly tricky to get archivemount to lose data. Mounting a read-only archive file didn't lose anything, and neither did modifying an archive in a format that libarchive can only read, though in the latter case you will have to contend with the archive format being silently changed. One other situation could potentially trip you up: because archivemount creates a new archive at unmount time, you should make sure that you will not run out of disk space where the archives are stored.
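
Since the rewritten archive is only produced at unmount time, a quick sanity check before unmounting is to compare the size of the mounted tree with the free space on the filesystem that holds the archive; for example (paths follow the earlier examples):

$ du -sh testing/
$ df -h /tmp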

To test archivemount's performance, I used the bonnie++ filesystem benchmark version 1.03. Because archivemount holds off updating the actual archive until the filesystem is unmounted, you get good performance when accessing and writing to a mounted archive. As shown below, when comparing the use of archivemount on an archive file stored in /tmp to direct access to a subdirectory in /tmp, archivemount managed only about half as many seeks per second as direct access, and about 70% of the direct-access throughput on the rewrite test. The bonnie++ documentation explains that for the rewrite test a chunk of data is read, dirtied, and written back to a file, and this requires a seek, so archivemount's slower seek performance likely drags this benchmark down as well.

$ cd /tmp
$ mkdir empty
$ ls -d empty | cpio -ov > empty.cpio
$ mkdir empty-mounted
$ archivemount empty.cpio empty-mounted
$ mkdir bonnie-test
$ /usr/sbin/bonnie++ -d /tmp/bonnie-test
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
v8tsrv 2G 14424 25 14726 4 13930 6 28502 49 52581 17 8322 123

$ /usr/sbin/bonnie++ -d /tmp/empty-mounted
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
v8tsrv 2G 12016 19 12918 7 9766 6 27543 40 52937 6 4457 24

When you want to pluck a few files out of a tarball, archivemount might be just the command for the job. Instead of expanding the archive into /tmp just to load a few files into Emacs, just mount the archive and run Emacs directly on the archivemount filesystem. As the bonnie++ benchmarks above show, an application using an archivemount filesystem does not necessarily suffer a performance hit.
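
A typical session along those lines might look like the following sketch (archive and file names are only examples):

$ mkdir /tmp/mnt
$ archivemount big-source-tree.tar.gz /tmp/mnt
$ emacs /tmp/mnt/big-source-tree/README
$ fusermount -u /tmp/mnt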

[Jul 31, 2019] Advanced GNU tar Operations

Jul 31, 2019 | www.gnu.org

GNU tar 1.32 manual, section 4.2

In the last chapter, you learned about the first three operations to tar . This chapter presents the remaining five operations to tar : `--append' , `--update' , `--concatenate' , `--delete' , and `--compare' .

You are not likely to use these operations as frequently as those covered in the last chapter; however, since they perform specialized functions, they are quite useful when you do need to use them. We will give examples using the same directory and files that you created in the last chapter. As you may recall, the directory is called `practice' , the files are `jazz' , `blues' , `folk' , and the two archive files you created are `collection.tar' and `music.tar' .

We will also use the archive files `afiles.tar' and `bfiles.tar' . The archive `afiles.tar' contains the members `apple' , `angst' , and `aspic' ; `bfiles.tar' contains the members `./birds' , `baboon' , and `./box' .

Unless we state otherwise, all practicing you do and examples you follow in this chapter will take place in the `practice' directory that you created in the previous chapter; see Preparing a Practice Directory for Examples . (Below in this section, we will remind you of the state of the examples where the last chapter left them.)

The five operations that we will cover in this chapter are:

`--append'
`-r'
Add new entries to an archive that already exists.
`--update'
`-u'
Add more recent copies of archive members to the end of an archive, if they exist.
`--concatenate'
`--catenate'
`-A'
Add one or more pre-existing archives to the end of another archive.
`--delete'
Delete items from an archive (does not work on tapes).
`--compare'
`--diff'
`-d'
Compare archive members to their counterparts in the file system.

4.2.2 How to Add Files to Existing Archives: `--append'

If you want to add files to an existing archive, you don't need to create a new archive; you can use `--append' ( `-r' ). The archive must already exist in order to use `--append' . (A related operation is the `--update' operation; you can use this to add newer versions of archive members to an existing archive. To learn how to do this with `--update' , see section Updating an Archive .)

If you use `--append' to add a file that has the same name as an archive member to an archive containing that archive member, then the old member is not deleted. What does happen, however, is somewhat complex. tar allows you to have an infinite number of files with the same name. Some operations treat these same-named members no differently than any other set of archive members: for example, if you view an archive with `--list' ( `-t' ), you will see all of those members listed, with their data modification times, owners, etc.

Other operations don't deal with these members as perfectly as you might prefer; if you were to use `--extract' to extract the archive, only the most recently added copy of a member with the same name as other members would end up in the working directory. This is because `--extract' extracts an archive in the order the members appeared in the archive; the most recently archived members will be extracted last. Additionally, an extracted member will replace a file of the same name which existed in the directory already, and tar will not prompt you about this (10) . Thus, only the most recently archived member will end up being extracted, as it will replace the one extracted before it, and so on.

There exists a special option that allows you to get around this behavior and extract (or list) only a particular copy of the file. This is the `--occurrence' option. If you run tar with this option, it will extract only the first copy of the file. You may also give this option an argument specifying the number of the copy to be extracted. Thus, for example, if the archive `archive.tar' contained three copies of file `myfile' , then the command

tar --extract --file archive.tar --occurrence=2 myfile

would extract only the second copy. See section --occurrence for a description of the `--occurrence' option.


If you want to replace an archive member, use `--delete' to delete the member you want to remove from the archive, and then use `--append' to add the member you want to be in the archive. Note that you can not change the order of the archive; the most recently added member will still appear last. In this sense, you cannot truly "replace" one member with another. (Replacing one member with another will not work on certain types of media, such as tapes; see Removing Archive Members Using `--delete' and Tapes and Other Archive Media , for more information.)
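
Using the `collection.tar' example from this chapter, replacing the member `blues' would look roughly like this (assuming the archive lives on disk, since `--delete' does not work on tape):

$ tar --delete --file=collection.tar blues
$ tar --append --file=collection.tar blues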

4.2.2.1 Appending Files to an Archive
4.2.2.2 Multiple Members with the Same Name
4.2.2.1 Appending Files to an Archive

The simplest way to add a file to an already existing archive is the `--append' ( `-r' ) operation, which writes specified files into the archive whether or not they are already among the archived files.

When you use `--append' , you must specify file name arguments, as there is no default. If you specify a file that already exists in the archive, another copy of the file will be added to the end of the archive. As with other operations, the member names of the newly added files will be exactly the same as their names given on the command line. The `--verbose' ( `-v' ) option will print out the names of the files as they are written into the archive.

`--append' cannot be performed on some tape drives, unfortunately, due to deficiencies in the formats those tape drives use. The archive must be a valid tar archive, or else the results of using this operation will be unpredictable. See section Tapes and Other Archive Media .

To demonstrate using `--append' to add a file to an archive, create a file called `rock' in the `practice' directory. Make sure you are in the `practice' directory. Then, run the following tar command to add `rock' to `collection.tar' :

$ tar --append --file=collection.tar rock

If you now use the `--list' ( `-t' ) operation, you will see that `rock' has been added to the archive:

$ tar --list --file=collection.tar
-rw-r--r-- me/user          28 1996-10-18 16:31 jazz
-rw-r--r-- me/user          21 1996-09-23 16:44 blues
-rw-r--r-- me/user          20 1996-09-23 16:44 folk
-rw-r--r-- me/user          20 1996-09-23 16:44 rock
4.2.2.2 Multiple Members with the Same Name

You can use `--append' ( `-r' ) to add copies of files which have been updated since the archive was created. (However, we do not recommend doing this since there is another tar option called `--update' ; See section Updating an Archive , for more information. We describe this use of `--append' here for the sake of completeness.) When you extract the archive, the older version will be effectively lost. This works because files are extracted from an archive in the order in which they were archived. Thus, when the archive is extracted, a file archived later in time will replace a file of the same name which was archived earlier, even though the older version of the file will remain in the archive unless you delete all versions of the file.

Suppose you change the file `blues' and then append the changed version to `collection.tar' . As you saw above, the original `blues' is in the archive `collection.tar' . If you change the file and append the new version of the file to the archive, there will be two copies in the archive. When you extract the archive, the older version of the file will be extracted first, and then replaced by the newer version when it is extracted.

You can append the new, changed copy of the file `blues' to the archive in this way:

$ tar --append --verbose --file=collection.tar blues
blues

Because you specified the `--verbose' option, tar has printed the name of the file being appended as it was acted on. Now list the contents of the archive:

$ tar --list --verbose --file=collection.tar
-rw-r--r-- me/user          28 1996-10-18 16:31 jazz
-rw-r--r-- me/user          21 1996-09-23 16:44 blues
-rw-r--r-- me/user          20 1996-09-23 16:44 folk
-rw-r--r-- me/user          20 1996-09-23 16:44 rock
-rw-r--r-- me/user          58 1996-10-24 18:30 blues

The newest version of `blues' is now at the end of the archive (note the different creation dates and file sizes). If you extract the archive, the older version of the file `blues' will be replaced by the newer version. You can confirm this by extracting the archive and running `ls' on the directory.

If you wish to extract the first occurrence of the file `blues' from the archive, use the `--occurrence' option, as shown in the following example:

$ tar --extract -vv --occurrence --file=collection.tar blues
-rw-r--r-- me/user          21 1996-09-23 16:44 blues

See section Changing How tar Writes Files for more information on `--extract' , and see --occurrence for a description of the `--occurrence' option.

4.2.3 Updating an Archive

In the previous section, you learned how to use `--append' to add a file to an existing archive. A related operation is `--update' ( `-u' ). The `--update' operation updates a tar archive by comparing the date of the specified archive members against the date of the file with the same name. If the file has been modified more recently than the archive member, then the newer version of the file is added to the archive (as with `--append' ).

Unfortunately, you cannot use `--update' with magnetic tape drives. The operation will fail.


Both `--update' and `--append' work by adding to the end of the archive. When you extract a file from the archive, only the version stored last will wind up in the file system, unless you use the `--backup' option. See section Multiple Members with the Same Name , for a detailed discussion.

4.2.3.1 How to Update an Archive Using `--update'

You must use file name arguments with the `--update' ( `-u' ) operation. If you don't specify any files, tar won't act on any files and won't tell you that it didn't do anything (which may end up confusing you).

To see the `--update' option at work, create a new file, `classical' , in your practice directory, and add some extra text to the file `blues' , using any text editor. Then invoke tar with the `update' operation and the `--verbose' ( `-v' ) option specified, using the names of all the files in the `practice' directory as file name arguments:

$ tar --update -v -f collection.tar blues folk rock classical
blues
classical
$

Because we have specified verbose mode, tar prints out the names of the files it is working on, which in this case are the names of the files that needed to be updated. If you run `tar --list' and look at the archive, you will see `blues' and `classical' at its end. There will be a total of two versions of the member `blues' ; the one at the end will be newer and larger, since you added text before updating it.

The reason tar does not overwrite the older file when updating it is that writing to the middle of a section of tape is a difficult process. Tapes are not designed to go backward. See section Tapes and Other Archive Media , for more information about tapes.

`--update' ( `-u' ) is not suitable for performing backups for two reasons: it does not change directory content entries, and it lengthens the archive every time it is used. The GNU tar options intended specifically for backups are more efficient. If you need to run backups, please consult Performing Backups and Restoring Files .
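
The backup-oriented mechanism the manual alludes to is, for instance, the `--listed-incremental' option; a minimal sketch (archive and snapshot file names are arbitrary) looks like this, where the first run creates a full backup and later runs with the same snapshot file store only what changed:

$ tar --create --file=full.tar --listed-incremental=practice.snar practice
$ tar --create --file=incr1.tar --listed-incremental=practice.snar practice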


4.2.4 Combining Archives with `--concatenate'

Sometimes it may be convenient to add a second archive onto the end of an archive rather than adding individual files to the archive. To add one or more archives to the end of another archive, you should use the `--concatenate' ( `--catenate' , `-A' ) operation.

To use `--concatenate' , give the first archive with the `--file' option and name the rest of the archives to be concatenated on the command line. The members, and their member names, will be copied verbatim from those archives to the first one (11) . The new, concatenated archive will be called by the same name as the one given with the `--file' option. As usual, if you omit `--file' , tar will use the value of the environment variable TAPE , or, if this has not been set, the default archive name.


To demonstrate how `--concatenate' works, create two small archives called `bluesrock.tar' and `folkjazz.tar' , using the relevant files from `practice' :

$ tar -cvf bluesrock.tar blues rock
blues
rock
$ tar -cvf folkjazz.tar folk jazz
folk
jazz

If you like, you can run `tar --list' to make sure the archives contain what they are supposed to:

$ tar -tvf bluesrock.tar
-rw-r--r-- melissa/user    105 1997-01-21 19:42 blues
-rw-r--r-- melissa/user     33 1997-01-20 15:34 rock
$ tar -tvf folkjazz.tar
-rw-r--r-- melissa/user     20 1996-09-23 16:44 folk
-rw-r--r-- melissa/user     65 1997-01-30 14:15 jazz

We can concatenate these two archives with tar :

$ cd ..
$ tar --concatenate --file=bluesrock.tar folkjazz.tar

If you now list the contents of the `bluesrock.tar' , you will see that it also contains the archive members of `folkjazz.tar' :

$ tar --list --file=bluesrock.tar
blues
rock
folk
jazz

When you use `--concatenate' , the source and target archives must already exist and must have been created using compatible format parameters. Notice that tar does not check whether the archives it concatenates have compatible formats; it does not even check whether the files are really tar archives.

Like `--append' ( `-r' ), this operation cannot be performed on some tape drives, due to deficiencies in the formats those tape drives use.

It may seem more intuitive to you to want or try to use cat to concatenate two archives instead of using the `--concatenate' operation; after all, cat is the utility for combining files.

However, tar archives incorporate an end-of-file marker which must be removed if the concatenated archives are to be read properly as one archive. `--concatenate' removes the end-of-archive marker from the target archive before each new archive is appended. If you use cat to combine the archives, the result will not be a valid tar format archive. If you need to retrieve files from an archive that was added to using the cat utility, use the `--ignore-zeros' ( `-i' ) option. See section Ignoring Blocks of Zeros , for further information on dealing with archives improperly combined using the cat shell utility.
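
In other words, if two archives have already been glued together with cat, something like the following (the two archives are those from the earlier example; combined.tar is an arbitrary name) lets tar read past the first end-of-archive marker:

$ cat bluesrock.tar folkjazz.tar > combined.tar
$ tar --list --ignore-zeros --file=combined.tar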


4.2.5 Removing Archive Members Using `--delete'

You can remove members from an archive by using the `--delete' option. Specify the name of the archive with `--file' ( `-f' ) and then specify the names of the members to be deleted; if you list no member names, nothing will be deleted. The `--verbose' option will cause tar to print the names of the members as they are deleted. As with `--extract' , you must give the exact member names when using `tar --delete' . `--delete' will remove all versions of the named file from the archive. The `--delete' operation can run very slowly.

Unlike other operations, `--delete' has no short form.

This operation will rewrite the archive. You can only use `--delete' on an archive if the archive device allows you to write to any point on the media, such as a disk; because of this, it does not work on magnetic tapes. Do not try to delete an archive member from a magnetic tape; the action will not succeed, and you will be likely to scramble the archive and damage your tape. There is no safe way (except by completely re-writing the archive) to delete files from most kinds of magnetic tape. See section Tapes and Other Archive Media .

To delete all versions of the file `blues' from the archive `collection.tar' in the `practice' directory, make sure you are in that directory, and then,

$ tar --list --file=collection.tar
blues
folk
jazz
rock
$ tar --delete --file=collection.tar blues
$ tar --list --file=collection.tar
folk
jazz
rock


The `--delete' option has been reported to work properly when tar acts as a filter from stdin to stdout .
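
As a rough, untested sketch, using `--delete' as a filter means telling tar to read the archive from stdin and write the rewritten archive to stdout with `--file=-' :

$ tar --delete --file=- blues < collection.tar > collection-new.tar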

4.2.6 Comparing Archive Members with the File System

The `--compare' ( `-d' ), or `--diff' operation compares specified archive members against files with the same names, and then reports differences in file size, mode, owner, modification date and contents. You should only specify archive member names, not file names. If you do not name any members, then tar will compare the entire archive. If a file is represented in the archive but does not exist in the file system, tar reports a difference.

You have to specify the record size of the archive when modifying an archive with a non-default record size.

tar ignores files in the file system that do not have corresponding members in the archive.

The following example compares the archive members `rock' , `blues' and `funk' in the archive `bluesrock.tar' with files of the same name in the file system. (Note that there is no file, `funk' ; tar will report an error message.)

$ tar --compare --file=bluesrock.tar rock blues funk
rock
blues
tar: funk not found in archive

The spirit behind the `--compare' ( `--diff' , `-d' ) option is to check whether the archive represents the current state of files on disk, more than validating the integrity of the archive media. For this latter goal, see Verifying Data as It is Stored .

[Jul 30, 2019] The difference between tar and tar.gz archives

With tar.gz, to extract a single file the archiver first creates an intermediary tarball x.tar from x.tar.gz by uncompressing the whole archive, and then unpacks the requested files from this intermediary tarball. If the tar.gz archive is large, unpacking can take several hours or even days.
Jul 30, 2019 | askubuntu.com
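
To illustrate, extracting a single member from a compressed archive forces the whole stream to be decompressed, while the same extraction from a plain tar file needs no decompression step at all (file names below are just examples):

$ tar -xzf huge.tar.gz some/path/wanted.file    # whole .gz stream is decompressed
$ tar -xf  huge.tar    some/path/wanted.file    # no decompression needed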

[Jul 29, 2019] A Guide to Kill, Pkill and Killall Commands to Terminate a Process in Linux

Jul 26, 2019 | www.tecmint.com
... ... ...

How about killing a process using its process name?

You must know the process name before killing it; entering a wrong process name may kill the wrong process.

# pkill mysqld
Kill more than one process at a time.
# kill PID1 PID2 PID3

or

# kill -9 PID1 PID2 PID3

or

# kill -SIGKILL PID1 PID2 PID3
What if a process has too many instances and a number of child processes? For that we have the ' killall ' command. This is the only command in this family that takes a process name as its argument in place of a process number.

Syntax:

# killall [signal or option] Process Name

To kill all mysql instances along with their child processes, use the command as follows.

# killall mysqld

You can always verify whether the process is still running by using any of the commands below.

# service mysql status
# pgrep mysql
# ps -aux | grep mysql


[Jul 29, 2019] Locate Command in Linux

Jul 25, 2019 | linuxize.com

... ... ...

The locate command also accepts patterns containing globbing characters such as the wildcard character * . When the pattern contains no globbing characters the command searches for *PATTERN* , that's why in the previous example all files containing the search pattern in their names were displayed.

The wildcard is a symbol used to represent zero, one or more characters. For example, to search for all .md files on the system you would use:

locate *.md

To limit the search results use the -n option followed by the number of results you want to be displayed. For example, the following command will search for all .py files and display only 10 results:

locate -n 10 *.py

By default, locate performs case-sensitive searches. The -i ( --ignore-case ) option tells locate to ignore case and run a case-insensitive search.

locate -i readme.md
/home/linuxize/p1/readme.md
/home/linuxize/p2/README.md
/home/linuxize/p3/ReadMe.md

To display the count of all matching entries, use the -c ( --count ) option. The following command would return the number of all files containing .bashrc in their names:

locate -c .bashrc
6

By default, locate doesn't check whether the found files still exist on the file system. If you deleted a file after the latest database update, the file will still be included in the search results as long as it matches the search pattern.

To display only the names of the files that exist at the time locate is run use the -e ( --existing ) option. For example, the following would return only the existing .json files:

locate -e *.json
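
A related approach, not shown in the quoted article, is simply to refresh the locate database itself so that deleted files drop out of the results; on most distributions that is done with:

sudo updatedb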

If you need to run a more complex search you can use the -r ( --regexp ) option which allows you to search using a basic regexp instead of patterns. This option can be specified multiple times.
For example, to search for all .mp4 and .avi files on your system and ignore case you would run:

locate --regex -i "(\.mp4|\.avi)"

[Jul 29, 2019] How do I tar a directory of files and folders without including the directory itself - Stack Overflow

Jan 05, 2017 | stackoverflow.com

How do I tar a directory of files and folders without including the directory itself?


tvanfosson ,Jan 5, 2017 at 12:29

I typically do:
tar -czvf my_directory.tar.gz my_directory

What if I just want to include everything (including any hidden system files) in my_directory, but not the directory itself? I don't want:

my_directory
   --- my_file
   --- my_file
   --- my_file

I want:

my_file
my_file
my_file

PanCrit ,Feb 19 at 13:04

cd my_directory/ && tar -zcvf ../my_dir.tgz . && cd -

should do the job in one line. It works well for hidden files as well. "*" doesn't match hidden files via pathname expansion, at least in bash. Below is my experiment:

$ mkdir my_directory
$ touch my_directory/file1
$ touch my_directory/file2
$ touch my_directory/.hiddenfile1
$ touch my_directory/.hiddenfile2
$ cd my_directory/ && tar -zcvf ../my_dir.tgz . && cd ..
./
./file1
./file2
./.hiddenfile1
./.hiddenfile2
$ tar ztf my_dir.tgz
./
./file1
./file2
./.hiddenfile1
./.hiddenfile2

JCotton ,Mar 3, 2015 at 2:46

Use the -C switch of tar:
tar -czvf my_directory.tar.gz -C my_directory .

The -C my_directory tells tar to change the current directory to my_directory , and then . means "add the entire current directory" (including hidden files and sub-directories).

Make sure you do -C my_directory before you do . or else you'll get the files in the current directory.

Digger ,Mar 23 at 6:52

You can also create archive as usual and extract it with:
tar --strip-components 1 -xvf my_directory.tar.gz

jwg ,Mar 8, 2017 at 12:56

Have a look at --transform / --xform , it gives you the opportunity to massage the file name as the file is added to the archive:
% mkdir my_directory
% touch my_directory/file1
% touch my_directory/file2
% touch my_directory/.hiddenfile1
% touch my_directory/.hiddenfile2
% tar -v -c -f my_dir.tgz --xform='s,my_directory/,,' $(find my_directory -type f)
my_directory/file2
my_directory/.hiddenfile1
my_directory/.hiddenfile2
my_directory/file1
% tar -t -f my_dir.tgz 
file2
.hiddenfile1
.hiddenfile2
file1

Transform expression is similar to that of sed , and we can use separators other than / ( , in the above example).
https://www.gnu.org/software/tar/manual/html_section/tar_52.html

Alex ,Mar 31, 2017 at 15:40

TL;DR
find /my/dir/ -printf "%P\n" | tar -czf mydir.tgz --no-recursion -C /my/dir/ -T -

With some conditions (archive only files, dirs and symlinks):

find /my/dir/ -printf "%P\n" -type f -o -type l -o -type d | tar -czf mydir.tgz --no-recursion -C /my/dir/ -T -
Explanation

The below unfortunately includes a parent directory ./ in the archive:

tar -czf mydir.tgz -C /my/dir .

You can move all the files out of that directory by using the --transform configuration option, but that doesn't get rid of the . directory itself. It becomes increasingly difficult to tame the command.

You could use $(find ...) to add a file list to the command (like in magnus' answer ), but that potentially causes a "file list too long" error. The best way is to combine it with tar's -T option, like this:

find /my/dir/ -printf "%P\n" -type f -o -type l -o -type d | tar -czf mydir.tgz --no-recursion -C /my/dir/ -T -

Basically what it does is list all files ( -type f ), links ( -type l ) and subdirectories ( -type d ) under your directory, make all filenames relative using -printf "%P\n" , and then pass that to the tar command (it takes filenames from STDIN using -T - ). The -C option is needed so tar knows where the files with relative names are located. The --no-recursion flag is so that tar doesn't recurse into folders it is told to archive (causing duplicate files).

If you need to do something special with filenames (filtering, following symlinks etc), the find command is pretty powerful, and you can test it by just removing the tar part of the above command:

$ find /my/dir/ -printf "%P\n" -type f -o -type l -o -type d
> textfile.txt
> documentation.pdf
> subfolder2
> subfolder
> subfolder/.gitignore

For example, if you want to filter out PDF files, add ! -name '*.pdf'

$ find /my/dir/ -printf "%P\n" -type f ! -name '*.pdf' -o -type l -o -type d
> textfile.txt
> subfolder2
> subfolder
> subfolder/.gitignore
Non-GNU find

The command uses printf (available in GNU find ) which tells find to print its results with relative paths. However, if you don't have GNU find , this works to make the paths relative (removes parents with sed ):

find /my/dir/ -type f -o -type l -o -type d | sed s,^/my/dir/,, | tar -czf mydir.tgz --no-recursion -C /my/dir/ -T -

BrainStone ,Dec 21, 2016 at 22:14

This Answer should work in most situations. Notice however how the filenames are stored in the tar file as, for example, ./file1 rather than just file1 . I found that this caused problems when using this method to manipulate tarballs used as package files in BuildRoot .

One solution is to use some Bash globs to list all files except for .. like this:

tar -C my_dir -zcvf my_dir.tar.gz .[^.]* ..?* *

This is a trick I learnt from this answer .

Now tar will return an error if there are no files matching ..?* or .[^.]* , but it will still work. If the error is a problem (you are checking for success in a script), this works:

shopt -s nullglob
tar -C my_dir -zcvf my_dir.tar.gz .[^.]* ..?* *
shopt -u nullglob

Though now we are messing with shell options, we might decide that it is neater to have * match hidden files:

shopt -s dotglob
tar -C my_dir -zcvf my_dir.tar.gz *
shopt -u dotglob

This might not work where your shell globs * in the current directory, so alternatively, use:

shopt -s dotglob
cd my_dir
tar -zcvf ../my_dir.tar.gz *
cd ..
shopt -u dotglob

PanCrit ,Jun 14, 2010 at 6:47

cd my_directory
tar zcvf ../my_directory.tar.gz *

anion ,May 11, 2018 at 14:10

If it's a Unix/Linux system, and you care about hidden files (which will be missed by *), you need to do:
cd my_directory
tar zcvf ../my_directory.tar.gz * .??*

I don't know what hidden files look like under Windows.

gpz500 ,Feb 27, 2014 at 10:46

I would propose the following Bash function (first argument is the path to the dir, second argument is the basename of resulting archive):
function tar_dir_contents ()
{
    local DIRPATH="$1"
    local TARARCH="$2.tar.gz"
    local ORGIFS="$IFS"
    IFS=$'\n'
    tar -C "$DIRPATH" -czf "$TARARCH" $( ls -a "$DIRPATH" | grep -v '\(^\.$\)\|\(^\.\.$\)' )
    IFS="$ORGIFS"
}

You can run it in this way:

$ tar_dir_contents /path/to/some/dir my_archive

and it will generate the archive my_archive.tar.gz within current directory. It works with hidden (.*) elements and with elements with spaces in their filename.

med ,Feb 9, 2017 at 17:19

cd my_directory && tar -czvf ../my_directory.tar.gz $(ls -A) && cd ..

This one worked for me, and it includes all hidden files without putting all the files in a root directory named "." as in tomoe's answer.

Breno Salgado ,Apr 16, 2016 at 15:42

Use pax.

Pax is a deprecated package but does the job perfectly and in a simple fashion.

pax -w > mydir.tar mydir

asynts ,Jun 26 at 16:40

Simplest way I found:

cd my_dir && tar -czvf ../my_dir.tar.gz *

marcingo ,Aug 23, 2016 at 18:04

# tar all files within and deeper in a given directory
# with no prefixes ( neither <directory>/ nor ./ )
# parameters: <source directory> <target archive file>
function tar_all_in_dir {
    { cd "$1" && find -type f -print0; } \
    | cut --zero-terminated --characters=3- \
    | tar --create --file="$2" --directory="$1" --null --files-from=-
}

Safely handles filenames with spaces or other unusual characters. You can optionally add a -name '*.sql' or similar filter to the find command to limit the files included.
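
A call to the function might look like this (paths are illustrative), producing /tmp/my_directory.tar with no directory prefix on the member names:

$ tar_all_in_dir /path/to/my_directory /tmp/my_directory.tar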

user1456599 ,Feb 13, 2013 at 21:37

 tar -cvzf  tarlearn.tar.gz --remove-files mytemp/*

If the folder is mytemp, then applying the above will archive and remove all the files in the folder but leave the folder itself alone.

 tar -cvzf  tarlearn.tar.gz --remove-files --exclude='*12_2008*' --no-recursion mytemp/*

You can give exclude patterns and also specify that subfolders should not be descended into.

Aaron Digulla ,Jun 2, 2009 at 15:33

tar -C my_dir -zcvf my_dir.tar.gz `ls my_dir`

[Jul 28, 2019] command line - How do I extract a specific file from a tar archive - Ask Ubuntu

Jul 28, 2019 | askubuntu.com

CMCDragonkai, Jun 3, 2016 at 13:04

1. Using the Command-line tar

Yes, just give the full stored path of the file after the tarball name.

Example: suppose you want file etc/apt/sources.list from etc.tar :

tar -xf etc.tar etc/apt/sources.list

This will extract sources.list and create the directories etc/apt under the current directory.

2. Extract it with the Archive Manager

Open the tar in Archive Manager from Nautilus, go down into the folder hierarchy to find the file you need, and extract it.

3. Using Nautilus/Archive-Mounter

Right-click the tar in Nautilus, and select Open with ArchiveMounter.

The tar will now appear similar to a removable drive on the left, and you can explore/navigate it like a normal drive and drag/copy/paste any file(s) you need to any destination.

[Jul 28, 2019] iso - midnight commander rules for accessing archives through VFS - Unix Linux Stack Exchange

Jul 28, 2019 | unix.stackexchange.com

,

Midnight Commander uses a virtual filesystem ( VFS ) for displaying files, such as the contents of a .tar.gz archive or of an .iso image. This is configured in mc.ext with rules such as this one ( Open is Enter , View is F3 ):
regex/\.([iI][sS][oO])$
    Open=%cd %p/iso9660://
    View=%view{ascii} isoinfo -d -i %f

When I press Enter on an .iso file, mc will open the .iso and I can browse individual files. This is very useful.

Now my question: I also have files which are disk images, i.e. created with pv /dev/sda1 > sda1.img

I would like mc to "browse" the files inside these images in the same fashion as .iso .

Is this possible? What would such a rule look like?

[Jul 28, 2019] Find files in tar archives and extract specific files from tar archives - Raymii.org

Jul 28, 2019 | raymii.org

Find files in tar archives and extract specific files from tar archives

Published: 17-10-2018 | Author: Remy van Elst | Text only version of this article


This is a small tip on how to find specific files in tar archives and how to extract those specific files from said archive. Useful when you have a 2 GB tar file with millions of small files and you need just one.


Finding files in tar archives

Using the command line flags -ft (long flags are --file --list ) we can list the contents of an archive. Using grep you can search that list for the correct file. Example:

tar -ft large_file.tar.gz | grep "the-filename-you-want"

Output:

"full-path/to-the-file/in-the-archive/the-filename-you-want"

With a modern tar on a modern Linux you can omit the compression flag ( -z / -j ) and just pass a .tar.gz or .tar.bz2 file directly; tar detects the compression automatically.

Extracting one file from a tar archive

When extracting a tar archive, you can specify the filename of the file you want (full path, use the command above to find it), as the second command line option. Example:

tar -xf large_file.tar.gz "full-path/to-the-file/in-the-archive/the-filename-you-want"

It might just take a long time, at least for my 2 GB file it took a while.

An alternative is to use "mc" (midnight commander), which can open archive files just as a local folder.

Tags: archive , bash , grep , shell , snippets , tar

[Jul 28, 2019] How to Use Midnight Commander, a Visual File Manager

Jul 28, 2019 | www.linode.com
  1. Another tool that can save you time is Midnight Commander's user menu. Go back to /tmp/test where you created nine files. Press F2 and bring up the user menu. Select Compress the current subdirectory (tar.gz) . After you choose the name for the archive, this will be created in /tmp (one level up from the directory being compressed). If you highlight the .tar.gz file and press ENTER you'll notice it will open like a regular directory. This allows you to browse archives and extract files by simply copying them ( F5 ) to the opposite panel's working directory.

    Midnight Commander User Menu

  2. To find out the size of a directory (actually, the size of all the files it contains), highlight the directory and then press CTRL+SPACE .
  3. To search, go up in your directory tree until you reach the top level, / , called root directory. Now press F9 , then c , followed by f . After the Find File dialog opens, type *.gz . This will find any accessible gzip archive on the system. In the results dialog, press l (L) for Panelize . All the results will be fed to one of your panels so you can easily browse, copy, view and so on. If you enter a directory from that list, you lose the list of found files, but you can easily return to it with F9 , l (L) then z (to select Panelize from the Left menu).

    Midnight Commander - Find File Dialog

  4. Managing files is not always done locally. Midnight Commander also supports accessing remote filesystems through SSH's Secure File Transfer Protocol, SFTP . This way you can easily transfer files between servers.

    Press F9 , followed by l (L), then select the SFTP link menu entry. In the dialog box titled SFTP to machine enter sftp://example@203.0.113.1 . Replace example with the username you have created on the remote machine and 203.0.113.1 with the IP address of your server. This will work only if the server at the other end accepts password logins. If you're logging in with SSH keys, then you'll first need to create and/or edit ~/.ssh/config . It could look something like this:

    ~/.ssh/config

    Host sftp_server
        HostName 203.0.113.1
        Port 22
        User your_user
        IdentityFile ~/.ssh/id_rsa

    You can choose whatever you want as the Host value; it's only an identifier. IdentityFile is the path to your private SSH key.

    After the config file is setup, access your SFTP server by typing the identifier value you set after Host in the SFTP to machine dialog. In this example, enter sftp_server .

[Jul 28, 2019] Bartosz Kosarzycki's blog Midnight Commander how to compress a file-directory; Make a tar archive with midnight commander

Jul 28, 2019 | kosiara87.blogspot.com

Midnight Commander how to compress a file/directory; Make a tar archive with midnight commander

To compress a file in Midnight Commader (e.g. to make a tar.gz archive) navigate to the directory you want to pack and press 'F2'. This will bring up the 'User menu'. Choose the option 'Compress the current subdirectory'. This will compress the WHOLE directory you're currently in - not the highlighted directory.

[Jul 26, 2019] Sort Command in Linux [10 Useful Examples] by Christopher Murray

Notable quotes:
"... The sort command option "k" specifies a field, not a column. ..."
"... In gnu sort, the default field separator is 'blank to non-blank transition' which is a good default to separate columns. ..."
"... What is probably missing in that article is a short warning about the effect of the current locale. It is a common mistake to assume that the default behavior is to sort according ASCII texts according to the ASCII codes. ..."
Jul 12, 2019 | linuxhandbook.com
5. Sort by months [option -M]

Sort also has built-in functionality to arrange by month. It recognizes several formats based on locale-specific information. I tried to demonstrate some unique tests to show that it will arrange by date-day, but not year. Month abbreviations display before full names.

Here is the sample text file in this example:

March
Feb
February
April
August
July
June
November
October
December
May
September
1
4
3
6
01/05/19
01/10/19
02/06/18

Let's sort it by months using the -M option:

sort filename.txt -M

Here's the output you'll see:

01/05/19
01/10/19
02/06/18
1
3
4
6
Jan
Feb
February
March
April
May
June
July
August
September
October
November
December

... ... ...

7. Sort Specific Column [option -k]

If you have a table in your file, you can use the -k option to specify which column to sort. I added some arbitrary numbers as a third column and will display the output sorted by each column. I've included several examples to show the variety of output possible. Options are added following the column number.

1. MX Linux 100
2. Manjaro 400
3. Mint 300
4. elementary 500
5. Ubuntu 200

sort filename.txt -k 2

This will sort the text on the second column in alphabetical order:

4. elementary 500
2. Manjaro 400
3. Mint 300
1. MX Linux 100
5. Ubuntu 200
sort filename.txt -k 3n

This will sort the text by the numerals on the third column.

1. MX Linux 100
5. Ubuntu 200
3. Mint 300
2. Manjaro 400
4. elementary 500
sort filename.txt -k 3nr

Same as the above command, except that the sort order has been reversed.

4. elementary 500
2. Manjaro 400
3. Mint 300
5. Ubuntu 200
1. MX Linux 100
8. Sort and remove duplicates [option -u]

If you have a file with potential duplicates, the -u option will make your life much easier. Remember that sort will not make changes to your original data file. I chose to create a new file containing the list with duplicates removed. Below you'll see the input and then the contents of each file after the command is run.


1. MX Linux
2. Manjaro
3. Mint
4. elementary
5. Ubuntu
1. MX Linux
2. Manjaro
3. Mint
4. elementary
5. Ubuntu
1. MX Linux
2. Manjaro
3. Mint
4. elementary
5. Ubuntu

sort filename.txt -u > filename_duplicates.txt

Here's the output files sorted and without duplicates.

1. MX Linux 
2. Manjaro 
3. Mint 
4. elementary 
5. Ubuntu
9. Ignore case while sorting [option -f]

Many modern distros running sort will implement ignore case by default. If yours does not, adding the -f option will produce the expected results.

sort filename.txt -f

Here's the output where cases are ignored by the sort command:

alpha
alPHa
Alpha
ALpha
beta
Beta
BEta
BETA
10. Sort by human numeric values [option -h]

This option allows the comparison of alphanumeric values like 1k (i.e. 1000).

sort filename.txt -h

Here's the sorted output:

10.0
100
1000.0
1k

I hope this tutorial helped you get the basic usage of the sort command in Linux. If you have some cool sort trick, why not share it with us in the comment section?


John
The sort command option "k" specifies a field, not a column. In your example all five lines have the same character in column 2 – a "."

Stephane Chauveau

In gnu sort, the default field separator is 'blank to non-blank transition' which is a good default to separate columns. In his example, the "." is part of the first column so it should work fine. If --debug is used then the range of characters used as keys is dumped.

What is probably missing in that article is a short warning about the effect of the current locale. It is a common mistake to assume that the default behavior is to sort ASCII texts according to the ASCII codes. For example, the command echo $(printf ".\nx\n0\nX\n@\në" | sort) produces ". 0 @ X x ë" with LC_ALL=C but ". @ 0 ë x X" with LC_ALL=en_US.UTF-8.

[Jul 26, 2019] How To Check Swap Usage Size and Utilization in Linux by Vivek Gite

Jul 26, 2019 | www.cyberciti.biz

The procedure to check swap space usage and size in Linux is as follows:

  1. Open a terminal application.
  2. To see swap size in Linux, type the command: swapon -s .
  3. You can also refer to the /proc/swaps file to see swap areas in use on Linux.
  4. Type free -m to see both your ram and your swap space usage in Linux.
  5. Finally, one can use the top or htop command to look for swap space utilization on Linux too.
How to Check Swap Space in Linux using /proc/swaps file

Type the following cat command to see total and used swap size:
# cat /proc/swaps
Sample outputs:

Filename                           Type            Size    Used    Priority
/dev/sda3                               partition       6291448 65680   0

Another option is to type the grep command as follows:
grep Swap /proc/meminfo

SwapCached:            0 kB
SwapTotal:        524284 kB
SwapFree:         524284 kB
Look for swap space in Linux using swapon command

Type the following command to show swap usage summary by device
# swapon -s
Sample outputs:

Filename                           Type            Size    Used    Priority
/dev/sda3                               partition       6291448 65680   0
Use free command to monitor swap space usage

Use the free command as follows:
# free -g
# free -k
# free -m

Sample outputs:

             total       used       free     shared    buffers     cached
Mem:         11909      11645        264          0        324       8980
-/+ buffers/cache:       2341       9568
Swap:         6143         64       6079
See swap size in Linux using vmstat command

Type the following vmstat command:
# vmstat
# vmstat 1 5

... ... ...


[Jul 26, 2019] Cheat.sh Shows Cheat Sheets On The Command Line Or In Your Code Editor

The choice of shell as a programming language is strange, but the idea is good...
Notable quotes:
"... The tool is developed by Igor Chubin, also known for its console-oriented weather forecast service wttr.in , which can be used to retrieve the weather from the console using only cURL or Wget. ..."
Jul 26, 2019 | www.linuxuprising.com

While it does have its own cheat sheet repository too, the project is actually concentrated around the creation of a unified mechanism to access well developed and maintained cheat sheet repositories.

The tool is developed by Igor Chubin, also known for its console-oriented weather forecast service wttr.in , which can be used to retrieve the weather from the console using only cURL or Wget.

It's worth noting that cheat.sh is not new. In fact it had its initial commit around May, 2017, and is a very popular repository on GitHub. But I personally only came across it recently, and I found it very useful, so I figured there must be some Linux Uprising readers who are not aware of this cool gem.

cheat.sh features & more
cheat.sh tar example
cheat.sh major features:

The command line client features a special shell mode with a persistent queries context and readline support. It also has a query history, it integrates with the clipboard, supports tab completion for shells like Bash, Fish and Zsh, and it includes the stealth mode I mentioned in the cheat.sh features.

The web, curl and cht.sh (command line) interfaces all make use of https://cheat.sh/ but if you prefer, you can self-host it .

It should be noted that each editor plugin supports a different feature set (configurable server, multiple answers, toggle comments, and so on). You can view a feature comparison of each cheat.sh editor plugin on the Editors integration section of the project's GitHub page.

Want to contribute a cheat sheet? See the cheat.sh guide on editing or adding a new cheat sheet.

Interested in bookmarking commands instead? You may want to give Marker, a command bookmark manager for the console , a try.

cheat.sh curl / command line client usage examples
Examples of using cheat.sh using the curl interface (this requires having curl installed as you'd expect) from the command line:

Show the tar command cheat sheet:

curl cheat.sh/tar

Example with output:
$ curl cheat.sh/tar
# To extract an uncompressed archive:
tar -xvf /path/to/foo.tar

# To create an uncompressed archive:
tar -cvf /path/to/foo.tar /path/to/foo/

# To extract a .gz archive:
tar -xzvf /path/to/foo.tgz

# To create a .gz archive:
tar -czvf /path/to/foo.tgz /path/to/foo/

# To list the content of an .gz archive:
tar -ztvf /path/to/foo.tgz

# To extract a .bz2 archive:
tar -xjvf /path/to/foo.tgz

# To create a .bz2 archive:
tar -cjvf /path/to/foo.tgz /path/to/foo/

# To extract a .tar in specified Directory:
tar -xvf /path/to/foo.tar -C /path/to/destination/

# To list the content of an .bz2 archive:
tar -jtvf /path/to/foo.tgz

# To create a .gz archive and exclude all jpg,gif,... from the tgz
tar czvf /path/to/foo.tgz --exclude=\*.{jpg,gif,png,wmv,flv,tar.gz,zip} /path/to/foo/

# To use parallel (multi-threaded) implementation of compression algorithms:
tar -z ... -> tar -Ipigz ...
tar -j ... -> tar -Ipbzip2 ...
tar -J ... -> tar -Ipixz ...

cht.sh also works instead of cheat.sh:
curl cht.sh/tar

Want to search for a keyword in all cheat sheets? Use:
curl cheat.sh/~keyword

List the Python programming language cheat sheet for random list :
curl cht.sh/python/random+list

Example with output:
$ curl cht.sh/python/random+list
#  python - How to randomly select an item from a list?
#  
#  Use random.choice
#  (https://docs.python.org/2/library/random.htmlrandom.choice):

import random

foo = ['a', 'b', 'c', 'd', 'e']
print(random.choice(foo))

#  For cryptographically secure random choices (e.g. for generating a
#  passphrase from a wordlist), use random.SystemRandom
#  (https://docs.python.org/2/library/random.htmlrandom.SystemRandom)
#  class:

import random

foo = ['battery', 'correct', 'horse', 'staple']
secure_random = random.SystemRandom()
print(secure_random.choice(foo))

#  [Pēteris Caune] [so/q/306400] [cc by-sa 3.0]

Replace python with some other programming language supported by cheat.sh, and random+list with the cheat sheet you want to show.

Want to eliminate the comments from your answer? Add ?Q at the end of the query (below is an example using the same /python/random+list):

$ curl cht.sh/python/random+list?Q
import random

foo = ['a', 'b', 'c', 'd', 'e']
print(random.choice(foo))

import random

foo = ['battery', 'correct', 'horse', 'staple']
secure_random = random.SystemRandom()
print(secure_random.choice(foo))

For more flexibility and tab completion you can use cht.sh, the command line cheat.sh client; you'll find instructions for how to install it further down this article. Examples of using the cht.sh command line client:

Show the tar command cheat sheet:

cht.sh tar

List the Python programming language cheat sheet for random list :
cht.sh python random list

There is no need to use quotes with multiple keywords.

You can start the cht.sh client in a special shell mode using:

cht.sh --shell

And then you can start typing your queries. Example:
$ cht.sh --shell
cht.sh> bash loop

If all your queries are about the same programming language, you can start the client in the special shell mode, directly in that context. As an example, start it with the Bash context using:
cht.sh --shell bash

Example with output:
$ cht.sh --shell bash
cht.sh/bash> loop
...........
cht.sh/bash> switch case

Want to copy the previously listed answer to the clipboard? Type c , then press Enter to copy the whole answer, or type C and press Enter to copy it without comments.

Type help in the cht.sh interactive shell mode to see all available commands. Also look under the Usage section from the cheat.sh GitHub project page for more options and advanced usage.

How to install cht.sh command line client
You can use cheat.sh in a web browser, from the command line with the help of curl and without having to install anything else, as explained above, as a code editor plugin, or using its command line client which has some extra features, which I already mentioned. The steps below are for installing this cht.sh command line client.

If you'd rather install a code editor plugin for cheat.sh, see the Editors integration page.

1. Install dependencies.

To install the cht.sh command line client, the curl command line tool will be used, so this needs to be installed on your system. Another dependency is rlwrap , which is required by the cht.sh special shell mode. Install these dependencies as follows.

sudo apt install curl rlwrap

sudo dnf install curl rlwrap

sudo pacman -S curl rlwrap

sudo zypper install curl rlwrap

The packages seem to be named the same on most (if not all) Linux distributions, so if your Linux distribution is not on this list, just install the curl and rlwrap packages using your distro's package manager.

2. Download and install the cht.sh command line interface.

You can install this either for your user only (so only you can run it), or for all users:

curl https://cht.sh/:cht.sh > ~/.bin/cht.sh

chmod +x ~/.bin/cht.sh

curl https://cht.sh/:cht.sh | sudo tee /usr/local/bin/cht.sh

sudo chmod +x /usr/local/bin/cht.sh

If the first command appears to have frozen displaying only the cURL output, press the Enter key and you'll be prompted to enter your password in order to save the file to /usr/local/bin .

You may also download and install the cheat.sh command completion for Bash or Zsh:

mkdir ~/.bash.d

curl https://cheat.sh/:bash_completion > ~/.bash.d/cht.sh

echo ". ~/.bash.d/cht.sh" >> ~/.bashrc

mkdir ~/.zsh.d

curl https://cheat.sh/:zsh > ~/.zsh.d/_cht

echo 'fpath=(~/.zsh.d/ $fpath)' >> ~/.zshrc

Open a new shell / terminal and it will load the cheat.sh completion.

[Jul 26, 2019] What Is /dev/null in Linux by Alexandru Andrei

Images removed...
Jul 23, 2019 | www.maketecheasier.com
... ... ...

In technical terms, "/dev/null" is a virtual device file. As far as programs are concerned, these are treated just like real files. Utilities can request data from this kind of source, and the operating system feeds them data. But, instead of reading from disk, the operating system generates this data dynamically. An example of such a file is "/dev/zero."

In this case, however, you will write to a device file. Whatever you write to "/dev/null" is discarded, forgotten, thrown into the void. To understand why this is useful, you must first have a basic understanding of standard output and standard error in Linux or *nix type operating systems.

Related : How to Use the Tee Command in Linux

stdout and stderr

A command-line utility can generate two types of output. Standard output is sent to stdout. Errors are sent to stderr.

By default, stdout and stderr are associated with your terminal window (or console). This means that anything sent to stdout and stderr is normally displayed on your screen. But through shell redirections, you can change this behavior. For example, you can redirect stdout to a file. This way, instead of displaying output on the screen, it will be saved to a file for you to read later – or you can redirect stdout to a physical device, say, a digital LED or LCD display.
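
For instance, a plain redirection of stdout to a file looks like this (the file name is arbitrary); nothing appears on screen, and the listing ends up in the file:

ls -l /etc > listing.txt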

A full article about pipes and redirections is available if you want to learn more.

Related : 12 Useful Linux Commands for New User

Use /dev/null to Get Rid of Output You Don't Need

Since there are two types of output, standard output and standard error, the first use case is to filter out one type or the other. It's easier to understand through a practical example. Let's say you're looking for a string in "/sys" to find files that refer to power settings.

grep -r power /sys/

There will be a lot of files that a regular, non-root user cannot read. This will result in many "Permission denied" errors.

These clutter the output and make it harder to spot the results that you're looking for. Since "Permission denied" errors are part of stderr, you can redirect them to "/dev/null."

grep -r power /sys/ 2>/dev/null

As you can see, this is much easier to read.

In other cases, it might be useful to do the reverse: filter out standard output so you can only see errors.

ping google.com 1>/dev/null

The screenshot above shows that, without redirecting, ping displays its normal output when it can reach the destination machine. In the second command, nothing is displayed while the network is online, but as soon as it gets disconnected, only error messages are displayed.

You can redirect both stdout and stderr to two different locations.

ping google.com 1>/dev/null 2>error.log

In this case, stdout messages won't be displayed at all, and error messages will be saved to the "error.log" file.

Redirect All Output to /dev/null

Sometimes it's useful to get rid of all output. There are two ways to do this.

grep -r power /sys/ >/dev/null 2>&1

The string >/dev/null means "send stdout to /dev/null," and the second part, 2>&1 , means "send stderr to wherever stdout is going." In this case you have to refer to stdout as "&1" instead of simply "1." Writing "2>1" would just redirect stderr to a file named "1."

What's important to note here is that the order is important. If you reverse the redirect parameters like this:

grep -r power /sys/ 2>&1 >/dev/null

it won't work as intended. That's because as soon as 2>&1 is interpreted, stderr is sent to stdout's current destination, the screen, and displayed there. Next, stdout is suppressed when sent to "/dev/null." The final result is that you will see errors on the screen instead of suppressing all output. If you can't remember the correct order, there's a simpler redirect that is much easier to type:

grep -r power /sys/ &>/dev/null

In this case, &>/dev/null is equivalent to saying "redirect both stdout and stderr to this location."

Other Examples Where It Can Be Useful to Redirect to /dev/null

Say you want to see how fast your disk can read sequential data. The test is not extremely accurate, but accurate enough. You can use dd for this, but dd either outputs to stdout or can be instructed to write to a file. With of=/dev/null you can tell dd to write to this virtual file. You don't even have to use shell redirections here: if= specifies the location of the input file to be read, and of= specifies the name of the output file to write to.

dd if=debian-disk.qcow2 of=/dev/null status=progress bs=1M iflag=direct

In some scenarios, you may want to see how fast you can download from a server. But you don't want to write to your disk unnecessarily. Simply enough, don't write to a regular file, write to "/dev/null."

wget -O /dev/null http://ftp.halifax.rwth-aachen.de/ubuntu-releases/18.04/ubuntu-18.04.2-desktop-amd64.iso
Conclusion

Hopefully, the examples in this article can inspire you to find your own creative ways to use "/dev/null."

Know an interesting use-case for this special device file? Leave a comment below and share the knowledge!

[Jul 26, 2019] How to check open ports in Linux using the CLI by Vivek Gite

Jul 26, 2019 | www.cyberciti.biz

Using netstat to list open ports

Type the following netstat command
sudo netstat -tulpn | grep LISTEN

... ... ...

For example, TCP port 631 is opened by the cupsd process, and cupsd is listening only on the loopback address (127.0.0.1). Similarly, TCP port 22 is opened by the sshd process, and sshd is listening on all IP addresses for ssh connections:

Proto Recv-Q Send-Q Local Address           Foreign Address         State       User       Inode      PID/Program name 
tcp   0      0      127.0.0.1:631           0.0.0.0:*               LISTEN      0          43385      1821/cupsd  
tcp   0      0      0.0.0.0:22              0.0.0.0:*               LISTEN      0          44064      1823/sshd


Use ss to list open ports

The ss command is used to dump socket statistics. It shows information similar to netstat, and it can display more TCP and state information than other tools. The syntax is:
sudo ss -tulpn

... ... ...
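For example, to check whether a particular service is listening (the port numbers below are simply the ones from the netstat example above), filter the output:

sudo ss -tulpn | grep ':22'
sudo ss -tulpn | grep ':631'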

Vivek Gite is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting. Get the latest tutorials on SysAdmin, Linux/Unix and open source topics via RSS/XML feed or weekly email newsletter.

[Jul 26, 2019] The day the virtual machine manager died by Nathan Lager

"Dangerous" commands like dd should probably be always typed first in the editor and only when you verity that you did not make a blunder , executed...
A good decision was to go home and think the situation over, not to aggravate it with impulsive attempts at correction, which typically only make things worse.
The failure to check the health of backups suggests that this guy is an arrogant sucker, despite his 20 years of sysadmin experience.
Notable quotes:
"... I started dd as root , over the top of an EXISTING DISK ON A RUNNING VM. What kind of idiot does that?! ..."
"... Since my VMs were still running, and I'd already done enough damage for one night, I stopped touching things and went home. ..."
Jul 26, 2019 | www.redhat.com

... ... ...

See, my RHEV manager was a VM running on a stand-alone Kernel-based Virtual Machine (KVM) host, separate from the cluster it manages. I had been running RHEV since version 3.0, before hosted engines were a thing, and I hadn't gone through the effort of migrating. I was already in the process of building a new set of clusters with a new manager, but this older manager was still controlling most of our production VMs. It had filled its disk again, and the underlying database had stopped itself to avoid corruption.

See, for whatever reason, we had never set up disk space monitoring on this system. It's not like it was an important box, right?

So, I logged into the KVM host that ran the VM, and started the well-known procedure of creating a new empty disk file, and then attaching it via virsh . The procedure goes something like this: Become root , use dd to write a stream of zeros to a new file, of the proper size, in the proper location, then use virsh to attach the new disk to the already running VM. Then, of course, log into the VM and do your disk expansion.
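For reference, a minimal sketch of that procedure (the VM name, disk path, and size are purely illustrative):

# create a ~40 GB file of zeros for the new virtual disk
sudo dd if=/dev/zero of=/var/lib/libvirt/images/vmname-disk3.img bs=1M count=40000 status=progress
# attach it to the running VM as an additional disk
sudo virsh attach-disk vmname /var/lib/libvirt/images/vmname-disk3.img vdc --persistent
# then log into the guest and extend the volume and filesystem onto the new disk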

I logged in, ran sudo -i , and started my work. I ran cd /var/lib/libvirt/images , ran ls -l to find the existing disk images, and then started carefully crafting my dd command:

dd ... bs=1k count=40000000 if=/dev/zero ... of=./vmname-disk ...

Which was the next disk again? <Tab> of=vmname-disk2.img <Back arrow, Back arrow, Back arrow, Back arrow, Backspace> Don't want to dd over the existing disk, that'd be bad. Let's change that 2 to a 3 , and Enter . OH CRAP, I CHANGED THE 2 TO A 2 NOT A 3 ! <Ctrl+C><Ctrl+C><Ctrl+C><Ctrl+C><Ctrl+C><Ctrl+C>

I still get sick thinking about this. I'd done the stupidest thing I possibly could have done, I started dd as root , over the top of an EXISTING DISK ON A RUNNING VM. What kind of idiot does that?! (The kind that's at work late, trying to get this one little thing done before he heads off to see his friend. The kind that thinks he knows better, and thought he was careful enough to not make such a newbie mistake. Gah.)

So, how fast does dd start writing zeros? Faster than I can move my fingers from the Enter key to the Ctrl+C keys. I tried a number of things to recover the running disk from memory, but all I did was make things worse, I think. The system was still up, but still broken like it was before I touched it, so it was useless.

Since my VMs were still running, and I'd already done enough damage for one night, I stopped touching things and went home. The next day I owned up to the boss and co-workers pretty much the moment I walked in the door. We started taking an inventory of what we had, and what was lost. I had taken the precaution of setting up backups ages ago. So, we thought we had that to fall back on.

I opened a ticket with Red Hat support and filled them in on how dumb I'd been. I can only imagine the reaction of the support person when they read my ticket. I worked a help desk for years, I know how this usually goes. They probably gathered their closest coworkers to mourn for my loss, or get some entertainment out of the guy who'd been so foolish. (I say this in jest. Red Hat's support was awesome through this whole ordeal, and I'll tell you how soon. )

So, I figured the next thing I would need from my broken server, which was still running, was the backups I'd diligently been collecting. They were on the VM but on a separate virtual disk, so I figured they were safe. The disk I'd overwritten was the last disk I'd made to expand the volume the database was on, so that logical volume was toast, but I've always set up my servers such that the main mounts -- / , /var , /home , /tmp , and /root -- were all separate logical volumes.

In this case, /backup was an entirely separate virtual disk. So, I scp -r 'd the entire /backup mount to my laptop. It copied, and I felt a little sigh of relief. All of my production systems were still running, and I had my backup. My hope was that these factors would mean a relatively simple recovery: Build a new VM, install RHEV-M, and restore my backup. Simple right?

By now, my boss had involved the rest of the directors, and let them know that we were looking down the barrel of a possibly bad time. We started organizing a team meeting to discuss how we were going to get through this. I returned to my desk and looked through the backups I had copied from the broken server. All the files were there, but they were tiny. Like, a couple hundred kilobytes each, instead of the hundreds of megabytes or even gigabytes that they should have been.

Happy feeling, gone.

Turns out, my backups were running, but at some point after an RHEV upgrade, the database backup utility had changed. Remember how I said this system had existed since version 3.0? Well, 3.0 didn't have an engine-backup utility, so in my RHEV training, we'd learned how to make our own. Mine broke when the tools changed, and for who knows how long, it had been getting an incomplete backup -- just some files from /etc .

No database. Ohhhh ... Fudge. (I didn't say "Fudge.")

I updated my support case with the bad news and started wondering what it would take to break through one of these 4th-floor windows right next to my desk. (Ok, not really.)

At this point, we basically had three RHEV clusters with no manager. One of those was for development work, but the other two were all production. We started using these team meetings to discuss how to recover from this mess. I don't know what the rest of my team was thinking about me, but I can say that everyone was surprisingly supportive and un-accusatory. I mean, with one typo I'd thrown off the entire department. Projects were put on hold and workflows were disrupted, but at least we had time: We couldn't reboot machines, we couldn't change configurations, and couldn't get to VM consoles, but at least everything was still up and operating.

Red Hat support had escalated my SNAFU to an RHEV engineer, a guy I'd worked with in the past. I don't know if he remembered me, but I remembered him, and he came through yet again. About a week in, for some unknown reason (we never figured out why), our Windows VMs started dropping offline. They were still running as far as we could tell, but they dropped off the network. Just boom. Offline. In the course of a workday, we lost about a dozen Windows systems. All of our RHEL machines were working fine, so it was just some Windows machines, and not even every Windows machine -- about a dozen of them.

Well great, how could this get worse? Oh right, add a ticking time bomb. Why were the Windows servers dropping off? Would they all eventually drop off? Would the RHEL systems eventually drop off? I made a panicked call back to support, emailed my account rep, and called in every favor I'd ever collected from contacts I had within Red Hat to get help as quickly as possible.

I ended up on a conference call with two support engineers, and we got to work. After about 30 minutes on the phone, we'd worked out the most insane recovery method. We had the newer RHEV manager I mentioned earlier, that was still up and running, and had two new clusters attached to it. Our recovery goal was to get all of our workloads moved from the broken clusters to these two new clusters.

Want to know how we ended up doing it? Well, as our Windows VMs were dropping like flies, the engineers and I came up with this plan. My clusters used a Fibre Channel Storage Area Network (SAN) as their storage domains. We took a machine that was not in use, but had a Fibre Channel host bus adapter (HBA) in it, and attached the logical unit numbers (LUNs) for both the old cluster's storage domains and the new cluster's storage domains to it. The plan there was to make a new VM on the new clusters, attach blank disks of the proper size to the new VM, and then use dd (the irony is not lost on me) to block-for-block copy the old broken VM disk over to the newly created empty VM disk.

I don't know if you've ever delved deeply into an RHEV storage domain, but under the covers it's all Logical Volume Manager (LVM). The problem is, the LVs aren't human-readable. They're just universally unique identifiers (UUIDs) that the RHEV manager's database links from VM to disk. These VMs are running, but we don't have the database to reference. So how do you get this data?

virsh ...

Luckily, I managed KVM and Xen clusters long before RHEV was a thing that was viable. I was no stranger to libvirt 's virsh utility. With the proper authentication -- which the engineers gave to me -- I was able to virsh dumpxml on a source VM while it was running, get all the info I needed about its memory, disk, CPUs, and even MAC address, and then create an empty clone of it on the new clusters.
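In practice that boils down to a couple of virsh calls along these lines (the domain name is illustrative):

# dump the live definition of a running VM: disks, memory, CPUs, MAC address
virsh dumpxml vmname > vmname.xml
# list all domains known to this libvirt host
virsh list --all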

Once I felt everything was perfect, I would shut down the VM on the broken cluster with either virsh shutdown, or by logging into the VM and shutting it down. The catch here is that if I missed something and shut down that VM, there was no way I'd be able to power it back on. Once the data was no longer in memory, the config would be completely lost, since that information is all in the database -- and I'd hosed that. Once I had everything, I'd log into my migration host (the one that was connected to both storage domains) and use dd to copy, bit-for-bit, the source storage domain disk over to the destination storage domain disk. Talk about nerve-wracking, but it worked! We picked one of the broken Windows VMs and followed this process, and within about half an hour we'd completed all of the steps and brought it back online.

We did hit one snag, though. See, we'd used snapshots here and there. RHEV snapshots are lvm snapshots. Consolidating them without the RHEV manager was a bit of a chore, and took even more leg work and research before we could dd the disks. I had to mimic the snapshot tree by creating symbolic links in the right places, and then start the dd process. I worked that one out late that evening after the engineers were off, probably enjoying time with their families. They asked me to write the process up in detail later. I suspect that it turned into some internal Red Hat documentation, never to be given to a customer because of the chance of royally hosing your storage domain.

Somehow, over the course of 3 months and probably a dozen scheduled maintenance windows, I managed to migrate every single VM (of about 100 VMs) from the old zombie clusters to the working clusters. This migration included our Zimbra collaboration system (10 VMs in itself), our file servers (another dozen VMs), our Enterprise Resource Planning (ERP) platform, and even Oracle databases.

We didn't lose a single VM and had no more unplanned outages. The Red Hat Enterprise Linux (RHEL) systems, and even some Windows systems, never fell to the mysterious drop-off that those dozen or so Windows servers did early on. During this ordeal, though, I had trouble sleeping. I was stressed out and felt so guilty for creating all this work for my co-workers, I even had trouble eating. No exaggeration, I lost 10lbs.

So, don't be like Nate. Monitor your important systems, check your backups, and for all that's holy, double-check your dd output file. That way, you won't have drama, and can truly enjoy Sysadmin Appreciation Day!

Nathan Lager is an experienced sysadmin, with 20 years in the industry. He runs his own blog at undrground.org, and hosts the Iron Sysadmin Podcast. More about me

[Jul 13, 2019] Articles on Linux by Ken Hess

Jul 13, 2019 | www.linuxtoday.com

Hardening Linux for Production Use (Jul 12, 2019)

Quick and Dirty MySQL Performance Troubleshooting (May 09, 2019)

[Jun 26, 2019] The Individual Costs of Occupational Decline

Jun 26, 2019 | www.nakedcapitalism.com

Yves here. You have to read a bit into this article on occupational decline, aka, "What happens to me after the robots take my job?" to realize that the authors studied Swedish workers. One has to think that the findings would be more pronounced in the US, due both to pronounced regional and urban/rural variations, as well as the weakness of social institutions in the US. While there may be small cities in Sweden that have been hit hard by the decline of a key employer, I don't have the impression that Sweden has areas that have suffered the way our Rust Belt has. Similarly, in the US, a significant amount of hiring starts with resume reviews with the job requirements overspecified because the employer intends to hire someone who has done the same job somewhere else and hence needs no training (which in practice is an illusion; how companies do things is always idiosyncratic and new hires face a learning curve). On top of that, many positions are filled via personal networks, not formal recruiting. Some studies have concluded that having a large network of weak ties is more helpful in landing a new post than fewer close connections. It's easier to know a lot of people casually in a society with strong community institutions.

The article does not provide much in the way of remedies; it hints at "let them eat training" when programs have proven to be ineffective. One approach would be aggressive enforcement of laws against age discrimination. And even though some readers dislike a Job Guarantee, not only would it enable people who wanted to work to keep working, but private sector employers are particularly loath to employ someone who has been out of work for more than six months, so a Job Guarantee post would also help keep someone who'd lost a job from looking like damaged goods.

By Per-Anders Edin, Professor of Industrial Relations, Uppsala University; Tiernan Evans, Economics MRes/PhD Candidate, LSE; Georg Graetz, Assistant Professor in the Department of Economics, Uppsala University; Sofia Hernnäs, PhD student, Department of Economics, Uppsala University; Guy Michaels,Associate Professor in the Department of Economics, LSE. Originally published at VoxEU

As new technologies replace human labour in a growing number of tasks, employment in some occupations invariably falls. This column compares outcomes for similar workers in similar occupations over 28 years to explore the consequences of large declines in occupational employment for workers' careers. While mean losses in earnings and employment for those initially working in occupations that later declined are relatively moderate, low-earners lose significantly more.

How costly is it for workers when demand for their occupation declines? As new technologies replace human labour in a growing number of tasks, employment in some occupations invariably falls. Until recently, technological change mostly automated routine production and clerical work (Autor et al. 2003). But machines' capabilities are expanding, as recent developments include self-driving vehicles and software that outperforms professionals in some tasks. Debates on the labour market implications of these new technologies are ongoing (e.g. Brynjolfsson and McAfee 2014, Acemoglu and Restrepo 2018). But in these debates, it is important to ask not only "Will robots take my job?", but also "What would happen to my career if robots took my job?"

Much is at stake. Occupational decline may hurt workers and their families, and may also have broader consequences for economic inequality, education, taxation, and redistribution. If it exacerbates differences in outcomes between economic winners and losers, populist forces may gain further momentum (Dal Bo et al. 2019).

In a new paper (Edin et al. 2019) we explore the consequences of large declines in occupational employment for workers' careers. We assemble a dataset with forecasts of occupational employment changes that allow us to identify unanticipated declines, population-level administrative data spanning several decades, and a highly detailed occupational classification. These data allow us to compare outcomes for similar workers who perform similar tasks and have similar expectations of future occupational employment trajectories, but experience different actual occupational changes.

Our approach is distinct from previous work that contrasts career outcomes of routine and non-routine workers (e.g. Cortes 2016), since we compare workers who perform similar tasks and whose careers would likely have followed similar paths were it not for occupational decline. Our work is also distinct from studies of mass layoffs (e.g. Jacobson et al. 1993), since workers who experience occupational decline may take action before losing their jobs.

In our analysis, we follow individual workers' careers for almost 30 years, and we find that workers in declining occupations lose on average 2-5% of cumulative earnings, compared to other similar workers. Workers with low initial earnings (relative to others in their occupations) lose more – about 8-11% of mean cumulative earnings. These earnings losses reflect both lost years of employment and lower earnings conditional on employment; some of the employment losses are due to increased time spent in unemployment and retraining, and low earners spend more time in both unemployment and retraining.

Estimating the Consequences of Occupational Decline

We begin by assembling data from the Occupational Outlook Handbooks (OOH), published by the US Bureau of Labor Statistics, which cover more than 400 occupations. In our main analysis we define occupations as declining if their employment fell by at least 25% from 1984-2016, although we show that our results are robust to using other cutoffs. The OOH also provides information on technological change affecting each occupation, and forecasts of employment over time. Using these data, we can separate technologically driven declines, and also unanticipated declines. Occupations that declined include typesetters, drafters, proof readers, and various machine operators.

We then match the OOH data to detailed Swedish occupations. This allows us to study the consequences of occupational decline for workers who, in 1985, worked in occupations that declined over the subsequent decades. We verify that occupations that declined in the US also declined in Sweden, and that the employment forecasts that the BLS made for the US have predictive power for employment changes in Sweden.

Detailed administrative micro-data, which cover all Swedish workers, allow us to address two potential concerns for identifying the consequences of occupational decline: that workers in declining occupations may have differed from other workers, and that declining occupations may have differed even in absence of occupational decline. To address the first concern, about individual sorting, we control for gender, age, education, and location, as well as 1985 earnings. Once we control for these characteristics, we find that workers in declining occupations were no different from others in terms of their cognitive and non-cognitive test scores and their parents' schooling and earnings. To address the second concern, about occupational differences, we control for occupational earnings profiles (calculated using the 1985 data), the BLS forecasts, and other occupational and industry characteristics.

Assessing the losses and how their incidence varied

We find that prime age workers (those aged 25-36 in 1985) who were exposed to occupational decline lost about 2-6 months of employment over 28 years, compared to similar workers whose occupations did not decline. The higher end of the range refers to our comparison between similar workers, while the lower end of the range compares similar workers in similar occupations. The employment loss corresponds to around 1-2% of mean cumulative employment. The corresponding earnings losses were larger, and amounted to around 2-5% of mean cumulative earnings. These mean losses may seem moderate given the large occupational declines, but the average outcomes do not tell the full story. The bottom third of earners in each occupation fared worse, losing around 8-11% of mean earnings when their occupations declined.

The earnings and employment losses that we document reflect increased time spent in unemployment and government-sponsored retraining – more so for workers with low initial earnings. We also find that older workers who faced occupational decline retired a little earlier.

We also find that workers in occupations that declined after 1985 were less likely to remain in their starting occupation. It is quite likely that this reduced supply to declining occupations contributed to mitigating the losses of the workers that remained there.

We show that our main findings are essentially unchanged when we restrict our analysis to technology-related occupational declines.

Further, our finding that mean earnings and employment losses from occupational decline are small is not unique to Sweden. We find similar results using a smaller panel dataset on US workers, using the National Longitudinal Survey of Youth 1979.

Theoretical implications

Our paper also considers the implications of our findings for Roy's (1951) model, which is a workhorse model for labour economists. We show that the frictionless Roy model predicts that losses are increasing in initial occupational earnings rank, under a wide variety of assumptions about the skill distribution. This prediction is inconsistent with our finding that the largest earnings losses from occupational decline are incurred by those who earned the least. To reconcile our findings, we add frictions to the model: we assume that workers who earn little in one occupation incur larger time costs searching for jobs or retraining if they try to move occupations. This extension of the model, especially when coupled with the addition of involuntary job displacement, allows us to reconcile several of our empirical findings.

Conclusions

There is a vivid academic and public debate on whether we should fear the takeover of human jobs by machines. New technologies may replace not only factory and office workers but also drivers and some professional occupations. Our paper compares similar workers in similar occupations over 28 years. We show that although mean losses in earnings and employment for those initially working in occupations that later declined are relatively moderate (2-5% of earnings and 1-2% of employment), low-earners lose significantly more.

The losses that we find from occupational decline are smaller than those suffered by workers who experience mass layoffs, as reported in the existing literature. Because the occupational decline we study took years or even decades, its costs for individual workers were likely mitigated through retirements, reduced entry into declining occupations, and increased job-to-job exits to other occupations. Compared to large, sudden shocks, such as plant closures, the decline we study may also have a less pronounced impact on local economies.

While the losses we find are on average moderate, there are several reasons why future occupational decline may have adverse impacts. First, while we study unanticipated declines, the declines were nevertheless fairly gradual. Costs may be larger for sudden shocks following, for example, a quick evolution of machine learning. Second, the occupational decline that we study mainly affected low- and middle-skilled occupations, which require less human capital investment than those that may be impacted in the future. Finally, and perhaps most importantly, our findings show that low-earning individuals are already suffering considerable (pre-tax) earnings losses, even in Sweden, where institutions are geared towards mitigating those losses and facilitating occupational transitions. Helping these workers stay productive when they face occupational decline remains an important challenge for governments.

Please see original post for references

[Jun 26, 2019] Linux Package Managers Compared - AppImage vs Snap vs Flatpak

Jun 26, 2019 | www.ostechnix.com

by editor · Published June 24, 2019 · Updated June 24, 2019

Package managers provide a way of packaging, distributing, installing, and maintaining apps in an operating system. With modern desktop, server and IoT applications of the Linux operating system and the hundreds of different distros that exist, it becomes necessary to move away from platform specific packaging methods to platform agnostic ones. This post explores 3 such tools, namely AppImage , Snap and Flatpak , that each aim to be the future of software deployment and management in Linux. At the end we summarize a few key findings.

1. AppImage

AppImage follows a concept called "one app = one file". This means an AppImage is a regular, independent file containing one application together with everything it needs to run. Once made executable, the AppImage can be run like any application on a computer by simply double-clicking it in the user's file system.[1]

It is a format for creating portable software for Linux without requiring the user to install the said application. The format allows the original developers of the software (upstream developers) to create a platform and distribution independent (also called a distribution-agnostic binary) version of their application that will basically run on any flavor of Linux.
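In practice, using an AppImage boils down to two steps (the file name below is just a placeholder):

chmod +x SomeApp-x86_64.AppImage    # make the downloaded file executable
./SomeApp-x86_64.AppImage           # run it; no installation and no root required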

AppImage has been around for a long time. Klik, a predecessor of AppImage, was created by Simon Peter in 2004. The project was shut down in 2011 after not having passed the beta stage. A project named PortableLinuxApps was created by Simon around the same time, and the format was picked up by a few portals offering software for Linux users. The project was renamed again in 2013 to its current name, AppImage, and a repository has been maintained on GitHub (project link ) with all the latest changes since 2018.[2][3]

Written primarily in C and carrying the MIT license since 2013, AppImage is currently developed by The AppImage project . It is a very convenient way to use applications, as demonstrated by the following features:

  1. AppImages can run on virtually any Linux system. As mentioned before, applications derive a lot of functionality from the operating system and a few common libraries. This is a common practice in the software world: if something is already done, there is no point in doing it again when you can pick and choose which parts of it to reuse. The problem is that many Linux distros might not have all the files a particular application requires to run, since it is left to the developers of that particular distro to include the necessary packages. Hence developers need to separately include the dependencies of the application for each Linux distro they are publishing their app for. Using the AppImage format, developers can choose to include in the AppImage file all the libraries and files that they cannot expect the target operating system to provide. Hence the same AppImage file can work on different operating systems and machines without needing granular control.
  2. The one app one file philosophy means that user experience is simple and elegant in that users need only download and execute one file that will serve their needs for using the application.
  3. No requirement of root access . System administrators restrict root access to stop people from messing with computers and their default setup. This also means that people with no root access or superuser privileges cannot install the apps they need as they please. The practice is common in a public setting (such as library or university computers, or on enterprise systems). The AppImage file does not require users to "install" anything, so users need only download the said file and make it executable to start using it. This removes the access dilemmas that system administrators have and makes their job easier without sacrificing user experience.
  4. No effect on core operating system . The AppImage-application format allows using applications with their full functionality without needing to change or even access most system files. Meaning whatever the applications do, the core operating system setup and files remain untouched.
  5. An AppImage can be made by a developer for a particular version of their application. Any updated version is made as a different AppImage. Hence users if need be can test multiple versions of the same application by running different instances using different AppImages. This is an invaluable feature when you need to test your applications from an end-user POV to notice differences.
  6. Take your applications where you go. As mentioned previously AppImages are archived files of all the files that an application requires and can be used without installing or even bothering about the distribution the system uses. Hence if you have a set of apps that you use regularly you may even mount a few AppImage files on a thumb drive and take it with you to use on multiple computers running multiple different distros without worrying whether they'll work or not.

Furthermore, the AppImageKit allows users from all backgrounds to build their own AppImages from applications they already have or for applications that are not provided an AppImage by their upstream developer.

The package manager is platform independent but focuses primarily on software distribution to end users on their desktops with a dedicated daemon AppImaged for integrating the AppImage formats into respective desktop environments. AppImage is supported natively now by a variety of distros such as Ubuntu, Debian, openSUSE, CentOS, Fedora etc. and others may set it up as per their needs. AppImages can also be run on servers with limited functionality via the CLI tools included.

To know more about AppImages, go to the official AppImage documentation page.




2. Snappy

Snappy is a software deployment and package management system like AppImage or any other package manager, for that matter. It was originally designed for the now-defunct Ubuntu Touch operating system. Snappy lets developers create software packages for use in a variety of Linux-based distributions. The initial intention behind creating Snappy and deploying "snaps" on Ubuntu-based systems was to obtain a unified single format that could be used in everything from IoT devices to full-fledged computer systems running some version of Ubuntu and, in a larger sense, Linux itself.[4]

The lead developer behind the project is Canonical , the same company that pilots the Ubuntu project. Ubuntu had native snap support from version 16.04 LTS with more and more distros supporting it out of the box or via a simple setup these days. If you use Arch or Debian or openSUSE you'll find it easy to install support for the package manager using simple commands in the terminal as explained later in this section. This is also made possible by making the necessary snap platform files available on the respective repos.[5]
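As a rough sketch of what that setup looks like on a Debian/Ubuntu-style system (the hello-world snap is just the usual test package; check your distro's documentation for the exact steps):

sudo apt install snapd          # install the snap daemon and client
sudo snap install hello-world   # install a test snap from the Snap Store
hello-world                     # run it
snap list                       # list installed snaps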

Snappy has the following important components that make up the entire package manager system.[6]

The snapd component is written primarily in C and Golang, whereas the Snapcraft framework is built using Python . Although both modules use the GPLv3 license, it is to be noted that snapd has proprietary code from Canonical for its server-side operations, with just the client side being published under the GPL license. This is a major point of contention with developers, since it involves developers signing a CLA form to participate in snap development.[7]

Going deeper into the finer details of the Snappy package manager the following may be noted:

  1. Snaps, as noted before, are all-inclusive and contain all the necessary files (dependencies) that the application needs to run. Hence, developers need not make different snaps for the different distros that they target. Being mindful of the runtimes is all that's necessary if base runtimes are excluded from the snap.
  2. Snappy packages are meant to support transactional updates. Such a transactional update is atomic and fully reversible, meaning you can use the application while it's being updated, and if an update does not behave the way it's supposed to, you can reverse it with no other effects whatsoever. The concept is also called delta programming, in which only changes to the application are transmitted as an update instead of the whole package. An Ubuntu derivative called Ubuntu Core actually applies the snappy update protocol to the OS itself.[8]
  3. A key point of difference between snaps and AppImages is how they handle version differences. Using AppImages, different versions of the application have different AppImages, allowing you to use two or more versions of the same application concurrently. Using snaps, however, means conforming to the transactional or delta update system. While this means faster updates, it keeps you from running two instances of the same application at the same time. If you need to use the old version of an app, you'll need to reverse or uninstall the new version. Snappy does support a feature called "parallel install" which lets users accomplish similar goals; however, it is still in an experimental stage and cannot be considered a stable implementation. Snappy also makes use of channels, meaning you can use the beta or the nightly build of an app and the stable version at the same time.[9]
  4. Extensive support from major Linux distros and major developers including Google, Mozilla, Microsoft, etc.[4]
  5. Snapd the desktop integration tool supports taking "snapshots" of the current state of all the installed snaps in the system. This will let users save the current configuration state of all the applications that are installed via the Snappy package manager and let users revert to that state whenever they desire so. The same feature can also be set to automatically take snapshots at a frequency deemed necessary by the user. Snapshots can be created using the snap save command in the snapd framework.[10]
  6. Snaps are designed to be sandboxed during operation. This provides a much-required layer of security and isolation to users. Users need not worry about snap-based applications messing with the rest of the software on their computer. Sandboxing is implemented using three levels of isolation viz, classic , strict and devmode . Each level of isolation allows the app different levels of access within the file system and computer.[11]

On the flip side of things, snaps are widely criticized for being centered around Canonical's modus operandi. Most of the commits to the project are by Canonical employees or contractors, and other contributors are required to sign a release form (CLA). The sandboxing feature, a very important one from a security standpoint, is flawed in that the sandboxing actually requires certain other core services to run (such as Mir), while applications running on the X11 desktop won't honor the said isolation, making the security feature largely irrelevant. Questionable press releases and other marketing efforts from Canonical, and the "central" and closed app repository, are also widely criticized aspects of Snappy. Furthermore, the file sizes of the different snaps are comparatively very large compared to the sizes of packages made using AppImage.[7]

For more details, check Snap official documentation .




3. Flatpak

Like Snap/Snappy listed above, Flatpak is also a software deployment tool that aims to ease software distribution and use in Linux. Flatpak was previously known as "xdg-app" and was based on a concept proposed by Lennart Poettering in 2004. The idea was to contain applications in a secure virtual sandbox, allowing applications to be used without the need for root privileges and without compromising the system's security. Alexander Larsson, who at the time was working with Red Hat, started tinkering with Klik (thought to be a former version of AppImage) and wanted to implement the concept better; in 2015 he wrote an implementation called xdg-app that acted as a precursor to the current Flatpak format.

Flatpak officially came out in 2016 with backing from Red Hat, Endless Computers and Collabora. Flathub is the official repository of all Flatpak application packages. On its surface, Flatpak, like the others, is a framework for building and packaging distribution-agnostic applications for Linux. It simply requires developers to conform to a few desktop environment guidelines in order for the application to be successfully integrated into the Flatpak environment.
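A minimal sketch of getting started with Flatpak and Flathub (the application ID below is just an example):

# add the Flathub remote (the official Flatpak repository)
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
# install and run an application by its ID
flatpak install flathub org.gnome.Calculator
flatpak run org.gnome.Calculator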

Targeted primarily at the three popular desktop implementations FreeDesktop, KDE, and GNOME, the Flatpak framework itself is written in C and is released under the LGPL license. The maintenance repository can be accessed via the GitHub link here .

A few features of Flatpak that make it stand apart are mentioned below. Notice that features Flatpak shares with AppImage and Snappy are omitted here.

One of the most criticized aspects of Flatpak, however, is the sandbox feature itself. Sandboxing is how package managers such as Snappy and Flatpak implement important security features. Sandboxing essentially isolates the application from everything else in the system, only allowing user-defined exchange of information from within the sandbox to the outside. The flaw with the concept is that the sandbox cannot be inherently impregnable. Data has to be eventually transferred between the two domains, and simple Linux commands can get rid of the sandbox restriction, meaning that malicious applications might potentially jump out of the said sandbox.[15]

This, combined with a worse-than-expected commitment to rolling out security updates for Flatpak, has resulted in widespread criticism of the team's tall claim of providing a secure framework. The blog (named flatkill ) linked at the end of this guide in fact mentions a couple of exploits that were not addressed by the Flatpak team as soon as they should have been.[15]

For more details, I suggest you to read Flatpak official documentation .




AppImage vs Snap vs Flatpak

The table attached below summarizes all the above findings into a concise and technical comparison of the three frameworks.

Feature | AppImage | Snappy | Flatpak
Unique feature | Not an app store or repository; simply a packaging format for software distribution. | Led by Canonical (the same company as Ubuntu); features a central app repository and active contribution from Canonical. | Features an app store called Flathub; however, individuals may still host and distribute packages.
Target system | Desktops and servers. | Desktops, servers, IoT devices, embedded devices, etc. | Desktops and limited function on servers.
Libraries/Dependencies | Base system. Runtimes optional; libraries and other dependencies packaged. | Base system, or via plugins, or can be packaged. | GNOME, KDE, Freedesktop bundled or custom bundled.
Developers | Community driven, led by Simon Peter. | Corporate driven by Canonical Ltd. | Community driven by the Flatpak team, supported by enterprise.
Written in | C. | Golang, C and Python. | C.
Initial release | 2004. | 2014. | 2015.
Sandboxing | Can be implemented. | 3 modes (strict, classic, and devmode) with varying confinement capabilities. Runs in isolation. | Isolated, but uses system files to run applications by default.
Sandboxing Platform | Firejail, AppArmor, Bubblewrap. | AppArmor. | Bubblewrap.
App Installation | Not necessary; acts as a self-mounted disc. | Installation using snapd. | Installed using flatpak client tools.
App Execution | Can be run after setting the executable bit. | Using desktop-integrated snap tools. Runs isolated with user-defined resources. | Needs to be executed using the flatpak command if the CLI is used.
User Privileges | Can be run without root access. | Can be run without root access. | Selectively required.
Hosting Applications | Can be hosted anywhere by anybody. | Has to be hosted on Canonical's servers, which are proprietary. | Can be hosted anywhere by anybody.
Portable Execution from non-system locations | Yes. | No. | Yes, after the flatpak client is configured.
Central Repository | AppImageHub. | Snap Store. | Flathub.
Running multiple versions of the app | Possible; any number of versions simultaneously. | One version of the app per channel; has to be separately configured for more. | Yes.
Updating applications | Using the CLI command AppImageUpdate or via an updater tool built into the AppImage. | Requires snapd installed. Supports delta updating; will update automatically. | Requires flatpak installed. Updated using the flatpak update command.
Package sizes on disk | Application remains archived. | Application remains archived. | Client side is uncompressed.

Here is a long tabular comparison of AppImage vs. Snap vs. Flatpak features. Please note that the comparison is made from an AppImage perspective.

Conclusion

While all three of these platforms have a lot in common with each other and aim to be platform agnostic in approach, they offer different levels of competency in a few areas. While snaps can run on a variety of devices, including embedded ones, AppImages and Flatpaks are built with the desktop user in mind. AppImages of popular applications, on the other hand, have superior packaging sizes and portability, whereas Flatpak really shines with its forward compatibility when it's used in a "set it and forget it" system.

If there are any flaws in this guide, please let us know in the comment section below. We will update the guide accordingly.

References:

[Jun 23, 2019] Utilizing multi-core for tar+gzip/bzip compression/decompression

Highly recommended!
Notable quotes:
"... There is effectively no CPU time spent tarring, so it wouldn't help much. The tar format is just a copy of the input file with header blocks in between files. ..."
"... You can also use the tar flag "--use-compress-program=" to tell tar what compression program to use. ..."
Jun 23, 2019 | stackoverflow.com

user1118764 , Sep 7, 2012 at 6:58

I normally compress using tar zcvf and decompress using tar zxvf (using gzip due to habit).

I've recently gotten a quad core CPU with hyperthreading, so I have 8 logical cores, and I notice that many of the cores are unused during compression/decompression.

Is there any way I can utilize the unused cores to make it faster?

Warren Severin , Nov 13, 2017 at 4:37

The solution proposed by Xiong Chiamiov above works beautifully. I had just backed up my laptop with .tar.bz2 and it took 132 minutes using only one cpu thread. Then I compiled and installed tar from source: gnu.org/software/tar I included the options mentioned in the configure step: ./configure --with-gzip=pigz --with-bzip2=lbzip2 --with-lzip=plzip I ran the backup again and it took only 32 minutes. That's better than 4X improvement! I watched the system monitor and it kept all 4 cpus (8 threads) flatlined at 100% the whole time. THAT is the best solution. – Warren Severin Nov 13 '17 at 4:37

Mark Adler , Sep 7, 2012 at 14:48

You can use pigz instead of gzip, which does gzip compression on multiple cores. Instead of using the -z option, you would pipe it through pigz:
tar cf - paths-to-archive | pigz > archive.tar.gz

By default, pigz uses the number of available cores, or eight if it could not query that. You can ask for more with -p n, e.g. -p 32. pigz has the same options as gzip, so you can request better compression with -9. E.g.

tar cf - paths-to-archive | pigz -9 -p 32 > archive.tar.gz

user788171 , Feb 20, 2013 at 12:43

How do you use pigz to decompress in the same fashion? Or does it only work for compression?

Mark Adler , Feb 20, 2013 at 16:18

pigz does use multiple cores for decompression, but only with limited improvement over a single core. The deflate format does not lend itself to parallel decompression.

The decompression portion must be done serially. The other cores for pigz decompression are used for reading, writing, and calculating the CRC. When compressing on the other hand, pigz gets close to a factor of n improvement with n cores.
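For the record, decompression with pigz looks like this (functionally equivalent to gunzip, with the extra cores used for reading, writing, and CRC calculation as described above):

pigz -dc archive.tar.gz | tar xf -
# or let tar invoke pigz itself
tar -I pigz -xf archive.tar.gz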

Garrett , Mar 1, 2014 at 7:26

The hyphen here is stdout (see this page ).

Mark Adler , Jul 2, 2014 at 21:29

Yes. 100% compatible in both directions.

Mark Adler , Apr 23, 2015 at 5:23

There is effectively no CPU time spent tarring, so it wouldn't help much. The tar format is just a copy of the input file with header blocks in between files.

Jen , Jun 14, 2013 at 14:34

You can also use the tar flag "--use-compress-program=" to tell tar what compression program to use.

For example use:

tar -c --use-compress-program=pigz -f tar.file dir_to_zip

Valerio Schiavoni , Aug 5, 2014 at 22:38

Unfortunately by doing so the concurrent feature of pigz is lost. You can see for yourself by executing that command and monitoring the load on each of the cores. – Valerio Schiavoni Aug 5 '14 at 22:38

bovender , Sep 18, 2015 at 10:14

@ValerioSchiavoni: Not here, I get full load on all 4 cores (Ubuntu 15.04 'Vivid'). – bovender Sep 18 '15 at 10:14

Valerio Schiavoni , Sep 28, 2015 at 23:41

On compress or on decompress ? – Valerio Schiavoni Sep 28 '15 at 23:41

Offenso , Jan 11, 2017 at 17:26

I prefer tar - dir_to_zip | pv | pigz > tar.file pv helps me estimate, you can skip it. But still it easier to write and remember. – Offenso Jan 11 '17 at 17:26

Maxim Suslov , Dec 18, 2014 at 7:31

Common approach

There is option for tar program:

-I, --use-compress-program PROG
      filter through PROG (must accept -d)

You can use multithread version of archiver or compressor utility.

Most popular multithread archivers are pigz (instead of gzip) and pbzip2 (instead of bzip2). For instance:

$ tar -I pbzip2 -cf OUTPUT_FILE.tar.bz2 paths_to_archive
$ tar --use-compress-program=pigz -cf OUTPUT_FILE.tar.gz paths_to_archive

Archiver must accept -d. If your replacement utility hasn't this parameter and/or you need specify additional parameters, then use pipes (add parameters if necessary):

$ tar cf - paths_to_archive | pbzip2 > OUTPUT_FILE.tar.bz2
$ tar cf - paths_to_archive | pigz > OUTPUT_FILE.tar.gz

Input and output of singlethread and multithread are compatible. You can compress using multithread version and decompress using singlethread version and vice versa.

p7zip

For p7zip for compression you need a small shell script like the following:

#!/bin/sh
case $1 in
  -d) 7za -txz -si -so e;;
   *) 7za -txz -si -so a .;;
esac 2>/dev/null

Save it as 7zhelper.sh. Here the example of usage:

$ tar -I 7zhelper.sh -cf OUTPUT_FILE.tar.7z paths_to_archive
$ tar -I 7zhelper.sh -xf OUTPUT_FILE.tar.7z
xz

Regarding multithreaded XZ support: if you are running version 5.2.0 or above of XZ Utils, you can utilize multiple cores for compression by setting -T or --threads to an appropriate value via the environment variable XZ_DEFAULTS (e.g. XZ_DEFAULTS="-T 0" ).

This is a fragment of man for 5.1.0alpha version:

Multithreaded compression and decompression are not implemented yet, so this option has no effect for now.

However this will not work for decompression of files that haven't also been compressed with threading enabled. From man for version 5.2.2:

Threaded decompression hasn't been implemented yet. It will only work on files that contain multiple blocks with size information in block headers. All files compressed in multi-threaded mode meet this condition, but files compressed in single-threaded mode don't even if --block-size=size is used.
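So, with a sufficiently recent xz, multithreaded compression through tar can be requested like this (a sketch; -T 0 means "use all available cores"):

XZ_DEFAULTS="-T 0" tar -cJf archive.tar.xz paths_to_archive
# or pass the threads option to xz explicitly
tar -cf - paths_to_archive | xz -T 0 > archive.tar.xz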

Recompiling with replacement

If you build tar from sources, then you can recompile with parameters

--with-gzip=pigz
--with-bzip2=lbzip2
--with-lzip=plzip

After recompiling tar with these options you can check the output of tar's help:

$ tar --help | grep "lbzip2\|plzip\|pigz"
  -j, --bzip2                filter the archive through lbzip2
      --lzip                 filter the archive through plzip
  -z, --gzip, --gunzip, --ungzip   filter the archive through pigz

mpibzip2 , Apr 28, 2015 at 20:57

I just found pbzip2 and mpibzip2 . mpibzip2 looks very promising for clusters or if you have a laptop and a multicore desktop computer for instance. – user1985657 Apr 28 '15 at 20:57

oᴉɹǝɥɔ , Jun 10, 2015 at 17:39

Processing STDIN may in fact be slower. – oᴉɹǝɥɔ Jun 10 '15 at 17:39

selurvedu , May 26, 2016 at 22:13

Plus 1 for xz option. It the simplest, yet effective approach. – selurvedu May 26 '16 at 22:13

panticz.de , Sep 1, 2014 at 15:02

You can use the shortcut -I for tar's --use-compress-program switch, and invoke pbzip2 for bzip2 compression on multiple cores:
tar -I pbzip2 -cf OUTPUT_FILE.tar.bz2 DIRECTORY_TO_COMPRESS/

einpoklum , Feb 11, 2017 at 15:59

A nice TL;DR for @MaximSuslov's answer . – einpoklum Feb 11 '17 at 15:59
If you want to have more flexibility with filenames and compression options, you can use:
find /my/path/ -type f \( -name "*.sql" -o -name "*.log" \) -exec \
tar -P --transform='s@/my/path/@@g' -cf - {} + | \
pigz -9 -p 4 > myarchive.tar.gz
Step 1: find

find /my/path/ -type f \( -name "*.sql" -o -name "*.log" \) -exec

This command will look for the files you want to archive, in this case /my/path/*.sql and /my/path/*.log . Add as many -o -name "pattern" clauses as you want inside the \( ... \) group; the grouping ensures that -exec applies to every pattern, not just the last one.

-exec will execute the next command using the results of find : tar

Step 2: tar

tar -P --transform='s@/my/path/@@g' -cf - {} +

--transform is a simple string replacement parameter. It will strip the path of the files from the archive so the tarball's root becomes the current directory when extracting. Note that you can't use the -C option to change directory, as you'd lose the benefits of find : all files of the directory would be included.

-P tells tar to use absolute paths, so it doesn't trigger the warning "Removing leading `/' from member names". The leading '/' will be removed by --transform anyway.

-cf - tells tar to create the archive and write it to stdout; the tarball's name is given later by the shell redirection

{} + passes every file that find found previously to tar

Step 3: pigz

pigz -9 -p 4

Use as many parameters as you want. In this case -9 is the compression level and -p 4 is the number of cores dedicated to compression. If you run this on a heavily loaded web server, you probably don't want to use all available cores.

Step 4: archive name

> myarchive.tar.gz

Finally.

[Jun 23, 2019] Test with rsync between two partitions

Jun 23, 2019 | www.fsarchiver.org

An important test is done using rsync. It requires two partitions: the original one, and a spare partition where the archive is restored. It lets you know whether or not there are differences between the original and the restored filesystem. rsync is able to compare both the file contents and the file attributes (timestamps, permissions, owner, extended attributes, ACLs, ...), so it's a very good test. The following command can be used to find out whether files are the same (data and attributes) on two file-systems:

rsync -axHAXnP /mnt/part1/ /mnt/part2/

[Jun 22, 2019] Using SSH and Tmux for screen sharing by Seth Kenlon

Jun 22, 2019 | www.redhat.com

Tmux is a screen multiplexer, meaning that it provides your terminal with virtual terminals, allowing you to switch from one virtual session to another. Modern terminal emulators feature a tabbed UI, making the use of Tmux seem redundant, but Tmux has a few peculiar features that still prove difficult to match without it.

First of all, you can launch Tmux on a remote machine, start a process running, detach from Tmux, and then log out. In a normal terminal, logging out would end the processes you started. Since those processes were started in Tmux, they persist even after you leave.

Secondly, Tmux can "mirror" its session on multiple screens. If two users log into the same Tmux session, then they both see the same output on their screens in real time.

Tmux is a lightweight, simple, and effective solution in cases where you're training someone remotely, debugging a command that isn't working for them, reviewing text, monitoring services or processes, or just avoiding the ten minutes it sometimes takes to read commands aloud over a phone clearly enough that your user is able to accurately type them.

To try this option out, you must have two computers. Assume one computer is owned by Alice, and the other by Bob. Alice remotely logs into Bob's PC and launches a Tmux session:

alice$ ssh bob.local
alice$ tmux

On his PC, Bob starts Tmux, attaching to the same session:

bob$ tmux attach

When Alice types, Bob sees what she is typing, and when Bob types, Alice sees what he's typing.

It's a simple but effective trick that enables interactive live sessions between computer users, but it is entirely text-based.
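
If several sessions already exist on Bob's machine, a named session avoids attaching to the wrong one; a minimal sketch, where the session name "support" is an arbitrary choice:

alice$ ssh bob.local
alice$ tmux new-session -s support

bob$ tmux attach -t support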

Collaboration

With these two applications, you have access to some powerful methods of supporting users. You can use these tools to manage systems remotely, as training tools, or as support tools, and in every case, it sure beats wandering around the office looking for somebody's desk. Get familiar with SSH and Tmux, and start using them today.

[Jun 20, 2019] Exploring the /run filesystem on Linux by Sandra Henry-Stocker

Jun 20, 2019 | www.networkworld.com

/run is home to a wide assortment of data. For example, if you take a look at /run/user, you will notice a group of directories with numeric names.

$ ls /run/user
1000  1002  121

A long file listing will clarify the significance of these numbers.

$ ls -l
total 0
drwx------ 5 shs  shs  120 Jun 16 12:44 1000
drwx------ 5 dory dory 120 Jun 16 16:14 1002
drwx------ 8 gdm  gdm  220 Jun 14 12:18 121

This allows us to see that each directory is related to a user who is currently logged in, or to the display manager, gdm. The numbers represent their UIDs. The contents of each of these directories are files that are used by running processes.

The /run/user files represent only a very small portion of what you'll find in /run. There are lots of other files, as well. A handful contain the process IDs for various system processes.

$ ls *.pid
acpid.pid  atopacctd.pid  crond.pid  rsyslogd.pid
atd.pid    atop.pid       gdm3.pid   sshd.pid

As shown below, that sshd.pid file listed above contains the process ID for the ssh daemon (sshd).
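
A quick way to confirm this on your own system; a sketch, assuming sshd is running and /run/sshd.pid exists:

$ cat /run/sshd.pid                          # prints the PID recorded for sshd
$ ps -p "$(cat /run/sshd.pid)" -o comm=      # should report "sshd" if that PID is alive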

[Jun 19, 2019] America's Suicide Epidemic

Highly recommended!
Notable quotes:
"... A suicide occurs in the United States roughly once every 12 minutes . What's more, after decades of decline, the rate of self-inflicted deaths per 100,000 people annually -- the suicide rate -- has been increasing sharply since the late 1990s. Suicides now claim two-and-a-half times as many lives in this country as do homicides , even though the murder rate gets so much more attention. ..."
"... In some states the upsurge was far higher: North Dakota (57.6%), New Hampshire (48.3%), Kansas (45%), Idaho (43%). ..."
"... Since 2008 , suicide has ranked 10th among the causes of death in this country. For Americans between the ages of 10 and 34, however, it comes in second; for those between 35 and 45, fourth. The United States also has the ninth-highest rate in the 38-country Organization for Economic Cooperation and Development. Globally , it ranks 27th. ..."
"... The rates in rural counties are almost double those in the most urbanized ones, which is why states like Idaho, Kansas, New Hampshire, and North Dakota sit atop the suicide list. Furthermore, a far higher percentage of people in rural states own guns than in cities and suburbs, leading to a higher rate of suicide involving firearms, the means used in half of all such acts in this country. ..."
"... Education is also a factor. The suicide rate is lowest among individuals with college degrees. Those who, at best, completed high school are, by comparison, twice as likely to kill themselves. Suicide rates also tend to be lower among people in higher-income brackets. ..."
"... Evidence from the United States , Brazil , Japan , and Sweden does indicate that, as income inequality increases, so does the suicide rate. ..."
"... One aspect of the suicide epidemic is puzzling. Though whites have fared far better economically (and in many other ways) than African Americans, their suicide rate is significantly higher . ..."
"... The higher suicide rate among whites as well as among people with only a high school diploma highlights suicide's disproportionate effect on working-class whites. This segment of the population also accounts for a disproportionate share of what economists Anne Case and Angus Deaton have labeled " deaths of despair " -- those caused by suicides plus opioid overdoses and liver diseases linked to alcohol abuse. Though it's hard to offer a complete explanation for this, economic hardship and its ripple effects do appear to matter. ..."
"... Trump has neglected his base on pretty much every issue; this one's no exception. ..."
Jun 19, 2019 | www.nakedcapitalism.com

Yves here. This post describes how the forces driving the US suicide surge started well before the Trump era, but explains how Trump has not only refused to acknowledge the problem, but has made matters worse.

However, it's not as if the Democrats are embracing this issue either.

BY Rajan Menon, the Anne and Bernard Spitzer Professor of International Relations at the Powell School, City College of New York, and Senior Research Fellow at Columbia University's Saltzman Institute of War and Peace Studies. His latest book is The Conceit of Humanitarian Intervention Originally published at TomDispatch .

We hear a lot about suicide when celebrities like Anthony Bourdain and Kate Spade die by their own hand. Otherwise, it seldom makes the headlines. That's odd given the magnitude of the problem.

In 2017, 47,173 Americans killed themselves. In that single year, in other words, the suicide count was nearly seven times greater than the number of American soldiers killed in the Afghanistan and Iraq wars between 2001 and 2018.

A suicide occurs in the United States roughly once every 12 minutes . What's more, after decades of decline, the rate of self-inflicted deaths per 100,000 people annually -- the suicide rate -- has been increasing sharply since the late 1990s. Suicides now claim two-and-a-half times as many lives in this country as do homicides , even though the murder rate gets so much more attention.

In other words, we're talking about a national epidemic of self-inflicted deaths.

Worrisome Numbers

Anyone who has lost a close relative or friend to suicide or has worked on a suicide hotline (as I have) knows that statistics transform the individual, the personal, and indeed the mysterious aspects of that violent act -- Why this person? Why now? Why in this manner? -- into depersonalized abstractions. Still, to grasp how serious the suicide epidemic has become, numbers are a necessity.

According to a 2018 Centers for Disease Control study , between 1999 and 2016, the suicide rate increased in every state in the union except Nevada, which already had a remarkably high rate. In 30 states, it jumped by 25% or more; in 17, by at least a third. Nationally, it increased 33% . In some states the upsurge was far higher: North Dakota (57.6%), New Hampshire (48.3%), Kansas (45%), Idaho (43%).

Alas, the news only gets grimmer.

Since 2008 , suicide has ranked 10th among the causes of death in this country. For Americans between the ages of 10 and 34, however, it comes in second; for those between 35 and 45, fourth. The United States also has the ninth-highest rate in the 38-country Organization for Economic Cooperation and Development. Globally , it ranks 27th.

More importantly, the trend in the United States doesn't align with what's happening elsewhere in the developed world. The World Health Organization, for instance, reports that Great Britain, Canada, and China all have notably lower suicide rates than the U.S., as do all but six countries in the European Union. (Japan's is only slightly lower.)

World Bank statistics show that, worldwide, the suicide rate fell from 12.8 per 100,000 in 2000 to 10.6 in 2016. It's been falling in China , Japan (where it has declined steadily for nearly a decade and is at its lowest point in 37 years), most of Europe, and even countries like South Korea and Russia that have a significantly higher suicide rate than the United States. In Russia, for instance, it has dropped by nearly 26% from a high point of 42 per 100,000 in 1994 to 31 in 2019.

We know a fair amount about the patterns of suicide in the United States. In 2017, the rate was highest for men between the ages of 45 and 64 (30 per 100,000) and those 75 and older (39.7 per 100,000).

The rates in rural counties are almost double those in the most urbanized ones, which is why states like Idaho, Kansas, New Hampshire, and North Dakota sit atop the suicide list. Furthermore, a far higher percentage of people in rural states own guns than in cities and suburbs, leading to a higher rate of suicide involving firearms, the means used in half of all such acts in this country.

There are gender-based differences as well. From 1999 to 2017, the rate for men was substantially higher than for women -- almost four-and-a-half times higher in the first of those years, slightly more than three-and-a-half times in the last.

Education is also a factor. The suicide rate is lowest among individuals with college degrees. Those who, at best, completed high school are, by comparison, twice as likely to kill themselves. Suicide rates also tend to be lower among people in higher-income brackets.

The Economics of Stress

This surge in the suicide rate has taken place in years during which the working class has experienced greater economic hardship and psychological stress. Increased competition from abroad and outsourcing, the results of globalization, have contributed to job loss, particularly in economic sectors like manufacturing, steel, and mining that had long been mainstays of employment for such workers. The jobs still available often paid less and provided fewer benefits.

Technological change, including computerization, robotics, and the coming of artificial intelligence, has similarly begun to displace labor in significant ways, leaving Americans without college degrees, especially those 50 and older, in far more difficult straits when it comes to finding new jobs that pay well. The lack of anything resembling an industrial policy of a sort that exists in Europe has made these dislocations even more painful for American workers, while a sharp decline in private-sector union membership -- down from nearly 17% in 1983 to 6.4% today -- has reduced their ability to press for higher wages through collective bargaining.

Furthermore, the inflation-adjusted median wage has barely budged over the last four decades (even as CEO salaries have soared). And a decline in worker productivity doesn't explain it: between 1973 and 2017 productivity increased by 77%, while a worker's average hourly wage only rose by 12.4%. Wage stagnation has made it harder for working-class Americans to get by, let alone have a lifestyle comparable to that of their parents or grandparents.

The gap in earnings between those at the top and bottom of American society has also increased -- a lot. Since 1979, the wages of Americans in the 10th percentile increased by a pitiful 1.2%. Those in the 50th percentile did a bit better, making a gain of 6%. By contrast, those in the 90th percentile increased by 34.3% and those near the peak of the wage pyramid -- the top 1% and especially the rarefied 0.1% -- made far more substantial gains.

And mind you, we're just talking about wages, not other forms of income like large stock dividends, expensive homes, or eyepopping inheritances. The share of net national wealth held by the richest 0.1% increased from 10% in the 1980s to 20% in 2016. By contrast, the share of the bottom 90% shrank in those same decades from about 35% to 20%. As for the top 1%, by 2016 its share had increased to almost 39% .

The precise relationship between economic inequality and suicide rates remains unclear, and suicide certainly can't simply be reduced to wealth disparities or financial stress. Still, strikingly, in contrast to the United States, suicide rates are noticeably lower and have been declining in Western European countries where income inequalities are far less pronounced, publicly funded healthcare is regarded as a right (not demonized as a pathway to serfdom), social safety nets far more extensive, and apprenticeships and worker retraining programs more widespread.

Evidence from the United States , Brazil , Japan , and Sweden does indicate that, as income inequality increases, so does the suicide rate. If so, the good news is that progressive economic policies -- should Democrats ever retake the White House and the Senate -- could make a positive difference. A study based on state-by-state variations in the U.S. found that simply boosting the minimum wage and Earned Income Tax Credit by 10% appreciably reduces the suicide rate among people without college degrees.

The Race Enigma

One aspect of the suicide epidemic is puzzling. Though whites have fared far better economically (and in many other ways) than African Americans, their suicide rate is significantly higher. It increased from 11.3 per 100,000 in 2000 to 15.85 per 100,000 in 2017; for African Americans in those years the rates were 5.52 per 100,000 and 6.61 per 100,000. Black men are 10 times more likely to be homicide victims than white men, but the latter are two-and-a-half times more likely to kill themselves.

The higher suicide rate among whites as well as among people with only a high school diploma highlights suicide's disproportionate effect on working-class whites. This segment of the population also accounts for a disproportionate share of what economists Anne Case and Angus Deaton have labeled " deaths of despair " -- those caused by suicides plus opioid overdoses and liver diseases linked to alcohol abuse. Though it's hard to offer a complete explanation for this, economic hardship and its ripple effects do appear to matter.

According to a study by the St. Louis Federal Reserve , the white working class accounted for 45% of all income earned in the United States in 1990, but only 27% in 2016. In those same years, its share of national wealth plummeted, from 45% to 22%. And as inflation-adjusted wages have decreased for men without college degrees, many white workers seem to have lost hope of success of any sort. Paradoxically, the sense of failure and the accompanying stress may be greater for white workers precisely because they traditionally were much better off economically than their African American and Hispanic counterparts.

In addition, the fraying of communities knit together by employment in once-robust factories and mines has increased social isolation among them, and the evidence that it -- along with opioid addiction and alcohol abuse -- increases the risk of suicide is strong . On top of that, a significantly higher proportion of whites than blacks and Hispanics own firearms, and suicide rates are markedly higher in states where gun ownership is more widespread.

Trump's Faux Populism

The large increase in suicide within the white working class began a couple of decades before Donald Trump's election. Still, it's reasonable to ask what he's tried to do about it, particularly since votes from these Americans helped propel him to the White House. In 2016, he received 64% of the votes of whites without college degrees; Hillary Clinton, only 28%. Nationwide, he beat Clinton in counties where deaths of despair rose significantly between 2000 and 2015.

White workers will remain crucial to Trump's chances of winning in 2020. Yet while he has spoken about, and initiated steps aimed at reducing, the high suicide rate among veterans , his speeches and tweets have never highlighted the national suicide epidemic or its inordinate impact on white workers. More importantly, to the extent that economic despair contributes to their high suicide rate, his policies will only make matters worse.

The real benefits from the December 2017 Tax Cuts and Jobs Act championed by the president and congressional Republicans flowed to those on the top steps of the economic ladder. By 2027, when the Act's provisions will run out, the wealthiest Americans are expected to have captured 81.8% of the gains. And that's not counting the windfall they received from recent changes in taxes on inheritances. Trump and the GOP doubled the annual amount exempt from estate taxes -- wealth bequeathed to heirs -- through 2025 from $5.6 million per individual to $11.2 million (or $22.4 million per couple). And who benefits most from this act of generosity? Not workers, that's for sure, but every household with an estate worth $22 million or more will.

As for job retraining provided by the Workforce Innovation and Opportunity Act, the president proposed cutting that program by 40% in his 2019 budget, later settling for keeping it at 2017 levels. Future cuts seem in the cards as long as Trump is in the White House. The Congressional Budget Office projects that his tax cuts alone will produce even bigger budget deficits in the years to come. (The shortfall last year was $779 billion and it is expected to reach $1 trillion by 2020.) Inevitably, the president and congressional Republicans will then demand additional reductions in spending for social programs.

This is all the more likely because Trump and those Republicans also slashed corporate taxes from 35% to 21% -- an estimated $1.4 trillion in savings for corporations over the next decade. And unlike the income tax cut, the corporate tax has no end date . The president assured his base that the big bucks those companies had stashed abroad would start flowing home and produce a wave of job creation -- all without adding to the deficit. As it happens, however, most of that repatriated cash has been used for corporate stock buy-backs, which totaled more than $800 billion last year. That, in turn, boosted share prices, but didn't exactly rain money down on workers. No surprise, of course, since the wealthiest 10% of Americans own at least 84% of all stocks and the bottom 60% have less than 2% of them.

And the president's corporate tax cut hasn't produced the tsunami of job-generating investments he predicted either. Indeed, in its aftermath, more than 80% of American companies stated that their plans for investment and hiring hadn't changed. As a result, the monthly increase in jobs has proven unremarkable compared to President Obama's second term, when the economic recovery that Trump largely inherited began. Yes, the economy did grow 2.3% in 2017 and 2.9% in 2018 (though not 3.1% as the president claimed). There wasn't, however, any "unprecedented economic boom -- a boom that has rarely been seen before" as he insisted in this year's State of the Union Address .

Anyway, what matters for workers struggling to get by is growth in real wages, and there's nothing to celebrate on that front: between 2017 and mid-2018 they actually declined by 1.63% for white workers and 2.5% for African Americans, while they rose for Hispanics by a measly 0.37%. And though Trump insists that his beloved tariff hikes are going to help workers, they will actually raise the prices of goods, hurting the working class and other low-income Americans the most .

Then there are the obstacles those susceptible to suicide face in receiving insurance-provided mental-health care. If you're a white worker without medical coverage or have a policy with a deductible and co-payments that are high and your income, while low, is too high to qualify for Medicaid, Trump and the GOP haven't done anything for you. Never mind the president's tweet proclaiming that "the Republican Party Will Become 'The Party of Healthcare!'"

Let me amend that: actually, they have done something. It's just not what you'd call helpful. The percentage of uninsured adults, which fell from 18% in 2013 to 10.9% at the end of 2016, thanks in no small measure to Obamacare , had risen to 13.7% by the end of last year.

The bottom line? On a problem that literally has life-and-death significance for a pivotal portion of his base, Trump has been AWOL. In fact, to the extent that economic strain contributes to the alarming suicide rate among white workers, his policies are only likely to exacerbate what is already a national crisis of epidemic proportions.


Seamus Padraig , June 19, 2019 at 6:46 am

Trump has neglected his base on pretty much every issue; this one's no exception.

DanB , June 19, 2019 at 8:55 am

Trump is running on the claim that he's turned the economy around; addressing suicide undermines this (false) claim. To state the obvious, NC readers know that Trump is incapable of caring about anyone or anything beyond his in-the-moment interpretation of his self-interest.

JCC , June 19, 2019 at 9:25 am

Not just Trump. Most of the Republican Party and much too many Democrats have also abandoned this base, otherwise known as working class Americans.

The economic facts are near staggering and this article has done a nice job of summarizing these numbers that are spread out across a lot of different sites.

I've experienced this rise within my own family and probably because of that fact I'm well aware that Trump is only a symptom of an entire political system that has all but abandoned its core constituency, the American Working Class.

sparagmite , June 19, 2019 at 10:13 am

Yep It's not just Trump. The author mentions this, but still focuses on him for some reason. Maybe accurately attributing the problems to a failed system makes people feel more hopeless. Current nihilists in Congress make it their duty to destroy once helpful institutions in the name of "fiscal responsibility," i.e., tax cuts for corporate elites.

dcblogger , June 19, 2019 at 12:20 pm

Maybe because Trump is president and bears the greatest responsibility in this particular time. A great piece and appreciate all the documentation.

Svante , June 19, 2019 at 7:00 am

I'd assumed, the "working class" had dissappeared, back during Reagan's Miracle? We'd still see each other, sitting dazed on porches & stoops of rented old places they'd previously; trying to garden, fix their car while smoking, drinking or dazed on something? Those able to morph into "middle class" lives, might've earned substantially less, especially benefits and retirement package wise. But, a couple decades later, it was their turn, as machines and foreigners improved productivity. You could lease a truck to haul imported stuff your kids could sell to each other, or help robots in some warehouse, but those 80s burger flipping, rent-a-cop & repo-man gigs dried up. Your middle class pals unemployable, everybody in PayDay Loan debt (without any pay day in sight?) SHTF Bug-out bags® & EZ Credit Bushmasters began showing up at yard sales, even up North. Opioids became the religion of the proletariat Whites simply had much farther to fall, more equity for our betters to steal. And it was damned near impossible to get the cops to shoot you?

Man, this just ain't turning out as I'd hoped. Need coffee!

Svante , June 19, 2019 at 7:55 am

We especially love the euphemism "Deaths O' Despair." since it works so well on a Chyron, especially supered over obese crackers waddling in crusty MossyOak™ Snuggies®

https://mobile.twitter.com/BernieSanders/status/1140998287933300736
https://m.youtube.com/watch?v=apxZvpzq4Mw

DanB , June 19, 2019 at 9:29 am

This is a very good article, but I have a comment about the section titled, "The Race Enigma." I think the key to understanding why African Americans have a lower suicide rate lies in understanding the sociological notion of community, and the related concept Emil Durkheim called social solidarity. This sense of solidarity and community among African Americans stands in contrast to the "There is no such thing as society" neoliberal zeitgeist that in fact produces feelings of extreme isolation, failure, and self-recriminations. An aside: as a white boy growing up in 1950s-60s Detroit I learned that if you yearned for solidarity and community what you had to do was to hang out with black people.

Amfortas the hippie , June 19, 2019 at 2:18 pm

" if you yearned for solidarity and community what you had to do was to hang out with black people."
amen, to that. in my case rural black people.
and I'll add Hispanics to that.
My wife's extended Familia is so very different from mine.
Solidarity/Belonging is cool.
I recommend it.
on the article we keep the scanner on("local news").we had a 3-4 year rash of suicides and attempted suicides(determined by chisme, or deduction) out here.
all of them were despair related more than half correlated with meth addiction itself a despair related thing.
ours were equally male/female, and across both our color spectrum.
that leaves economics/opportunity/just being able to get by as the likely cause.

David B Harrison , June 19, 2019 at 10:05 am

What's left out here is the vast majority of these suicides are men.

Christy , June 19, 2019 at 1:53 pm

Actually, in the article it states:
"There are gender-based differences as well. From 1999 to 2017, the rate for men was substantially higher than for women -- almost four-and-a-half times higher in the first of those years, slightly more than three-and-a-half times in the last."

jrs , June 19, 2019 at 1:58 pm

which in some sense makes despair the wrong word, as females are actually quite a bit more likely to be depressed for instance, but much less likely to "do the deed". Despair if we mean a certain social context maybe, but not just a psychological state.

Ex-Pralite Monk , June 19, 2019 at 10:10 am

obese cracker

You lay off the racial slur "cracker" and I'll lay off the racial slur "nigger". Deal?

rd , June 19, 2019 at 10:53 am

Suicide deaths are a function of the suicide attempt rate and the efficacy of the method used. A unique aspect of the US is the prevalence of guns in the society and therefore the greatly increased usage of them in suicide attempts compared to other countries. Guns are a very efficient way of committing suicide with a very high "success" rate. As of 2010, half of US suicides were using a gun as opposed to other countries with much lower percentages. So if the US comes even close to other countries in suicide rates then the US will surpass them in deaths. https://en.wikipedia.org/wiki/Suicide_methods#Firearms

Now we can add in opiates, especially fentanyl, that can be quite effective as well.

The economic crisis hitting middle America over the past 30 years has been quite focused on the states and populations that also tend to have high gun ownership rates. So suicide attempts in those populations have a high probability of "success".

Joe Well , June 19, 2019 at 11:32 am

I would just take this opportunity to add that the police end up getting called in to prevent a lot of suicide attempts, and just about every successful one.

In the face of so much blanket demonization of the police, along with justified criticism, it's important to remember that.

B:H , June 19, 2019 at 11:44 am

As someone who works in the mental health treatment system, acute inpatient psychiatry to be specific, I can say that of the 25 inpatients currently here, 11 have been here before, multiple times. And this is because of several issues, in my experience: inadequate inpatient resources, staff burnout, inadequate support once they leave the hospital, and the nature of their illnesses. It's a grim picture here and it's been this way for YEARS. Until MAJOR money is spent on this issue it's not going to get better. This includes opening more facilities for people to live in long term, instead of closing them, which has been the trend I've seen.

B:H , June 19, 2019 at 11:53 am

One last thing the CEO wants "asses in beds", aka census, which is the money maker. There's less profit if people get better and don't return. And I guess I wouldn't have a job either. Hmmmm: sickness generates wealth.

[Jun 18, 2019] Introduction to Bash Shell Parameter Expansions

Jun 18, 2019 | linuxconfig.org

Before proceeding further, let me give you one tip. In the example above the shell tried to expand a non-existing variable, producing a blank result. This can be very dangerous, especially when working with path names; therefore, when writing scripts, it's always recommended to use the nounset option, which causes the shell to exit with an error whenever a non-existing variable is referenced:

$ set -o nounset
$ echo "You are reading this article on $site_!"
bash: site_: unbound variable
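
If a variable may legitimately be unset, the default-value expansion keeps nounset happy; a minimal sketch in which the fallback string is an arbitrary example:

$ set -o nounset
$ echo "You are reading this article on ${site_:-linuxconfig.org}!"
You are reading this article on linuxconfig.org!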
Working with indirection

The use of the ${!parameter} syntax adds a level of indirection to our parameter expansion. What does it mean? The parameter which the shell will try to expand is not parameter ; instead, it will try to use the value of parameter as the name of the variable to be expanded. Let's explain this with an example. We all know the HOME variable expands to the path of the user's home directory in the system, right?

$ echo "${HOME}"
/home/egdoc

Very well, if now we assign the string "HOME", to another variable, and use this type of expansion, we obtain:

$ variable_to_inspect="HOME"
$ echo "${!variable_to_inspect}"
/home/egdoc

As you can see in the example above, instead of obtaining "HOME" as a result, as would have happened with a simple expansion, the shell used the value of variable_to_inspect as the name of the variable to expand; that's why we talk about a level of indirection.

Case modification expansion

This parameter expansion syntax lets us change the case of the alphabetic characters inside the string resulting from the expansion of the parameter. Say we have a variable called name ; to capitalize the text returned by the expansion of the variable, we would use the ${parameter^} syntax:

$ name="egidio"
$ echo "${name^}"
Egidio

What if we want to uppercase the entire string instead of just capitalizing it? Easy! We use the ${parameter^^} syntax:

$ echo "${name^^}"
EGIDIO

Similarly, to lowercase the first character of a string, we use the ${parameter,} expansion syntax:

$ name="EGIDIO"
$ echo "${name,}"
eGIDIO

To lowercase the entire string, instead, we use the ${parameter,,} syntax:

$ name="EGIDIO"
$ echo "${name,,}"
egidio

In all cases a pattern matching a single character can also be provided. When the pattern is provided, the operation is applied only to the parts of the original string that match it:

$ name="EGIDIO"
$ echo "${name,,[DIO]}"
EGidio

In the example above we enclosed the characters in square brackets: this causes any one of them to be matched as a pattern.

When using the expansions we explained in this paragraph and the parameter is an array subscripted by @ or * , the operation is applied to all the elements contained in it:

$ my_array=(one two three)
$ echo "${my_array[@]^^}"
ONE TWO THREE

When the index of a specific element in the array is referenced, instead, the operation is applied only to it:

$ my_array=(one two three)
$ echo "${my_array[2]^^}"
THREE
Substring removal

The next syntax we will examine allows us to remove a pattern from the beginning or from the end of the string resulting from the expansion of a parameter.

Remove matching pattern from the beginning of the string

The next syntax we will examine, ${parameter#pattern} , allows us to remove a pattern from the beginning of the string resulting from the parameter expansion:

$ name="Egidio"
$ echo "${name#Egi}"
dio

A similar result can be obtained by using the "${parameter##pattern}" syntax, but with one important difference: contrary to the one we used in the example above, which removes the shortest matching pattern from the beginning of the string, it removes the longest one. The difference is clearly visible when using the * character in the pattern :

$ name="Egidio Docile"
$ echo "${name#*i}"
dio Docile

In the example above we used * as part of the pattern that should be removed from the string resulting from the expansion of the name variable. This wildcard matches any character, so the pattern itself translates to "the 'i' character and everything before it". As we already said, when we use the ${parameter#pattern} syntax, the shortest matching pattern is removed; in this case it is "Egi". Let's see what happens when we use the "${parameter##pattern}" syntax instead:

$ name="Egidio Docile"
$ echo "${name##*i}"
le

This time the longest matching pattern is removed ("Egidio Doci"): the longest possible match includes the third 'i' and everything before it. The result of the expansion is just "le".

Remove matching pattern from the end of the string

The syntaxes we saw above remove the shortest or longest matching pattern from the beginning of the string. If we want the pattern to be removed from the end of the string instead, we must use the ${parameter%pattern} or ${parameter%%pattern} expansions, to remove, respectively, the shortest and the longest match from the end of the string:

$ name="Egidio Docile"
$ echo "${name%i*}"
Egidio Doc

In this example the pattern we provided roughly translates to "the 'i' character and everything after it, starting from the end of the string". The shortest match is "ile", so what is returned is "Egidio Doc". If we try the same example but use the syntax which removes the longest match, we obtain:

$ name="Egidio Docile"
$ echo "${name%%i*}"
Eg

In this case, once the longest match is removed, what is returned is "Eg".

In all the expansions we saw above, if parameter is an array and it is subscripted with * or @ , the removal of the matching pattern is applied to all its elements:

$ my_array=(one two three)
$ echo "${my_array[@]#*o}"
ne three

Search and replace pattern

We used the previous syntax to remove a matching pattern from the beginning or from the end of the string resulting from the expansion of a parameter. What if we want to replace pattern with something else? We can use the ${parameter/pattern/string} or ${parameter//pattern/string} syntax. The former replaces only the first occurrence of the pattern, the latter all the occurrences:

$ phrase="yellow is the sun and yellow is the
lemon"
$ echo "${phrase/yellow/red}"
red is the sun and yellow is the lemon

The parameter (phrase) is expanded, and the longest match of the pattern (yellow) is matched against it. The match is then replaced by the provided string (red). As you can observe only the first occurrence is replaced, so the lemon remains yellow! If we want to change all the occurrences of the pattern, we must prefix it with the / character:

$ phrase="yellow is the sun and yellow is the
lemon"
$ echo "${phrase//yellow/red}"
red is the sun and red is the lemon

This time all the occurrences of "yellow" have been replaced by "red". As you can see, the pattern is matched wherever it is found in the string resulting from the expansion of parameter . If we want to specify that it must be matched only at the beginning or at the end of the string, we must prefix it respectively with the # or % character, as in the sketch below.
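
A minimal sketch of the anchored forms, reusing the phrase variable from above:

$ phrase="yellow is the sun and yellow is the lemon"
$ echo "${phrase/#yellow/red}"
red is the sun and yellow is the lemon
$ echo "${phrase/%lemon/lime}"
yellow is the sun and yellow is the lime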

Just like in the previous cases, if parameter is an array subscripted by either * or @ , the substitution happens in each one of its elements:

$ my_array=(one two three)
$ echo "${my_array[@]/o/u}"
une twu three
Substring expansion

The ${parameter:offset} and ${parameter:offset:length} expansions let us expand only a part of the parameter, returning a substring starting at the specified offset and length characters long. If the length is not specified the expansion proceeds until the end of the original string. This type of expansion is called substring expansion :

$ name="Egidio Docile"
$ echo "${name:3}"
dio Docile

In the example above we provided just the offset , without specifying the length , therefore the result of the expansion was the substring obtained by starting at the character specified by the offset (3).

If we specify a length, the substring will start at offset and will be length characters long:

$ echo "${name:3:3}"
dio

If the offset is negative, it is calculated from the end of the string. In this case an additional space must be added after : otherwise the shell will consider it as another type of expansion identified by :- which is used to provide a default value if the parameter to be expanded doesn't exist (we talked about it in the article about managing the expansion of empty or unset bash variables ):

$ echo "${name: -6}"
Docile

If the provided length is negative, instead of being interpreted as the total number of characters the resulting string should be long, it is considered as an offset to be calculated from the end of the string. The result of the expansion will therefore be a substring starting at offset and ending at length characters from the end of the original string:

$ echo "${name:7:-3}"
Doc

When using this expansion and parameter is an indexed array subscripted by * or @ , the offset is relative to the indexes of the array elements. For example:

$ my_array=(one two three)
$ echo "${my_array[@]:0:2}"
one two
$ echo "${my_array[@]: -2}"
two three

[Jun 17, 2019] Accessing remote desktops by Seth Kenlon

Jun 17, 2019 | www.redhat.com

Accessing remote desktops

Need to see what's happening on someone else's screen? Here's what you need to know about accessing remote desktops.

Posted June 13, 2019 | by Seth Kenlon (Red Hat)

Anyone who's worked a support desk has had the experience: sometimes, no matter how descriptive your instructions, and no matter how concise your commands, it's just easier and quicker for everyone involved to share screens. Likewise, anyone who's ever maintained a server located in a loud and chilly data center -- or across town, or the world -- knows that often a remote viewer is the easiest method for viewing distant screens.

Linux is famously capable of being managed without seeing a GUI, but that doesn't mean you have to manage your box that way. If you need to see the desktop of a computer that you're not physically in front of, there are plenty of tools for the job.

Barriers

Half the battle of successfully screen sharing is getting into the target computer. That's by design, of course. It should be difficult to get into a computer without explicit consent.

Usually, there are up to three barriers to accessing a remote machine:

  1. The network firewall
  2. The target computer's firewall
  3. Screen share settings

Specific instruction on how to get past each barrier is impossible. Every network and every computer is configured uniquely, but here are some possible solutions.

Barrier 1: The network firewall

A network firewall is the target computer's LAN entry point, often a part of the router (whether an appliance from an Internet Service Provider or a dedicated server in a rack). In order to pass through the firewall and access a computer remotely, your network firewall must be configured so that the appropriate port for the remote desktop protocol you're using is accessible.

The most common, and most universal, protocol for screen sharing is VNC.

If the network firewall is on a Linux server you can access, you can broadly allow VNC traffic to pass through using firewall-cmd , first by getting your active zone, and then by allowing VNC traffic in that zone:

$ sudo firewall-cmd --get-active-zones
example-zone
  interfaces: enp0s31f6
$ sudo firewall-cmd --add-service=vnc-server --zone=example-zone

If you're not comfortable allowing all VNC traffic into the network, add a rich rule to firewalld in order to let in VNC traffic from only your IP address. For example, using an example IP address of 93.184.216.34, a rule to allow VNC traffic is:

$ sudo firewall-cmd \
--add-rich-rule='rule family="ipv4" source address="93.184.216.34" service name=vnc-server accept'

Note that these are runtime rules: running firewall-cmd --reload restores the permanent configuration and will drop them. To keep the change across reloads and reboots, repeat the command with the --permanent flag and then reload the rules:

$ sudo firewall-cmd --reload

If network reconfiguration isn't possible, see the section "Screen sharing through a browser."

Barrier 2: The computer's firewall

Most personal computers have built-in firewalls. Users who are mindful of security may actively manage their firewall. Others, though, blissfully trust their default settings. This means that when you're trying to access their computer for screen sharing, their firewall may block incoming remote connection requests without the user even realizing it. Your request to view their screen may successfully pass through the network firewall only to be silently dropped by the target computer's firewall.

Changing zones in Linux.

To remedy this problem, have the user either lower their firewall or, on Fedora and RHEL, place their computer into the trusted zone. Do this only for the duration of the screen sharing session. Alternatively, have them add either one of the rules you added to the network firewall (if your user is on Linux).

A reboot is a simple way to ensure the new firewall setting is instantiated, so that's probably the easiest next step for your user. Power users can instead reload the firewall rules manually :

$ sudo firewall-cmd --reload

If you have a user override their computer's default firewall, remember to close the session by instructing them to re-enable the default firewall zone. Don't leave the door open behind you!
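
The temporary allowance can be removed the same way it was added; a sketch assuming the runtime rule and zone name used earlier:

$ sudo firewall-cmd --remove-service=vnc-server --zone=example-zone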

Barrier 3: The computer's screen share settings

To share another computer's screen, the target computer must be running remote desktop software (technically, a remote desktop server , since this software listens to incoming requests). Otherwise, you have nothing to connect to.

Some desktops, like GNOME, provide screen sharing options, which means you don't have to launch a separate screen sharing application. To activate screen sharing in GNOME, open Settings and select Sharing from the left column. In the Sharing panel, click on Screen Sharing and toggle it on:

Remote desktop viewers

There are a number of remote desktop viewers out there. Here are some of the best options.

GNOME Remote Desktop Viewer

The GNOME Remote Desktop Viewer application is codenamed Vinagre . It's a simple application that supports multiple protocols, including VNC, Spice, RDP, and SSH. Vinagre's interface is intuitive, and yet this application offers many options, including whether you want to control the target computer or only view it.

If Vinagre's not already installed, use your distribution's package manager to add it. On Red Hat Enterprise Linux and Fedora , use:

$ sudo dnf install vinagre

In order to open Vinagre, go to the GNOME desktop's Activities menu and launch Remote Desktop Viewer . Once it opens, click the Connect button in the top left corner. In the Connect window that appears, select the VNC protocol. In the Host field, enter the IP address of the computer you're connecting to. If you want to use the computer's hostname instead, you must have a valid DNS service in place, or Avahi , or entries in /etc/hosts . Do not prepend your entry with a username.

Select any additional options you prefer, and then click Connect .

If you use the GNOME Remote Desktop Viewer as a full-screen application, move your mouse to the screen's top center to reveal additional controls. Most importantly, the exit fullscreen button.

If you're connecting to a Linux virtual machine, you can use the Spice protocol instead. Spice is robust, lightweight, and transmits both audio and video, usually with no noticeable lag.

TigerVNC and TightVNC

Sometimes you're not on a Linux machine, so the GNOME Remote Desktop Viewer isn't available. As usual, open source has an answer. In fact, open source has several answers, but two popular ones are TigerVNC and TightVNC , which are both cross-platform VNC viewers. TigerVNC offers separate downloads for each platform, while TightVNC has a universal Java client.

Both of these clients are simple, with additional options included in case you need them. The defaults are generally acceptable. In order for these particular clients to connect, turn off the encryption setting for GNOME's embedded VNC server (codenamed Vino) as follows:

$ gsettings set org.gnome.Vino require-encryption false

This modification must be done on the target computer before you attempt to connect, either in person or over SSH.

Red Hat Enterprise Linux 7 remoted to RHEL 8 with TightVNC

Use the option for an SSH tunnel to ensure that your VNC connection is fully encrypted.
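
A minimal sketch of such a tunnel, assuming the VNC server listens on display :1 (TCP port 5901); user and target-host are placeholders:

$ ssh -L 5901:localhost:5901 user@target-host

Then point TigerVNC or TightVNC at localhost:5901 instead of the remote address.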

Screen sharing through a browser

If network re-configuration is out of the question, sharing over an online meeting or collaboration platform is yet another option. The best open source platform for this is Nextcloud , which offers screen sharing over plain old HTTPS. With no firewall exceptions and no additional encryption required, Nextcloud's Talk app provides video and audio chat, plus whole-screen sharing using WebRTC technology.

This option requires a Nextcloud installation, but given that it's the best open source groupware package out there, it's probably worth looking at if you're not already running an instance. You can install Nextcloud yourself, or you can purchase hosting from Nextcloud.

To install the Talk app, go to Nextcloud's app store. Choose the Social & Communication category and then select the Talk plugin.

Next, add a user for the target computer's owner. Have them log into Nextcloud, and then click on the Talk app in the top left of the browser window.

When you start a new chat with your user, they'll be prompted by their browser to allow notifications from Nextcloud. Whether they accept or decline, Nextcloud's interface alerts them of the incoming call in the notification area at the top right corner.

Once you're in the call with your remote user, have them click on the Share screen button at the bottom of their chat window.

Remote screens

Screen sharing can be an easy method of support as long as you plan ahead so your network and clients support it from trusted sources. Integrate VNC into your support plan early, and use screen sharing to help your users get better at what they do.


[Jun 17, 2019] How to use tee command in Linux by Fahmida Yesmin

Several examples. Mostly trivial. But a couple are interesting.
Notable quotes:
"... `tee` command can be used to store the output of any command into more than one files. ..."
"... `tee` command with '-i' option is used in this example to ignore any interrupt at the time of command execution. ..."
Jun 17, 2019 | linuxhint.com

Example-3: Writing the output into multiple files

`tee` command can be used to store the output of any command into more than one file. You have to write the file names separated by spaces to do this task. Run the following commands to store the output of the `date` command into two files, output1.txt and output2.txt.

$ date | tee output1.txt output2.txt
$ cat output1.txt output2.txt

... ... ...

Example-4: Ignoring interrupt signal

`tee` command with the '-i' option is used in this example to ignore any interrupt at the time of command execution. So, the command will execute properly even if the user presses CTRL+C. Run the following commands from the terminal and check the output.

$ wc -l output.txt | tee -i output3.txt
$ cat output.txt
$ cat output3.txt

... ... ...

Example-5: Passing `tee` command output into another command

The output of the `tee` command can be passed to another command by using a pipe. In this example, the output of the first command is passed to `tee`, and the output of `tee` is passed to another command. Run the following commands from the terminal.

$ ls | tee output4.txt | wc -lcw
$ ls
$ cat output4.txt

Output:
... ... ...
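
Another common use of `tee`, not shown above, is writing to a file that your user cannot write to directly: the shell performs redirection with your own privileges, so you let `tee`, running under sudo, do the writing instead. A sketch with a hypothetical hosts entry:

$ echo "10.0.0.5 backupserver" | sudo tee -a /etc/hosts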

[Jun 10, 2019] Screen Command Examples To Manage Multiple Terminal Sessions

Jun 10, 2019 | www.ostechnix.com


Screen Command Examples To Manage Multiple Terminal Sessions

by sk · Published June 6, 2019 · Updated June 7, 2019

GNU Screen is a terminal multiplexer (window manager). As the name says, Screen multiplexes the physical terminal between multiple interactive shells, so we can perform different tasks in each terminal session. All screen sessions run their programs completely independently. So, a program or process running inside a screen session will keep running even if the session is accidentally closed or disconnected. For instance, when upgrading an Ubuntu server via SSH, Screen will keep the upgrade process running even if your SSH session is terminated for any reason.

The GNU Screen allows us to easily create multiple screen sessions, switch between different sessions, copy text between sessions, attach to or detach from a session at any time, and so on. It is one of the important command line tools every Linux admin should learn and use wherever necessary. In this brief guide, we will see the basic usage of the Screen command with examples in Linux.

Installing GNU Screen

GNU Screen is available in the default repositories of most Linux operating systems.

To install GNU Screen on Arch Linux, run:

$ sudo pacman -S screen

On Debian, Ubuntu, Linux Mint:

$ sudo apt-get install screen

On Fedora:

$ sudo dnf install screen

On RHEL, CentOS:

$ sudo yum install screen

On SUSE/openSUSE:

$ sudo zypper install screen

Let us go ahead and see some screen command examples.

Screen Command Examples To Manage Multiple Terminal Sessions

The default prefix shortcut to all commands in Screen is Ctrl+a . You need to use this shortcut a lot when using Screen. So, just remember this keyboard shortcut.

Create new Screen session

Let us create a new Screen session and attach to it. To do so, type the following command in terminal:

screen

Now, run any program or process inside this session. The running process or program will keep running even if you're disconnected from this session.

Detach from Screen sessions

To detach from inside a screen session, press Ctrl+a and d . You don't have to press both key combinations at the same time. First press Ctrl+a and then press d . After detaching from a session, you will see output something like below.

[detached from 29149.pts-0.sk]

Here, 29149 is the screen ID and pts-0.sk is the name of the screen session. You can attach, detach and kill Screen sessions using either screen ID or name of the respective session.

Create a named session

You can also create a screen session with any custom name of your choice, instead of the default session name, like below.

screen -S ostechnix

The above command will create a new screen session with name "xxxxx.ostechnix" and attach to it immediately. To detach from the current session, press Ctrl+a followed by d .

Naming screen sessions can be helpful when you want to find which processes are running in which sessions. For example, when you set up a LAMP stack inside a session, you can simply name it like below.

screen -S lampstack
Create detached sessions

Sometimes, you might want to create a session, but don't want to attach it automatically. In such cases, run the following command to create detached session named "senthil" :

screen -S senthil -d -m

Or, shortly:

screen -dmS senthil

The above command will create a session called "senthil", but won't attach to it.
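
A detached session can also be given a command to run at creation time, which is handy for long jobs started over SSH; a minimal sketch in which the session name and the rsync paths are placeholders (the session ends when the command finishes):

screen -dmS backup rsync -a /home/user/data/ /mnt/backup/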

List Screen sessions

To list all running sessions (attached or detached), run:

screen -ls

Sample output:

There are screens on:
	29700.senthil	(Detached)
	29415.ostechnix	(Detached)
	29149.pts-0.sk	(Detached)
3 Sockets in /run/screens/S-sk.

As you can see, I have three running sessions and all are detached.

Attach to Screen sessions

If you want to attach to a session at any time, for example 29415.ostechnix , simply run:

screen -r 29415.ostechnix

Or,

screen -r ostechnix

Or, just use the screen ID:

screen -r 29415

To verify if we are attached to the aforementioned session, simply list the open sessions and check.

screen -ls

Sample output:

There are screens on:
        29700.senthil   (Detached)
        29415.ostechnix (Attached)
        29149.pts-0.sk  (Detached)
3 Sockets in /run/screens/S-sk.

As you see in the above output, we are currently attached to the 29415.ostechnix session. To exit from the current session, press Ctrl+a followed by d .

Create nested sessions

When we run "screen" command, it will create a single session for us. We can, however, create nested sessions (a session inside a session).

First, create a new session or attach to an opened session. I am going to create a new session named "nested".

screen -S nested

Now, press Ctrl+a and c inside the session to create another session. Just repeat this to create any number of nested Screen sessions. Each session will be assigned a number. The numbering starts from 0 .

You can move to the next session by pressing Ctrl+a followed by n , and move to the previous one by pressing Ctrl+a followed by p .

Here is the list of the important keyboard shortcuts used above to manage nested sessions:

  Ctrl+a c - create a new nested session
  Ctrl+a n - switch to the next session
  Ctrl+a p - switch to the previous session
  Ctrl+a d - detach from the current session

Lock sessions

Screen has an option to lock a screen session. To do so, press Ctrl+a and x . Enter your Linux password to lock the screen.

Screen used by sk <sk> on ubuntuserver.
Password:
Logging sessions

You might want to log everything when you're in a Screen session. To do so, just press Ctrl+a and H .

Alternatively, you can enable the logging when starting a new session using -L parameter.

screen -L

From now on, all activities you've done inside the session will be recorded and stored in a file named screenlog.x in your $HOME directory. Here, x is a number.

You can view the contents of the log file using cat command or any text viewer applications.

Log screen sessions



Kill Screen sessions

If a session is not required anymore, just kill it. To kill a detached session named "senthil":

screen -r senthil -X quit

Or,

screen -X -S senthil quit

Or,

screen -X -S 29415 quit

If there are no open sessions, you will see the following output:

$ screen -ls
No Sockets found in /run/screens/S-sk.

For more details, refer to the man pages.

$ man screen

There is also a similar command line utility named "Tmux" which does the same job as GNU Screen. To know more about it, refer to the following guide.


[Jun 06, 2019] For Profit College, Student Loan Default, and the Economic Impact of Student Loans

We should object to the complete neoliberal "instrumentalization" of education: education has become just a means to get a nicely paid job. And even this hope is mostly an illusion for all but the top 5% of students...
And while students share their own part of the responsibility for accumulating the debt, the predatory behaviour of neoliberal universities is an important factor that should not be discounted, and the perpetrators should be held responsible. Especially dirty are the tricks of ballooning the size of the debt and pushing students into "hopeless" specialties, which would be fine if they were sons or daughters of the well-to-do whose parents still support them financially.
Actually, neoliberalism justifies predatory behaviour, and as such it is a doomed social system: without solidarity, some members of the financial oligarchy that rules the country might sooner or later hang from the lampposts.
Notable quotes:
"... It also never ceases to amaze me the number of anti-educational opinions which flare up when the discussion of student loan default arises. There are always those who will prophesize there is no need to attain a higher level of education as anyone could be something else and be successful and not require a higher level of education. Or they come forth with the explanation on how young 18 year-olds and those already struggling should be able to ascertain the risk of higher debt when the cards are already stacked against them legally. ..."
"... There does not appear to be much movement on the part of Congress to reconcile the issues in favor of students as opposed to the non-profit and for profit institutes. ..."
"... It's easy to explain, really. According to the Department of Education ( https://studentaid.ed.gov/sa/repay-loans/understand/plans ) you're going to be paying off that loan at minimum payments for 25 years. Assuming your average bachelor's degree is about $30k if you go all-loans ( http://collegecost.ed.gov/catc/ ) and the average student loan interest rate is a generous 5% ( http://www.direct.ed.gov/calc.html ), you're going to be paying $175 a month for a sizable chunk of your adult life. ..."
"... Majoring in IT or Computer Science would have a been a great move in the late 1990's; however, if you graduated around 2000, you likely would have found yourself facing a tough job market.. Likewise, majoring in petroleum engineering or petroleum geology would have seemed like a good move a couple of years ago; however, now that oil prices are crashing, it's presumably a much tougher job market. ..."
"... To confuse going to college with vocational education is to commit a major category error. I think bright, ambitious high school graduates– who are looking for upward social mobility– would be far better served by a plumbing or carpentry apprenticeship program. A good plumber can earn enough money to send his or her children to Yale to study Dante, Boccaccio, and Chaucer. ..."
"... A bright working class kid who goes off to New Haven, to study medieval lit, will need tremendous luck to overcome the enormous class prejudice she will face in trying to establish herself as a tenure-track academic. If she really loves medieval literature for its own sake, then to study it deeply will be "worth it" even if she finds herself working as a barista or store-clerk. ..."
"... As a middle-aged doctoral student in the humanities you should not even be thinking much about your loans. Write the most brilliant thesis that you can, get a book or some decent articles published from it– and swim carefully in the shark-infested waters of academia until you reach the beautiful island of tenured full-professorship. If that island turns out to be an ever-receding mirage, sell your soul to our corporate overlords and pay back your loans! Alternatively, tune in, drop out, and use your finely tuned research and rhetorical skills to help us overthrow the kleptocratic regime that oppresses us all!! ..."
"... Genuine education should provide one with profound contentment, grateful for the journey taken, and a deep appreciation of life. ..."
"... Instead many of us are left confused – confusing career training (redundant and excessive, as it turned out, unfortunate for the student, though not necessarily bad for those on the supply side, one must begrudgingly admit – oops, there goes one's serenity) with enlightenment. ..."
"... We all should be against Big Educational-Complex and its certificates-producing factory education that does not put the student's health and happiness up there with co-existing peacefully with Nature. ..."
"... Remember DINKs? Dual Income No Kids. Dual Debt Bad Job No House No Kids doesn't work well for acronyms. Better for an abbreviated hash tag? ..."
"... I graduated law school with $100k+ in debt inclusive of undergrad. I've never missed a loan payment and my credit score is 830. my income has never reached $100k. my payments started out at over $1000 a month and through aggressive payment and refinancing, I've managed to reduce the payments to $500 a month. I come from a lower middle class background and my parents offered what I call 'negative help' throughout college. ..."
"... my unfortunate situation is unique and I wouldn't wish my debt on anyone. it's basically indentured servitude. it's awful, it's affects my life and health in ways no one should have to live, I have all sorts of stress related illnesses. I'm basically 2 months away from default of everything. my savings is negligible and my net worth is still negative 10 years after graduating. ..."
"... My story is very similar to yours, although I haven't had as much success whittling down my loan balances. But yes, it's made me a socialist as well; makes me wonder how many of us, i.e. ppl radicalized by student loans, are out there. Perhaps the elites' grand plan to make us all debt slaves will eventually backfire in more ways than via the obvious economic issues? ..."
Nov 09, 2015 | naked capitalism

It also never ceases to amaze me the number of anti-educational opinions which flare up when the discussion of student loan default arises. There are always those who will prophesize there is no need to attain a higher level of education as anyone could be something else and be successful and not require a higher level of education. Or they come forth with the explanation on how young 18 year-olds and those already struggling should be able to ascertain the risk of higher debt when the cards are already stacked against them legally. In any case during a poor economy, those with more education appear to be employed at a higher rate than those with less education. The issue for those pursuing an education is the ever increasing burden and danger of student loans and associated interest rates which prevent younger people from moving into the economy successfully after graduation, the failure of the government to support higher education and protect students from for-profit fraud, the increased risk of default and becoming indentured to the government, and the increased cost of an education which has surpassed healthcare in rising costs.

There does not appear to be much movement on the part of Congress to reconcile the issues in favor of students as opposed to the non-profit and for profit institutes.

Ranger Rick, November 9, 2015 at 11:34 am

It's easy to explain, really. According to the Department of Education ( https://studentaid.ed.gov/sa/repay-loans/understand/plans ) you're going to be paying off that loan at minimum payments for 25 years. Assuming your average bachelor's degree is about $30k if you go all-loans ( http://collegecost.ed.gov/catc/ ) and the average student loan interest rate is a generous 5% ( http://www.direct.ed.gov/calc.html ), you're going to be paying $175 a month for a sizable chunk of your adult life.

If you're merely hitting the median income of a bachelor's degree after graduation, $55k (http://nces.ed.gov/fastfacts/display.asp?id=77 ), and good luck with that in this economy, you're still paying ~31.5% of that in taxes (http://www.oecd.org/ctp/tax-policy/taxing-wages-20725124.htm ) you're left with $35.5k before any other costs. Out of that, you're going to have to come up with the down payment to buy a house and a car after spending more money than you have left (http://www.bls.gov/cex/csxann13.pdf).
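
As a quick editorial sanity check (not part of the original comment), the $175/month figure does follow from the standard loan amortization formula with principal P = $30,000, monthly rate r = 0.05/12, and n = 300 monthly payments:

M = \frac{P\,r}{1-(1+r)^{-n}} = \frac{30000 \times (0.05/12)}{1-(1+0.05/12)^{-300}} \approx \$175 \text{ per month}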

Louis, November 9, 2015 at 12:33 pm

The last paragraph sums it up perfectly, especially the predictable counterarguments. Accurately assessing what job in demand several years down the road is very difficult, if not impossible.

Majoring in IT or Computer Science would have a been a great move in the late 1990's; however, if you graduated around 2000, you likely would have found yourself facing a tough job market.. Likewise, majoring in petroleum engineering or petroleum geology would have seemed like a good move a couple of years ago; however, now that oil prices are crashing, it's presumably a much tougher job market.

Do we blame the computer science majors graduating in 2000 or the graduates struggling to break into the energy industry, now that oil prices have dropped, for majoring in "useless" degrees? It's much easier to create a strawman about useless degrees that accept the fact that there is a element of chance in terms of what the job market will look like upon graduation.

The cost of higher education is absurd and there simply aren't enough good jobs to go around-there are people out there who majored in the "right" fields and have found themselves underemployed or unemployed-so I'm not unsympathetic to the plight of many people in my generation.

At the same time, I do believe in personal responsibility-I'm wary of creating a moral hazard if people can discharge loans in bankruptcy. I've been paying off my student loans (grad school) for a couple of years-I kept the level debt below any realistic starting salary-and will eventually have the loans paid off, though it may be a few more years.

I am really conflicted between believing in personal responsibility but also seeing how this generation has gotten screwed. I really don't know what the right answer is.

Ulysses, November 9, 2015 at 1:47 pm

"The cost of higher education is absurd and there simply aren't enough good jobs to go around-there are people out there who majored in the "right" fields and have found themselves underemployed or unemployed-so I'm not unsympathetic to the plight of many people in my generation."

To confuse going to college with vocational education is to commit a major category error. I think bright, ambitious high school graduates– who are looking for upward social mobility– would be far better served by a plumbing or carpentry apprenticeship program. A good plumber can earn enough money to send his or her children to Yale to study Dante, Boccaccio, and Chaucer.

A bright working class kid who goes off to New Haven, to study medieval lit, will need tremendous luck to overcome the enormous class prejudice she will face in trying to establish herself as a tenure-track academic. If she really loves medieval literature for its own sake, then to study it deeply will be "worth it" even if she finds herself working as a barista or store-clerk.

None of this, of course, excuses the outrageously high tuition charges, administrative salaries, etc. at the "top schools." They are indeed institutions that reinforce class boundaries. My point is that strictly career education is best begun at a less expensive community college. After working in the IT field, for example, a talented associate's degree-holder might well find that her employer will subsidize study at an elite school with an excellent computer science program.

My utopian dream would be a society where all sorts of studies are open to everyone– for free. Everyone would have a basic Job or Income guarantee and could study as little, or as much, as they like!

Ulysses, November 9, 2015 at 2:05 pm

As a middle-aged doctoral student in the humanities you should not even be thinking much about your loans. Write the most brilliant thesis that you can, get a book or some decent articles published from it– and swim carefully in the shark-infested waters of academia until you reach the beautiful island of tenured full-professorship.

If that island turns out to be an ever-receding mirage, sell your soul to our corporate overlords and pay back your loans! Alternatively, tune in, drop out, and use your finely tuned research and rhetorical skills to help us overthrow the kleptocratic regime that oppresses us all!!

subgenius, November 9, 2015 at 3:07 pm

except (in my experience) the corporate overlords want young meat.

I have 2 masters degrees 2 undergraduate degrees and a host of random diplomas – but at 45, I am variously too old, too qualified, or lacking sufficient recent corporate experience in the field to get hired

Trying to get enough cash to get a contractor license seems my best chance at anything other than random day work.

MyLessThanPrimeBeef, November 9, 2015 at 3:41 pm

Genuine education should provide one with profound contentment, grateful for the journey taken, and a deep appreciation of life.

Instead many of us are left confused – confusing career training (redundant and excessive, as it turned out, unfortunate for the student, though not necessarily bad for those on the supply side, one must begrudgingly admit – oops, there goes one's serenity) with enlightenment.

"I would spend another 12 soul-nourishing years pursuing those non-profit degrees' vs 'I can't feed my family with those paper certificates.'

jrs, November 9, 2015 at 2:55 pm

I am anti-education as the solution to our economic woes. We need jobs or a guaranteed income. And we need to stop outsourcing the jobs that exist. And we need a much higher minimum wage. And maybe we need work sharing. I am also against using screwdrivers to pound in a nail. But why are you so anti screwdriver anyway?

And I see calls for more and more education used to make it seem ok to pay people without much education less than a living wage. Because they deserve it for being whatever drop outs. And it's not ok.

I don't actually have anything against the professors (except their overall political cowardice in times demanding radicalism!). Now the administrators, yea I can see the bloat and the waste there. But mostly, I have issues with more and more education being preached as the answer to a jobs and wages crisis.

MyLessThanPrimeBeef -> jrs, November 9, 2015 at 3:50 pm

We all should be against Big Educational-Complex and its certificates-producing factory education that does not put the student's health and happiness up there with co-existing peacefully with Nature.

Kris Alman, November 9, 2015 at 11:11 am

Remember DINKs? Dual Income No Kids. Dual Debt Bad Job No House No Kids doesn't work well for acronyms. Better for an abbreviated hash tag?

debitor serf, November 9, 2015 at 7:17 pm

I graduated law school with $100k+ in debt inclusive of undergrad. I've never missed a loan payment and my credit score is 830. my income has never reached $100k. my payments started out at over $1000 a month and through aggressive payment and refinancing, I've managed to reduce the payments to $500 a month. I come from a lower middle class background and my parents offered what I call 'negative help' throughout college.

my unfortunate situation is unique and I wouldn't wish my debt on anyone. it's basically indentured servitude. it's awful, it's affects my life and health in ways no one should have to live, I have all sorts of stress related illnesses. I'm basically 2 months away from default of everything. my savings is negligible and my net worth is still negative 10 years after graduating.

student loans, combined with a rigged system, turned me into a closeted socialist. I am smart, hard working and resourceful. if I can't make it in this world, heck, then who can? few, because the system is rigged!

I have no problems at all taking all the wealth of the oligarchs and redistributing it. people look at me like I'm crazy. confiscate it all I say, and reset the system from scratch. let them try to make their billions in a system where things are fair and not rigged...

Ramoth, November 9, 2015 at 9:23 pm

My story is very similar to yours, although I haven't had as much success whittling down my loan balances. But yes, it's made me a socialist as well; makes me wonder how many of us, i.e. ppl radicalized by student loans, are out there. Perhaps the elites' grand plan to make us all debt slaves will eventually backfire in more ways than via the obvious economic issues?

[May 24, 2019] Deal with longstanding issues like government favoritism toward local companies

May 24, 2019 | theregister.co.uk

How is it that that can be a point of contention? Name me one country in this world that doesn't favor local companies.

These company representatives who are complaining about local favoritism would be howling like wolves if Huawei was given favor in the US over any one of them.

I'm not saying that there are no reasons to be unhappy about business with China, but that is not one of them.


A.P. Veening , 1 day

Re: "deal with longstanding issues like government favoritism toward local companies"

Name me one country in this world that doesn't favor local companies.

I'll give you two: Liechtenstein and Vatican City, though admittedly neither has a lot of local companies.

STOP_FORTH , 1 day
Re: "deal with longstanding issues like government favoritism toward local companies"

Doesn't Liechtenstein make most of the dentures in the EU. Try taking a bite out of that market.

Kabukiwookie , 1 day
Re: "deal with longstanding issues like government favoritism toward local companies"

How can you leave Andorra out of that list?

A.P. Veening , 14 hrs
Re: "deal with longstanding issues like government favoritism toward local companies"

While you are at it, how can you leave Monaco and San Marino out of that list?

[May 24, 2019] Huawei equipment can't be trusted? As distinct from Cisco which we already have backdoored :]

May 24, 2019 | theregister.co.uk

" The Trump administration, backed by US cyber defense experts, believes that Huawei equipment can't be trusted " .. as distinct from Cisco which we already have backdoored :]

Sir Runcible Spoon
Re: Huawei equipment can't be trusted?

Didn't someone once say "I don't trust anyone who can't be bribed"?

Not sure why that popped into my head.