Softpanorama

May the source be with you, but remember the KISS principle ;-)
Skepticism and critical thinking are not a panacea, but they can help us understand the world better

Slightly Skeptical View on Enterprise Unix Administration



The KISS rule can be expanded as: Keep It Simple, Sysadmin ;-)

This page is written as a protest against overcomplexity and the bizarre data center atmosphere typical of "semi-outsourced" or fully outsourced datacenters ;-). Unix/Linux sysadmins are being killed by the overcomplexity of the environment. Large swaths of Linux knowledge (and many excellent books) were rendered obsolete by the introduction of systemd. This hits especially hard the older, most experienced members of the team, who have a unique store of organizational knowledge and whose careers allowed them to watch the development of Linux almost from version 0.92.

System administration is still a unique area where people with the ability to program can display their creativity with relative ease and can still enjoy the "old style" atmosphere of software development, when you yourself write the specification, implement it, test the program, and then use it in daily work. This is a very exciting, unique opportunity that no DevOps position can ever provide. Why, then, are an increasing number of sysadmins far from excited about working in those positions, or outright want to quit the field (or, at least, work four days a week)? And that includes sysadmins who have tremendous speed and capability to process and learn new information. Even for them "enough is enough." The answer is different for each individual sysadmin, but usually it is some variation of the following themes:

  1. Too rapid a pace of change, with a lot of "change for the sake of change" often serving as a smokescreen for outsourcing efforts (VMware yesterday, Azure today, Amazon cloud tomorrow, etc.)
  2. Excessive automation can be a problem. It increases the number of layers between the fundamental process and the sysadmin, and thus makes troubleshooting much harder. Moreover, it often does not produce tangible benefits in comparison with simpler tools, while increasing the level of complexity of the environment. See Unix Configuration Management Tools for a deeper discussion of this issue.
  3. Job insecurity due to outsourcing/offshoring -- constant pressure to cut headcount in the name of "efficiency," which in reality is more connected with the size of top brass bonuses than with anything related to how the IT datacenter functions. Sysadmins over 50 are an especially vulnerable category here, and if they are laid off they have almost no chance of getting back into the IT workforce at their previous level of salary and benefits. Often the only job they can find is at Home Depot or a similar retail outlet. See Over 50 and unemployed
  4. A back-breaking level of overcomplexity and bizarre tech decisions crippling the data center (aka crapification). A "Potemkin village culture" often prevails in the evaluation of software in large US corporations. The surface is more important than the substance. The marketing brochures and manuals are no different from mainstream news media in the level of BS they spew. IBM is especially guilty (look how they marketed IBM Watson; as Oren Etzioni, CEO of the Allen Institute for AI, noted, "the only intelligent thing about Watson was IBM PR department [push]").
  5. Bureaucratization/fossilization of the IT environments of large companies. That includes using "Performance Reviews" (the prevalent IT variant of waterboarding ;-) for the enforcement of management policies, priorities, whims, etc. See Office Space (1999) - IMDb for a humorous take on IT culture. That creates alienation from the company (as it should). One can think of the modern corporate data center as an organization where the administration has tremendously more power in the decision-making process and eats up more of the corporate budget, while the people who do the actual work are increasingly ignored and their share of the budget gradually shrinks.
  6. "Neoliberal austerity" (which is essentially another name for the "war on labor") -- Drastic cost cutting measures at the expense of workforce such as elimination of external vendor training, crapification of benefits, limitation of business trips and enforcing useless or outright harmful for business "new" products instead of "tried and true" old with  the same function.  They are often accompanied by the new cultural obsession with "character" (as in "he/she has a right character" -- which in "Neoliberal speak" means he/she is a toothless conformist ;-), glorification of groupthink,   and the intensification of surveillance.

As Charlie Schluting noted in 2010 (Enterprise Networking Planet, April 7, 2010):

What happened to the old "sysadmin" of just a few years ago? We've split what used to be the sysadmin into application teams, server teams, storage teams, and network teams. There were often at least a few people, the holders of knowledge, who knew how everything worked, and I mean everything. Every application, every piece of network gear, and how every server was configured -- these people could save a business in times of disaster.

Now look at what we've done. Knowledge is so decentralized we must invent new roles to act as liaisons between all the IT groups.

Architects now hold much of the high-level "how it works" knowledge, but without knowing how any one piece actually does work.

In organizations with more than a few hundred IT staff and developers, it becomes nearly impossible for one person to do and know everything. This movement toward specializing in individual areas seems almost natural. That, however, does not provide a free ticket for people to turn a blind eye.

Specialization

You know the story: Company installs new application, nobody understands it yet, so an expert is hired. Often, the person with a certification in using the new application only really knows how to run that application. Perhaps they aren't interested in learning anything else, because their skill is in high demand right now. And besides, everything else in the infrastructure is run by people who specialize in those elements. Everything is taken care of.

Except, how do these teams communicate when changes need to take place? Are the storage administrators teaching the Windows administrators about storage multipathing; or, worse, logging in and setting it up because it's faster for the storage gurus to do it themselves? A fundamental level of knowledge is often lacking, which makes it very difficult for teams to brainstorm about new ways to evolve IT services. The business environment has made it OK for IT staffers to specialize and only learn one thing.

If you hire someone certified in the application, operating system, or network vendor you use, that is precisely what you get. Certifications may be a nice filter to quickly identify who has direct knowledge in the area you're hiring for, but often they indicate specialization or compensation for lack of experience.

Resource Competition

Does your IT department function as a unit? Even 20-person IT shops have turf wars, so the answer is very likely, "no." As teams are split into more and more distinct operating units, grouping occurs. One IT budget gets split between all these groups. Often each group will have a manager who pitches his needs to upper management in hopes they will realize how important the team is.

The "us vs. them" mentality manifests itself at all levels, and it's reinforced by management having to define each team's worth in the form of a budget. One strategy is to illustrate a doomsday scenario. If you paint a bleak enough picture, you may get more funding. Only if you are careful enough to illustrate the failings are due to lack of capital resources, not management or people. A manager of another group may explain that they are not receiving the correct level of service, so they need to duplicate the efforts of another group and just implement something themselves. On and on, the arguments continue.

Most often, I've seen competition between server groups result in horribly inefficient uses of hardware. For example, what happens in your organization when one team needs more server hardware? Assume that another team has five unused servers sitting in a blade chassis. Does the answer change? No, it does not. Even in test environments, sharing doesn't often happen between IT groups.

With virtualization, some aspects of resource competition get better and some remain the same. When first implemented, most groups will be running their own type of virtualization for their platform. The next step, I've most often seen, is for test servers to get virtualized. If a new group is formed to manage the virtualization infrastructure, virtual machines can be allocated to various application and server teams from a central pool and everyone is now sharing. Or, they begin sharing and then demand their own physical hardware to be isolated from others' resource hungry utilization. This is nonetheless a step in the right direction. Auto migration and guaranteed resource policies can go a long way toward making shared infrastructure, even between competing groups, a viable option.

Blamestorming

The most damaging side effect of splitting into too many distinct IT groups is the reinforcement of an "us versus them" mentality. Aside from the notion that specialization creates a lack of knowledge, blamestorming is what this article is really about. When a project is delayed, it is all too easy to blame another group. The SAN people didn't allocate storage on time, so another team was delayed. That is the timeline of the project, so all work halted until that hiccup was resolved. Having someone else to blame when things get delayed makes it all too easy to simply stop working for a while.

More related to the initial points at the beginning of this article, perhaps, is the blamestorm that happens after a system outage.

Say an ERP system becomes unresponsive a few times throughout the day. The application team says it's just slowing down, and they don't know why. The network team says everything is fine. The server team says the application is "blocking on IO," which means it's a SAN issue. The SAN team says there is nothing wrong, and other applications on the same devices are fine. You've run through nearly every team, but still have no answer. The SAN people don't have access to the application servers to help diagnose the problem. The server team doesn't even know how the application runs.

See the problem? Specialized teams are distinct and by nature adversarial. Specialized staffers often relegate themselves into a niche knowing that as long as they continue working at large enough companies, "someone else" will take care of all the other pieces.

I unfortunately don't have an answer to this problem. Maybe rotating employees between departments would help. They gain knowledge and also get to know other people, which should lessen the propensity to view them as outsiders.

The tragic part of the current environment is that it is like shifting sands. And it is not only due to the "natural process of crapification of operating systems," in which the OS gradually loses its architectural integrity. The pace of change is just too fast for mere humans to adapt to. And most of it represents "change for the sake of change," not some valuable improvement or extension of capabilities.

If you are a sysadmin who writes his own scripts, you write on sand. You spend a lot of time thinking over and debugging your scripts, which raises your productivity and diminishes the number of possible errors -- but the next OS version wipes out a considerable part of your work, and you need to revise your scripts again. The tale of Sisyphus can now be re-interpreted as a prescient warning about the thankless task of the sysadmin who must learn new stuff and maintain his own script library ;-) Sometimes a lot of work is wiped out because the corporate brass decides to switch to a different flavor of Linux, or "yet another flavor" is added due to a large acquisition. Add to this inevitable technological changes, and the question arises: couldn't you find a more respectable profession, one in which 66% of your knowledge is not replaced within the next ten years?

The Balkanization of Linux is also demonstrated in the Tower of Babel of system programming languages (C, C++, Perl, Python, Ruby, Go, Java, to name a few) and in systems that supposedly should help you but mostly do quite the opposite (Puppet, Ansible, Chef, etc.). Add to this the monitoring infrastructure (say, Nagios) and you definitely have information overload.

Inadequate training just adds to the stress. First of all, corporations no longer want to pay for it. So you are on your own and need to do it mostly in your free time, as the workload is substantial in most organizations -- using free or low-cost courses if they are available, or buying your own books and trying to learn new stuff from them (which of course is the mark of any good sysadmin, but should not be the only source of new knowledge). The days when you could travel to a vendor training center for a week and communicate with other admins from different organizations (which probably was the most valuable part of the whole exercise) are long in the past. I can tell you that training by Sun (Solaris) and IBM (AIX) in the late 1990s was of really high quality, with highly qualified instructors from whom you could learn a lot outside the main topic of the course. Unlike "Trump University," Sun courses could probably have been called "Sun University." Most training now is via the Web, and chances for face-to-face communication have disappeared. Also, the stress is now on learning "how" rather than "why"; "why" topics are typically reserved for "advanced" courses.

There is also the necessity to relearn stuff again and again (often new technologies, daemons, or versions of the OS) that is either the same as, or even inferior to, what came before, or that represents an open scam in which training is a way to extract money from lemmings (Agile, most of the DevOps hoopla, etc.). This is typical neoliberal mentality ("greed is good") implemented in education. There is also a tendency to treat virtual machines and cloud infrastructure as separate technologies, which require separate training and a separate set of certifications (AWS, Azure). This is a kind of infantilization of the profession, when a person who learned a lot of stuff in the previous 10 years needs to forget it and relearn most of it again and again.

Of course, sysadmins are not the only ones who suffer. Computer scientists also now struggle with the excessive level of complexity and the too-quickly shifting sands. Look at the tragedy of Donald Knuth and his lifelong project to create a comprehensive monograph for system programmers (The Art of Computer Programming). He was flattened by the shifting sands and probably will not be able to finish even volume 4 (out of the seven that were planned) in his lifetime.

Of course, much depends on the evolution of hardware and the changes it causes, such as the mass introduction of large SSDs, multi-core CPUs, and large RAM.

Nobody is now surprised to see a server with 128GB of RAM, a laptop with 16GB of RAM, or a cellphone with 4GB of RAM and a 1GHz CPU. (Note that the IBM PC started with a 1MB address space, of which only 640KB was available for programs, and a 4.77MHz (not GHz) single-core CPU without a floating-point unit.) Such changes, while painful, are inevitable, and hardware progress has slowed down recently as it reaches the physical limits of the technology (we probably will not see 2-nanometer lithography or 8GHz CPU clock speeds in our lifetimes).

Other changes, caused by fashion and by the desire of the dominant player to entrench its position, are more difficult to accept. It is difficult or even impossible to predict which technology will become fashionable tomorrow, and how long DevOps will remain in fashion. Typically such things last around ten years. After that everything typically fades into oblivion, or is even crossed out, and former idols are shattered. This strange period of re-invention of the "glass-walls datacenter" under the banner of DevOps (old timers still remember that IBM datacenters were hated with a passion, and this hate created an additional, non-technological incentive first for minicomputers and later for the IBM PC) is characterized by a level of hype usually reserved for women's fashion. It sometimes looks to me as if the movie The Devil Wears Prada is a subtle parable on sysadmin work.

Add to this a horrible job market, especially for university graduates and older sysadmins (see Over 50 and unemployed), and one probably starts to suspect that the life of the modern sysadmin is far from paradise. When you read some job descriptions on sites like Monster, Dice, or Indeed, you ask yourself whether those people really want to hire anybody, or whether this is just a smokescreen for H-1B job certification. The level of detail is often so precise that it is almost impossible to change your current specialization. They do not care about the level of talent; they do not want to train a suitable candidate. They want a person who fits 100% from day one. Also, in places like NYC or SF, rents and property prices keep growing while income growth has been stagnant.

The vandalism of Unix performed by Red Hat with RHEL 7 makes the current environment somewhat unhealthy. It is clear that this was done at the whim of the Red Hat brass, not in the interest of the community. This is a typical Microsoft-style trick, one which made dozens of high-quality books written by very talented authors instantly semi-obsolete. And the question arises whether it makes sense to write any book about RHEL other than for a solid advance. It generated some backlash, but Red Hat's position as the Microsoft of Linux allowed it to shove its inferior technical decisions down users' throats. In a way it reminds me of the way Microsoft dealt with Windows 7, replacing it with Windows 10 and essentially destroying the previous Windows interface ecosystem (while preserving binary compatibility).

See also

Here are my notes/reflections on sysadmin problems that often arise in the rather strange (and sometimes pretty toxic) IT departments of large corporations:



NEWS CONTENTS

Old News ;-)


For the list of top articles see Recommended Links section


"I appreciate Woody Allen's humor because one of my safety valves is an appreciation for life's absurdities. His message is that life isn't a funeral march to the grave. It's a polka."

-- Dennis Kucinich

[Mar 23, 2020] Copy Specific File Types While Keeping Directory Structure In Linux by sk

I think this approach is way too complex. A simpler and more reliable approach is first to create the directory structure and then, as a second stage, to copy the files.
The use of the cp command options is interesting, though.
Notable quotes:
"... create the intermediate parent directories if needed to preserve the parent directory structure. ..."
Mar 19, 2020 | www.ostechnix.com
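
The quote above refers to GNU cp's ability to create the intermediate parent directories. A minimal sketch of both that one-pass approach and the two-stage approach suggested above (GNU cp and find assumed; the paths and file type are made up for illustration):

$ # one pass: GNU cp --parents recreates the intermediate directories
$ cd /source && find . -name "*.mp3" -exec cp --parents {} /destination \;

$ # two stages: replicate the directory tree first, then copy the files
$ cd /source && find . -type d -exec mkdir -p /destination/{} \;
$ find . -name "*.mp3" -exec cp {} /destination/{} \;

rsync can also do the same in one command: rsync -a --include="*/" --include="*.mp3" --exclude="*" /source/ /destination/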

[Mar 23, 2020] How to setup nrpe for client side monitoring - LinuxConfig.org

Mar 23, 2020 | linuxconfig.org

In this tutorial you will learn:

[Mar 12, 2020] 7 tips to speed up your Linux command line navigation Enable Sysadmin

Mar 12, 2020 | www.redhat.com

A bonus shortcut

You can use the keyboard combination Alt+. to repeat the last argument of the previous command.

Note: The shortcut is Alt+. (dot).

$ mkdir /path/to/mydir

$ cd (now press Alt+. and the line expands to: cd /path/to/mydir)

You are now in the /path/to/mydir directory.
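
Pressing Alt+. repeatedly walks back through the last arguments of earlier commands; it is the readline function yank-last-arg. A quick way to confirm the binding, assuming bash with the default readline setup:

$ bind -p | grep yank-last-arg
"\e.": yank-last-arg
"\e_": yank-last-arg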

[Mar 05, 2020] Using Ctags with MC

Mar 05, 2020 | frankhesse.wordpress.com

It turned out that the Midnight Commander's built-in editor is quite capable. Below is one of the features of mc 4.7: the use of the ctags/etags utilities together with mcedit to navigate through code.

Code Navigation

Setup

Support for this functionality appeared in mcedit starting with version 4.7.0-pre1.
To use it, you need to index the project directory with the ctags or etags utility; to do so, run the following commands:

$ cd /home/user/projects/myproj
$ find . -type f -name "*.[ch]" | etags -lc --declarations -

or
$ find . -type f -name "*.[ch]" | ctags --c-kinds=+p --fields=+iaS --extra=+q -e -L-



After the utility completes, a TAGS file will appear in the root directory of our project, which mcedit will use.
That is practically all that needs to be done for mcedit to be able to find the definitions of functions, variables, or object properties.

Using
Imagine that we need to find the place in the source code of a rather large project where the locked property of an edit object is defined.


/* Succesful, so unlock both files */
if (different_filename) {
    if (save_lock)
        edit_unlock_file (exp);
    if (edit->locked)
        edit->locked = edit_unlock_file (edit->filename);
} else {
    if (edit->locked || save_lock)
        edit->locked = edit_unlock_file (edit->filename);
}


To do this, put the cursor at the end of the word locked and press Alt+Enter; a list of possible matches appears.

After selecting the desired option, we get to the line with the definition.

[Mar 05, 2020] How to switch the editor in mc (midnight commander) from nano to mcedit?

Jan 01, 2014 | askubuntu.com




sdu ,

Using Ubuntu 10.10, the editor in mc (midnight commander) is nano. How can I switch to the internal mc editor (mcedit)?

Isaiah ,

Press the following keys in order, one at a time:
  1. F9 Activates the top menu.
  2. o Selects the Option menu.
  3. c Opens the configuration dialog.
  4. i Toggles the use internal edit option.
  5. s Saves your preferences.

Hurnst , 2014-06-21 02:34:51

Run MC as usual. On the command line right above the bottom row of menu selections type select-editor . This should open a menu with a list of all of your installed editors. This is working for me on all my current linux machines.

, 2010-12-09 18:07:18

You can also change the standard editor. Open a terminal and type this command:
sudo update-alternatives --config editor

You will get a list of the installed editors on your system, and you can choose your favorite.

AntonioK , 2015-01-27 07:06:33

If you want to leave mc and the system settings as they are now, you may just run it like this:
$ EDITOR=mcedit mc


Open Midnight Commander, go to Options -> Configuration and check "use internal editor". Hit save and you are done.

[Mar 05, 2020] How to change your hostname in Linux Enable Sysadmin

Mar 05, 2020 | www.redhat.com

How to change your hostname in Linux

What's in a name, you ask? Everything. It's how other systems, services, and users "see" your system.

Posted March 3, 2020 | by Tyler Carrigan (Red Hat)


Your hostname is a vital piece of system information that you need to keep track of as a system administrator. Hostnames are the designations by which we separate systems into easily recognizable assets. This information is especially important to make a note of when working on a remotely managed system. I have experienced multiple instances of companies changing the hostnames or IPs of storage servers and then wondering why their data replication broke. There are many ways to change your hostname in Linux; however, in this article, I'll focus on changing your name as viewed by the network (specifically in Red Hat Enterprise Linux and Fedora).

Background

A quick bit of background. Before the invention of DNS, your computer's hostname was managed through the HOSTS file located at /etc/hosts . Anytime that a new computer was connected to your local network, all other computers on the network needed to add the new machine into the /etc/hosts file in order to communicate over the network. As this method did not scale with the transition into the world wide web era, DNS was a clear way forward. With DNS configured, your systems are smart enough to translate unique IPs into hostnames and back again, ensuring that there is little confusion in web communications.

Modern Linux systems have three different types of hostnames configured. To minimize confusion, here they are, with basic information on each as well as a personal best practice:

Static - the traditional hostname, stored in /etc/hostname and applied at boot.
Pretty - a free-form UTF-8 name used for presentation to the user; it may contain spaces and punctuation.
Transient - the dynamic name maintained by the kernel; it can be changed at runtime and may be updated by DHCP or mDNS.

It is recommended to pick a pretty hostname that is unique and not easily confused with other systems. Allow the transient and static names to be variations on the pretty, and you will be good to go in most circumstances.

Working with hostnames

Now, let's look at how to view your current hostname. The most basic command used to see this information is hostname -f . This command displays the system's fully qualified domain name (FQDN). To relate back to the three types of hostnames, this is your transient hostname. A better way, at least in terms of the information provided, is to use the systemd command hostnamectl to view your transient hostname and other system information:

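A sketch of what this looks like (the hostname and output details are made up; exact fields vary by release):

$ hostname -f
rhel8.localdomain

$ hostnamectl
   Static hostname: rhel8.localdomain
         Icon name: computer-vm
           Chassis: vm
  Operating System: Red Hat Enterprise Linux 8.1 (Ootpa)
            Kernel: Linux 4.18.0-147.el8.x86_64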

Before moving on from the hostname command, I'll show you how to use it to change your transient hostname. Using hostname <x> (where x is the new hostname), you can change your network name quickly, but be careful. I once changed the hostname of a customer's server by accident while trying to view it. That was a small but painful error that I overlooked for several hours. You can see that process below:

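A sketch of that process (the name is made up; a bare hostname <x> needs root privileges and only changes the transient name, so it does not survive a reboot):

$ sudo hostname temporary-name
$ hostname -f
temporary-name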

It is also possible to use the hostnamectl command to change your hostname. This command, in conjunction with the right flags, can be used to alter all three types of hostnames. As stated previously, for the purposes of this article, our focus is on the transient hostname. The command and its output look something like this:

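A sketch using hostnamectl's set-hostname verb (with no flags it sets the static, transient, and pretty names together; use --static, --transient, or --pretty to change just one):

$ sudo hostnamectl set-hostname new-name
$ hostnamectl --transient
new-name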

The final method to look at is the sysctl command. This command allows you to change the kernel parameter for your transient name without having to reboot the system. That method looks something like this:
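A sketch of the sysctl method (again, the name is made up):

$ sudo sysctl kernel.hostname=new-name
kernel.hostname = new-name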

GNOME tip

Using GNOME, you can go to Settings -> Details to view and change the static and pretty hostnames.

Wrapping up

I hope that you found this information useful as a quick and easy way to manipulate your machine's network-visible hostname. Remember to always be careful when changing system hostnames, especially in enterprise environments, and to document changes as they are made.


[Mar 05, 2020] Micro data center

Mar 05, 2020 | en.wikipedia.org

A micro data center (MDC) is a smaller or containerized (modular) data center architecture that is designed for computer workloads not requiring traditional facilities. Whereas the size may vary from rack to container, a micro data center may include fewer than four servers in a single 19-inch rack. It may come with built-in security systems, cooling systems, and fire protection. Typically there are standalone rack-level systems containing all the components of a 'traditional' data center, [1] including in-rack cooling, power supply, power backup, security, and fire detection and suppression. Designs exist where energy is conserved by means of temperature chaining, in combination with liquid cooling. [2]

In mid-2017, technology introduced by the DOME project was demonstrated enabling 64 high-performance servers, storage, networking, power and cooling to be integrated in a 2U 19" rack-unit. This packaging, sometimes called 'datacenter-in-a-box' allows deployments in spaces where traditional data centers do not fit, such as factory floors ( IOT ) and dense city centers, especially for edge-computing and edge-analytics.

MDCs are typically portable and provide plug and play features. They can be rapidly deployed indoors or outdoors, in remote locations, for a branch office, or for temporary use in high-risk zones. [3] They enable distributed workloads , minimizing downtime and increasing speed of response.

[Mar 05, 2020] What's next for data centers Think micro data centers by Larry Dignan

Apr 14, 2019 | www.zdnet.com

A micro data center, a mini version of a data center rack, could work as edge computing takes hold in various industries. Here's a look at the moving parts behind the micro data center concept.

[Mar 05, 2020] The 3-2-1 rule for backups says there should be at least three copies or versions of data stored on two different pieces of media, one of which is off-site

Mar 05, 2020 | www.networkworld.com

As the number of places where we store data increases, the basic concept of what is referred to as the 3-2-1 rule often gets forgotten. This is a problem, because the 3-2-1 rule is easily one of the most foundational concepts for designing backup systems. It's important to understand why the rule was created, and how it's currently being interpreted in an increasingly tapeless world.

What is the 3-2-1 rule for backup?

The 3-2-1 rule says there should be at least three copies or versions of data stored on two different pieces of media, one of which is off-site. Let's take a look at each of the three elements and what it addresses.

Mind the air gap

An air gap is a way of securing a copy of data by placing it on a machine on a network that is physically separate from the data it is backing up. It literally means there is a gap of air between the primary and the backup. This air gap accomplishes more than simple disaster recovery; it is also very useful for protecting against hackers.

If all backups are accessible via the same computers that might be attacked, it is possible that a hacker could use a compromised server to attack your backup server. By separating the backup from the primary via an air gap, you make it harder for a hacker to pull that off. It's still not impossible, just harder.

Everyone wants an air gap. The discussion these days is how to accomplish an air gap without using tapes. Back in the days of tape backup, it was easy to provide an air gap. You made a backup copy of your data and put it in a box, then you handed it to an Iron Mountain driver. Instantly, there was a gap of air between your primary and your backup. It was close to impossible for a hacker to attack both the primary and the backup.

That is not to say it was impossible; it just made it harder. For hackers to attack your secondary copy, they needed to resort to a physical attack via social engineering. You might think that tapes stored in an off-site storage facility would be impervious to a physical attack via social engineering, but that is definitely not the case. (I have personally participated in white hat attacks of off-site storage facilities, successfully penetrated them and been left unattended with other people's backups.) Most hackers don't resort to physical attacks because they are just too risky, so air-gapping backups greatly reduces the risk that they will be compromised.

Faulty 3-2-1 implementations

Many things that pass for backup systems now do not pass even the most liberal interpretation of the 3-2-1 rule. A perfect example of this would be various cloud-based services that store the backups on the same servers and the same storage facility that they are protecting, ignoring the "2" and the "1" in this important rule.

[Mar 05, 2020] Cloud computing More costly, complicated and frustrating than expected by Daphne Leprince-Ringuet

Highly recommended!
Cost estimates in optimistic spreadsheets and costs in actual life for large-scale moves to the cloud are very different. Companies that jumped on the cloud bandwagon now discover that the savings are illusory and that control over the infrastructure is difficult. As well, the cloud provider now controls their future.
Notable quotes:
"... On average, businesses started planning their migration to the cloud in 2015, and kicked off the process in 2016. According to the report, one reason clearly stood out as the push factor to adopt cloud computing : 61% of businesses started the move primarily to reduce the costs of keeping data on-premises. ..."
"... Capita's head of cloud and platform Wasif Afghan told ZDNet: "There has been a sort of hype about cloud in the past few years. Those who have started migrating really focused on cost saving and rushed in without a clear strategy. Now, a high percentage of enterprises have not seen the outcomes they expected. ..."
"... The challenges "continue to spiral," noted Capita's report, and they are not going away; what's more, they come at a cost. Up to 58% of organisations said that moving to the cloud has been more expensive than initially thought. The trend is not only confined to the UK: the financial burden of moving to the cloud is a global concern. Research firm Canalys found that organisations splashed out a record $107 billion (£83 billion) for cloud computing infrastructure last year, up 37% from 2018, and that the bill is only set to increase in the next five years. Afghan also pointed to recent research by Gartner, which predicted that through 2020, 80% of organisations will overshoot their cloud infrastructure budgets because of their failure to manage cost optimisation. ..."
"... Clearly, the escalating costs of switching to the cloud is coming as a shock to some businesses - especially so because they started the move to cut costs. ..."
"... As a result, IT leaders are left feeling frustrated and underwhelmed by the promises of cloud technology ..."
Feb 27, 2020 | www.zdnet.com


A new report by Capita shows that UK businesses are growing disillusioned by their move to the cloud. It might be because they are focusing too much on the wrong goals. Migrating to the cloud seems to be on every CIO's to-do list these days. But despite the hype, almost 60% of UK businesses think that cloud has over-promised and under-delivered, according to a report commissioned by consulting company Capita.

The research surveyed 200 IT decision-makers in the UK, and found that an overwhelming nine in ten respondents admitted that cloud migration has been delayed in their organisation due to "unforeseen factors".

On average, businesses started planning their migration to the cloud in 2015, and kicked off the process in 2016. According to the report, one reason clearly stood out as the push factor to adopt cloud computing : 61% of businesses started the move primarily to reduce the costs of keeping data on-premises.

But with organisations setting aside only one year to prepare for migration, which the report described as "less than adequate planning time," it is no surprise that most companies have encountered stumbling blocks on their journey to the cloud.

Capita's head of cloud and platform Wasif Afghan told ZDNet: "There has been a sort of hype about cloud in the past few years. Those who have started migrating really focused on cost saving and rushed in without a clear strategy. Now, a high percentage of enterprises have not seen the outcomes they expected. "

Four years later, in fact, less than half (45%) of the companies' workloads and applications have successfully migrated, according to Capita. A meager 5% of respondents reported that they had not experienced any challenge in cloud migration; but their fellow IT leaders blamed security issues and the lack of internal skills as the main obstacles they have had to tackle so far.

Half of respondents said that they had to re-architect more workloads than expected to optimise them for the cloud. Afghan noted that many businesses have adopted a "lift and shift" approach, taking everything they were storing on premises and shifting it into the public cloud. "Except in some cases, you need to re-architect the application," said Afghan, "and now it's catching up with organisations."

The challenges "continue to spiral," noted Capita's report, and they are not going away; what's more, they come at a cost. Up to 58% of organisations said that moving to the cloud has been more expensive than initially thought. The trend is not only confined to the UK: the financial burden of moving to the cloud is a global concern. Research firm Canalys found that organisations splashed out a record $107 billion (£83 billion) for cloud computing infrastructure last year, up 37% from 2018, and that the bill is only set to increase in the next five years. Afghan also pointed to recent research by Gartner, which predicted that through 2020, 80% of organisations will overshoot their cloud infrastructure budgets because of their failure to manage cost optimisation.

Infrastructure, however, is not the only cost of moving to the cloud. IDC analysed the overall spending on cloud services, and predicted that investments will reach $500 billion (£388.4 billion) globally by 2023. Clearly, the escalating costs of switching to the cloud is coming as a shock to some businesses - especially so because they started the move to cut costs.

Afghan said: "From speaking to clients, it is pretty clear that cloud expense is one of their chief concerns. The main thing on their minds right now is how to control that spend." His response to them, he continued, is better planning. "If you decide to move an application in the cloud, make sure you architect it so that you get the best return on investment," he argued. "And then monitor it. The cloud is dynamic - it's not a one-off event."

Capita's research did find that IT leaders still have faith in the cloud, with the majority (86%) of respondents agreeing that the benefits of the cloud will outweigh its downsides. But on the other hand, only a third of organisations said that labour and logistical costs have decreased since migrating; and a minority (16%) said they were "extremely satisfied" with the move.

"Most organisations have not yet seen the full benefits or transformative potential of their cloud investments," noted the report.

As a result, IT leaders are left feeling frustrated and underwhelmed by the promises of cloud technology ...


[Mar 05, 2020] How to tell if you're using a bash builtin in Linux

Mar 05, 2020 | www.networkworld.com

One quick way to determine whether the command you are using is a bash built-in or not is to use the command "command". Yes, the command is called "command". Try it with a -V (capital V) option like this:

$ command -V command
command is a shell builtin
$ command -V echo
echo is a shell builtin
$ command -V date
date is hashed (/bin/date)

When you see a "command is hashed" message like the one above, that means that the command has been put into a hash table for quicker lookup.

... ... ...

How to tell what shell you're currently using

If you switch shells you can't depend on $SHELL to tell you what shell you're currently using because $SHELL is just an environment variable that is set when you log in and doesn't necessarily reflect your current shell. Try ps -p $$ instead as shown in these examples:

$ ps -p $$
  PID TTY          TIME CMD
18340 pts/0    00:00:00 bash    <==
$ /bin/dash
$ ps -p $$
  PID TTY          TIME CMD
19517 pts/0    00:00:00 dash    <==

Built-ins are extremely useful and give each shell a lot of its character. If you use some particular shell all of the time, it's easy to lose track of which commands are part of your shell and which are not.

Differentiating a shell built-in from a Linux executable requires only a little extra effort.
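
Another quick check, not mentioned in the article above but standard in bash, is the type builtin, which reports every form a name resolves to (alias, function, builtin, or file); the paths shown vary by distribution:

$ type -a echo
echo is a shell builtin
echo is /bin/echo
$ type -a date
date is /bin/date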

[Mar 05, 2020] Bash IDE - Visual Studio Marketplace

Notable quotes:
"... all your shell scripts ..."
Mar 05, 2020 | marketplace.visualstudio.com
Bash IDE

Visual Studio Code extension utilizing the bash language server, which is based on Tree-sitter and its grammar for Bash, and supports explainshell integration.

Features

Configuration

To get documentation for flags on hover (thanks to explainshell), run the explainshell Docker container :

docker run --rm --name bash-explainshell -p 5000:5000 chrismwendt/codeintel-bash-with-explainshell

And add this to your VS Code settings:

    "bashIde.explainshellEndpoint": "http://localhost:5000",

For security reasons, it defaults to "" , which disables explainshell integration. When set, this extension will send requests to the endpoint and displays documentation for flags.

Once https://github.com/idank/explainshell/pull/125 is merged, it would be possible to set this to "https://explainshell.com"; however, doing this is not recommended, as it will leak all your shell scripts to a third party -- do this at your own risk, or better, always use a locally running Docker image.

[Mar 04, 2020] A command-line HTML pretty-printer Making messy HTML readable - Stack Overflow

Jan 01, 2019 | stackoverflow.com

A command-line HTML pretty-printer: Making messy HTML readable [closed]



jonjbar ,

Have a look at the HTML Tidy Project: http://www.html-tidy.org/

The granddaddy of HTML tools, with support for modern standards.

There used to be a fork called tidy-html5 which since became the official thing. Here is its GitHub repository .

Tidy is a console application for Mac OS X, Linux, Windows, UNIX, and more. It corrects and cleans up HTML and XML documents by fixing markup errors and upgrading legacy code to modern standards.

For your needs, here is the command line to call Tidy:

tidy inputfile.html
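
For pretty-printing specifically, a few of Tidy's standard options are worth knowing; a minimal sketch, assuming the tidy-html5 flag names:

$ # indent the output, wrap at 100 columns, write to a new file
$ tidy -indent -wrap 100 -output pretty.html inputfile.html

$ # or clean the file in place, suppressing the summary chatter
$ tidy -indent -quiet -modify inputfile.html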

Paul Brit ,

Update 2018: The homebrew/dupes tap is now deprecated; tidy-html5 may be installed directly.
brew install tidy-html5

Original reply:

Tidy from OS X doesn't support HTML5 . But there is experimental branch on Github which does.

To get it:

 brew tap homebrew/dupes
 brew install tidy --HEAD
 brew untap homebrew/dupes

That's it! Have fun!

Boris , 2019-11-16 01:27:35

Error: No available formula with the name "tidy" . brew install tidy-html5 works. – Pysis Apr 4 '17 at 13:34

[Feb 29, 2020] files - How to get over device or resource busy

Jan 01, 2011 | unix.stackexchange.com

ripper234 , 2011-04-13 08:51:26

I tried to rm -rf a folder, and got "device or resource busy".

In Windows, I would have used LockHunter to resolve this. What's the linux equivalent? (Please give as answer a simple "unlock this" method, and not complete articles like this one . Although they're useful, I'm currently interested in just ASimpleMethodThatWorks™)

camh , 2011-04-13 09:22:46

The tool you want is lsof , which stands for list open files .

It has a lot of options, so check the man page, but if you want to see all open files under a directory:

lsof +D /path

That will recurse through the filesystem under /path , so beware doing it on large directory trees.

Once you know which processes have files open, you can exit those apps, or kill them with the kill(1) command.

kip2 , 2014-04-03 01:24:22

sometimes it's the result of mounting issues, so I'd unmount the filesystem or directory you're trying to remove:

umount /path

BillThor ,

I use fuser for this kind of thing. It will list which process is using a file or files within a mount.
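
fuser is terse by default; a sketch of the flags commonly used for this on Linux (the psmisc fuser; the user and PID shown are made up). The -v flag gives a verbose table and -m treats the argument as a mounted filesystem, listing every process using any file on it:

$ fuser -vm /path
                     USER        PID ACCESS COMMAND
/path:               alice      1234 ..c.. bash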

user73011 ,

Here is the solution:
  1. Go into the directory and type ls -a
  2. You will find a .xyz file
  3. vi .xyz and look at the content of the file
  4. ps -ef | grep username
  5. You will see the .xyz content in the 8th column (last row)
  6. kill -9 job_ids - where job_ids is the value of the 2nd column of the corresponding row found in step 5
  7. Now try to delete the folder or file.

Choylton B. Higginbottom ,

I had this same issue, built a one-liner starting with @camh recommendation:
lsof +D ./ | awk '{print $2}' | tail -n +2 | xargs kill -9

The awk command grabs the PIDS. The tail command gets rid of the pesky first entry: "PID". I used -9 on kill, others might have safer options.
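
A gentler variant of the same idea, for cases where kill -9 is too blunt (assuming GNU xargs for the -r no-run-if-empty flag):

$ # skip the header with awk, dedupe the PIDs, and send SIGTERM first
$ lsof +D ./ | awk 'NR>1 {print $2}' | sort -u | xargs -r kill -TERM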

user5359531 ,

I experience this frequently on servers that have NFS network file systems. I am assuming it has something to do with the filesystem, since the files are typically named like .nfs000000123089abcxyz .

My typical solution is to rename or move the parent directory of the file, then come back later in a day or two and the file will have been removed automatically, at which point I am free to delete the directory.

This typically happens in directories where I am installing or compiling software libraries.

gloriphobia , 2017-03-23 12:56:22

I had this problem when an automated test created a ramdisk. The commands suggested in the other answers, lsof and fuser , were of no help. After the tests I tried to unmount it and then delete the folder. I was really confused for ages because I couldn't get rid of it -- I kept getting "Device or resource busy" !

By accident I found out how to get rid of a ramdisk. I had to unmount it the same number of times that I had run the mount command, i.e. sudo umount path

Due to the fact that it was created using automated testing, it got mounted many times, hence why I couldn't get rid of it by simply unmounting it once after the tests. So, after I manually unmounted it lots of times it finally became a regular folder again and I could delete it.

Hopefully this can help someone else who comes across this problem!

bil , 2018-04-04 14:10:20

Riffing off of Prabhat's answer below, I had this issue on macOS High Sierra when I stranded an encfs process. Rebooting solved it, but this
ps -ef | grep name-of-busy-dir

Showed me the process and the PID (column two).

sudo kill -15 pid-here

fixed it.

Prabhat Kumar Singh , 2017-08-01 08:07:36

If you have the server accessible, try deleting that dir from the server itself.

Or, do umount and mount again; try umount -l (lazy umount) if you face any issue with a normal umount.

I too had this problem where

lsof +D path : gives no output

ps -ef : gives no relevant information

[Feb 28, 2020] linux - Convert a time span in seconds to formatted time in shell - Stack Overflow

Jan 01, 2012 | stackoverflow.com

Convert a time span in seconds to formatted time in shell


Darren , 2012-11-16 18:59:53

I have a variable $i which holds a number of seconds in a shell script, and I am trying to convert it to 24-hour HH:MM:SS. Is this possible in shell?

sampson-chen , 2012-11-16 19:17:51

Here's a fun hacky way to do exactly what you are looking for =)
date -u -d @${i} +"%T"

Explanation:

- -u prints in UTC, so the local time zone offset does not skew the result
- -d @${i} treats ${i} as seconds since the Unix epoch (a GNU date extension)
- +"%T" formats the output as HH:MM:SS

Note that this trick only works for time spans shorter than 24 hours.

glenn jackman ,

Another approach: arithmetic
i=6789
((sec=i%60, i/=60, min=i%60, hrs=i/60))
timestamp=$(printf "%d:%02d:%02d" $hrs $min $sec)
echo $timestamp

produces 1:53:09

Alan Tam , 2014-02-17 06:48:21

The -d argument applies to date from coreutils (Linux) only.

In BSD/OS X, use

date -u -r $i +%T

kossboss , 2015-01-07 13:43:36

Here are my algo/script helpers on my site: http://ram.kossboss.com/seconds-to-split-time-convert/ I used this elegant algo from here: Convert seconds to hours, minutes, seconds
convertsecs() {
 ((h=${1}/3600))
 ((m=(${1}%3600)/60))
 ((s=${1}%60))
 printf "%02d:%02d:%02d\n" $h $m $s
}
TIME1="36"
TIME2="1036"
TIME3="91925"

echo $(convertsecs $TIME1)
echo $(convertsecs $TIME2)
echo $(convertsecs $TIME3)

Example of my second to day, hour, minute, second converter:

# convert seconds to day-hour:min:sec
convertsecs2dhms() {
 ((d=${1}/(60*60*24)))
 ((h=(${1}%(60*60*24))/(60*60)))
 ((m=(${1}%(60*60))/60))
 ((s=${1}%60))
 printf "%02d-%02d:%02d:%02d\n" $d $h $m $s
 # PRETTY OUTPUT: uncomment below printf and comment out above printf if you want prettier output
 # printf "%02dd %02dh %02dm %02ds\n" $d $h $m $s
}
# setting test variables: testing some constant variables & evaluated variables
TIME1="36"
TIME2="1036"
TIME3="91925"
# one way to output results
((TIME4=$TIME3*2)) # 183850
((TIME5=$TIME3*$TIME1)) # 3309300
((TIME6=100*86400+3*3600+40*60+31)) # 8653231 s = 100 days + 3 hours + 40 min + 31 sec
# outputting results: another way to show results (via echo & command substitution with         backticks)
echo $TIME1 - `convertsecs2dhms $TIME1`
echo $TIME2 - `convertsecs2dhms $TIME2`
echo $TIME3 - `convertsecs2dhms $TIME3`
echo $TIME4 - `convertsecs2dhms $TIME4`
echo $TIME5 - `convertsecs2dhms $TIME5`
echo $TIME6 - `convertsecs2dhms $TIME6`

# OUTPUT WOULD BE LIKE THIS (If none pretty printf used): 
# 36 - 00-00:00:36
# 1036 - 00-00:17:16
# 91925 - 01-01:32:05
# 183850 - 02-03:04:10
# 3309300 - 38-07:15:00
# 8653231 - 100-03:40:31
# OUTPUT WOULD BE LIKE THIS (If pretty printf used): 
# 36 - 00d 00h 00m 36s
# 1036 - 00d 00h 17m 16s
# 91925 - 01d 01h 32m 05s
# 183850 - 02d 03h 04m 10s
# 3309300 - 38d 07h 15m 00s
# 8653231 - 100d 03h 40m 31s

Basile Starynkevitch ,

If $i represents some date in second since the Epoch, you could display it with
  date -u -d @$i +%H:%M:%S

but you seem to suppose that $i is an interval (e.g. some duration), not a date, and then I don't understand what you want.

Shilv , 2016-11-24 09:18:57

I use C shell, like this:
#! /bin/csh -f

set begDate_r = `date +%s`
set endDate_r = `date +%s`

set secs = `echo "$endDate_r - $begDate_r" | bc`
set h = `echo $secs/3600 | bc`
set m = `echo "$secs/60 - 60*$h" | bc`
set s = `echo $secs%60 | bc`

echo "Formatted Time: $h HOUR(s) - $m MIN(s) - $s SEC(s)"

Continuing @Darren's answer, just to be clear: if you want the conversion in your own time zone, don't use the -u switch, as in: date -d @$i +%T or, in some cases, date -d @"$i" +%T

[Feb 22, 2020] How To Use Rsync to Sync Local and Remote Directories on a VPS by Justin Ellingwood

Feb 22, 2020 | www.digitalocean.com

... ... ...

Useful Options for Rsync


Rsync provides many options for altering the default behavior of the utility. We have already discussed some of the more necessary flags.

If you are transferring files that have not already been compressed, like text files, you can reduce the network transfer by adding compression with the -z option:
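
A minimal sketch of the flag in context (the host and paths are made up). The -a flag preserves permissions and timestamps and recurses, -v is verbose, and -z compresses data in transit:

$ rsync -avz ~/docs/ user@remote_host:/backups/docs/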

[Feb 18, 2020] Automation Armageddon: a Legitimate Worry? reviewed the history of automation, focused on projections of gloom-and-doom by Michael Olenick

Relatively simple automation often beats more complex systems. By far.
Notable quotes:
"... My guess is we're heading for something in-between, a place where artisanal bakers use locally grown wheat, made affordable thanks to machine milling. Where small family-owned bakeries rely on automation tech to do the undifferentiated grunt-work. The robots in my future are more likely to look more like cash registers and less like Terminators. ..."
"... I gave a guest lecture to a roomful of young roboticists (largely undergrad, some first year grad engineering students) a decade ago. After discussing the economics/finance of creating and selling a burgerbot, asked about those that would be unemployed by the contraption. One student immediately snorted out, "Not my problem!" Another replied, "But what if they cannot do anything else?". Again, "Not my problem!". And that is San Josie in a nutshell. ..."
"... One counter-argument might be that while hoping for the best it might be prudent to prepare for the worst. Currently, and for a couple of decades, the efficiency gains have been left to the market to allocate. Some might argue that for the common good then the government might need to be more active. ..."
"... "Too much automation is really all about narrowing the choices in your life and making it cheaper instead of enabling a richer lifestyle." Many times the only way to automate the creation of a product is to change it to fit the machine. ..."
"... You've gotta' get out of Paris: great French bread remains awesome. I live here. I've lived here for over half a decade and know many elderly French. The bread, from the right bakeries, remains great. ..."
"... I agree with others here who distinguish between labor saving automation and labor eliminating automation, but I don't think the former per se is the problem as much as the gradual shift toward the mentality and "rightness" of mass production and globalization. ..."
"... I was exposed to that conflict, in a small way, because my father was an investment manager. He told me they were considering investing in a smallish Swiss pasta (IIRC) factory. He was frustrated with the negotiations; the owners just weren't interested in getting a lot bigger – which would be the point of the investment, from the investors' POV. ..."
"... Incidentally, this is a possible approach to a better, more sustainable economy: substitute craft for capital and resources, on as large a scale as possible. More value with less consumption. But how we get there from here is another question. ..."
"... The Ten Commandments do not apply to corporations. ..."
"... But what happens when the bread machine is connected to the internet, can't function without an active internet connection, and requires an annual subscription to use? ..."
"... Until 100 petaflops costs less than a typical human worker total automation isn't going to happen. Developments in AI software can't overcome basic hardware limits. ..."
"... When I started doing robotics, I developed a working definition of a robot as: (a.) Senses its environment; (b.) Has goals and goal-seeking logic; (c.) Has means to affect environment in order to get goal and reality (the environment) to converge. Under that definition, Amazon's Alexa and your household air conditioning and heating system both qualify as "robot". ..."
"... The addition of a computer (with a program, or even downloadable-on-the-fly programs) to a static machine, e.g. today's computer-controlled-manufacturing machines (lathes, milling, welding, plasma cutters, etc.) makes a massive change in utility. It's almost the same physically, but ever so much more flexible, useful, and more profitable to own/operate. ..."
"... And if you add massive databases, internet connectivity, the latest machine-learning, language and image processing and some nefarious intent, then you get into trouble. ..."
Oct 25, 2019 | www.nakedcapitalism.com

By Michael Olenick, a research fellow at INSEAD who writes regularly at Olen on Economics and Innowiki . Originally published at Innowiki

Part I , "Automation Armageddon: a Legitimate Worry?" reviewed the history of automation, focused on projections of gloom-and-doom.

"It smells like death," is how a friend of mine described a nearby chain grocery store. He tends to exaggerate and visiting France admittedly brings about strong feelings of passion. Anyway, the only reason we go there is for things like foil or plastic bags that aren't available at any of the smaller stores.

Before getting to why that matters – and, yes, it does matter – first a tasty digression.

I live in a French village. To the French, high-quality food is a vital component to good life.

My daughter counts eight independent bakeries on the short drive between home and school. Most are owned by a couple of people. Counting high-quality bakeries embedded in grocery stores would add a few more. Going out of our way more than a minute or two would more than double that number.

Typical Bakery: Bread is cooked at least twice daily

Despite so many, the bakeries seem to do well. In the half-decade I've been here, three new ones opened and none of the old ones closed. They all seem to be busy. Bakeries are normally owner-operated. The busiest might employ a few people, but many are mom-and-pop operations, with him baking and her selling. To remain economically viable, they rely on a dance of people and robots. Flour arrives in sacks, its high-quality grains milled by machines. People measure ingredients, with each bakery using slightly different recipes. A human-fed robot mixes and kneads the ingredients into the dough. Some kind of machine churns the lumps of dough into baguettes.

https://www.youtube.com/embed/O22jWIjcdaY?feature=oembed


Baguette Forming Machine: This would make a good animated GIF

The baker places the formed baguettes onto baking trays, then puts them in the oven. Big ovens maintain a steady temperature while timers keep track of how long the various loaves of bread have been baking. Despite the sensors, bakers make the final decision on when to pull the loaves out, with some preferring the more-baked bien cuit flavor and others a softer crust. Finally, a person uses a robot in the form of a cash register to ring up transactions and process payments, either by cash or card.

Nobody -- not the owners, workers, or customers -- thinks twice about any of this. I doubt most people realize how much automation technology is involved, or even that much of the equipment is automation tech. There would be no improvement in quality from mixing and kneading the dough by hand. There would, however, be an enormous increase in cost. The baguette-forming machines churn out exactly what a person would do by hand, only faster and at a far lower cost. We take the thermostatically controlled ovens for granted. However, for anybody who has tried to cook over wood, controlling heat via air and fuel, thermostatically controlled ovens are clearly automation technology.

Is the cash register really a robot? James Ritty, who invented it, didn't think so; he sold the patent for cheap. The person who bought the patent built it into NCR, a seminal company laying the groundwork of the modern computer revolution.

Would these bakeries be financially viable if forced to do all this by hand? Probably not. They'd be forced to produce less output at higher cost; many would likely fail. Bread would cost more, leaving less money for other purchases. Fewer jobs, less consumer spending power, and hungry bellies to boot; that doesn't sound like good public policy.

Getting back to the grocery store my friend thinks smells like death: just a few weeks ago they started using robots in a new and, to many, not especially welcome way.

As any tourist knows, most stores in France are closed on Sunday afternoons, including and especially grocery stores. That's part of French labor law: grocery stores must close Sunday afternoons. Except that the chain grocery store near me announced they are opening Sunday afternoon. How? Robots, and sleight-of-hand. Grocers may not work on Sunday afternoons but guards are allowed.

Not my store but similar.

Dimanche means Sunday. Après-midi means afternoon.

I stopped in to get a feel for how the system works. Instead of grocers, the store uses security guards and self-checkout kiosks.

When you step inside, a guard reminds you there are no grocers. Nobody restocks the shelves but, presumably for half a day, it doesn't matter. On Sunday afternoons, in place of a bored-looking person wearing a store uniform and overseeing the robo-checkout kiosks sits a bored-looking person wearing a security guard uniform doing the same. There are no human-assisted checkout lanes open but this store seldom has more than one operating anyway.

I have no idea how long the French government will allow this loophole to continue. I thought it might attract yellow vest protestors or at least a cranky store worker – maybe a few locals annoyed at an ancient tradition being buried – but there was nobody complaining. There were hardly any customers, either.

The use of robots to sidestep labor law and replace people, in one of the most labor-friendly countries in the world, produced a big yawn.

Paul Krugman and Matt Stoller argue convincingly that it's the bosses, not the robots, that crush the spirits and souls of workers. Krugman calls it "automation obsession" and Stoller points out that predictions of robo-Armageddon have existed for decades. The well over 100 examples I have of major automation tech ultimately led to more jobs, not fewer.

Jerry Yang envisions some type of forthcoming automation-induced dystopia. Zuck and the tech-bros argue for a forthcoming Star Trek style robo-utopia.

My guess is we're heading for something in-between: a place where artisanal bakers use locally grown wheat, made affordable thanks to machine milling; where small family-owned bakeries rely on automation tech to do the undifferentiated grunt-work. The robots in my future are more likely to look like cash registers than Terminators.

It's an admittedly blander vision of the future; neither utopian nor dystopian, at least not one fueled by automation tech. However, it's a vision supported by the historic adoption of automation technology.


The Rev Kev , October 25, 2019 at 10:46 am

I have no real disagreement with a lot of automation. But how it is done is another matter altogether. Using the main example in this article: Australia is probably like a lot of countries with bread, in that most of the loaves you get in a supermarket are bland, come in plastic bags, and are cheap. You only really know what you grow up with.

When I first went to Germany I stepped into a Bäckerei and it was a revelation. There were dozens of different sorts and types of bread on display, with flavours that I had never experienced. I didn't know whether to order a loaf or to go for my camera instead. And that is the point. Too much automation is really all about narrowing the choices in your life and making it cheaper instead of enabling a richer lifestyle.

We are all familiar with crapification and I contend that it is automation that enables this to become a thing.

WobblyTelomeres , October 25, 2019 at 11:08 am

"I contend that it is automation that enables this to become a thing."

As does electricity. And math. Automation doesn't necessarily narrow choices; economies of scale and the profit motive do. What I find annoying (as in pollyannish) is the avoidance of the issue of those that cannot operate the machinery, those that cannot open their own store, etc.

I gave a guest lecture to a roomful of young roboticists (largely undergrad, some first-year grad engineering students) a decade ago. After discussing the economics/finance of creating and selling a burgerbot, I asked about those who would be unemployed by the contraption. One student immediately snorted out, "Not my problem!" Another replied, "But what if they cannot do anything else?". Again, "Not my problem!". And that is San Jose in a nutshell.

washparkhorn , October 26, 2019 at 3:25 am

A capitalist market that fails to account for the cost of a product's negative externalities is underpricing (and incentivizing more of the same). It's cheating (or sanctioned cheating due to ignorance and corruption). It is not capitalism (unless that is the only reasonable outcome of capitalism).

Tom Pfotzer , October 25, 2019 at 11:33 am

The author's vision of "appropriate tech" local enterprise supported by relatively simple automation is also my answer to the vexing question of "how do I cope with automation?"

In a recent posting here at NC, I said the way to cope with automation of your job(s) is to get good at automation. My remark caused a howl of outrage: "most people can't do automation! Your solution is unrealistic for the masses. Dismissed with prejudice!".

Thank you for that outrage, as it provides a wonderful foil for this article. The article shows a small business which learned to re-design business processes and acquire machines that reduce costs. It's a good example of someone who "got good at automation". Instead of being the victims of automation, these people adapted. They bought automation, took control of it, and operated it for their own benefit.

Key point: this entrepreneur is now harvesting the benefits of automation, rather than being systematically marginalized by it. Another noteworthy aspect of this article is that local-scale "appropriate" automation serves to reduce the scale advantages of the big players. The scarcity of small-scale machines that enable efficiencies comparable to the big guys is a huge problem. Most of the machines made for small-scale operators like this are manufactured in China, India, Iran, Russia, or Italy, where industrial consolidation (scale) hasn't squashed the little players yet.

Suppose you're a grain farmer, but only have 50 acres (not 100s or 1000s like the big guys). You need a combine – that's a big machine that cuts the grain stalks and separates grain from stalk (threshing). This cut/thresh function is terribly labor-intensive, so the combine is a must-have. Right now, there is no small-size ($50K or less) combine manufactured in the U.S., to my knowledge. They cost upwards of $200K, and sometimes a great deal more. The 50-acre farmer can't afford $200K (plus maintenance costs), and therefore can't farm at that scale, and has to sell out.

So, the design, production, and sales of these sorts of small-scale, high-productivity machines is what is needed to re-distribute production (organically, not by revolution, thanks) back into the hands of the middle class.

If we make it possible for the middle class to capture the benefits of automation, we 1) solve the social dilemmas of concentration of wealth, 2) address the declining standard of living of the middle and lower classes, and 3) have a chance to re-design an economy (business processes and collaborating suppliers delivering end-user products/services) that actually fixes the planet as we make our living, instead of degrading it at every ka-ching of the cash register.

Point 3 is the most important, and this isn't the time or place to expand on that, but I hope others might consider it a bit.

marcel , October 25, 2019 at 12:07 pm

Regarding the combine: I have seen them operating on small-sized lands for the last 50 years. Without exception, you have one guy (sometimes a farmer, often not) who owns this kind of harvester, works 24h a day for a week or so, harvesting for all the farmers in the neighborhood, and then moves on to the next crop (e.g., corn). Wintertime is used for maintenance. So that one person/farm/company specializes in these services, and everybody gets along well.

Tom Pfotzer , October 25, 2019 at 2:49 pm

Marcel – great solution to the problem. Choosing the right supplier (using a combine service instead of buying a dedicated combine) is a great skill to develop. On the flip side, the fellow who provides that combine service probably makes a decent side income from it. Choosing the right service to provide is another good skill to develop.

Jesper , October 25, 2019 at 5:59 pm

One counter-argument might be that while hoping for the best it might be prudent to prepare for the worst. Currently, and for a couple of decades, the efficiency gains have been left to the market to allocate. Some might argue that for the common good then the government might need to be more active.

What would happen if efficiency gains continued to be distributed according to the market? According to the relative bargaining power of the market participants, where one side – the public good, as represented by government – is asking for, and therefore getting, almost nothing?

As is, I do believe that people who are concerned do have reason to be concerned.

Kent , October 25, 2019 at 11:33 am

"Too much automation is really all about narrowing the choices in your life and making it cheaper instead of enabling a richer lifestyle." Many times the only way to automate the creation of a product is to change it to fit the machine.

Brooklin Bridge , October 25, 2019 at 12:02 pm

Some people make a living saying these sorts of things about automation. The quality of French bread is simply not what it used to be (or at least it is harder to find), though that is a complicated subject having to do with flour and wheat as well as human preparation and many other things. The cost (in terms of purchasing power), in my opinion, has gone up, not down, since the '70s.

As some might say, "It's complicated," but automation does (not sure about "has to") come with trade-offs in quality, while price remains closer to what an ever more sophisticated set of algorithms says can be "gotten away with."

This may be totally different for cars or other things, but the author chose French bread, and the only overall improvement, or even non-change, in quality there has come, if at all, from the dark art of marketing magicians.

Brooklin Bridge , October 25, 2019 at 12:11 pm

/ from the dark art of marketing magicians, AND people's innate ability to accept/be unaware of decreases in quality/quantity if they are implemented over time in small enough steps.

Michael , October 25, 2019 at 1:47 pm

You've gotta' get out of Paris: great French bread remains awesome. I live here. I've lived here for over half a decade and know many elderly French. The bread, from the right bakeries, remains great. But you're unlikely to find it where tourists might wander: the rent is too high.

As a general rule, if the bakers have a large staff or speak English you're probably in the wrong bakery. Except for one of my favorites where she learned her English watching every episode of Friends multiple times and likes to practice with me, though that's more of a fluke.

Brooklin Bridge , October 25, 2019 at 3:11 pm

It's a difficult subject to argue. I suspect that comparatively speaking, French bread remains good and there are still bakers who make high quality bread (given what they have to work with). My experience when talking to family in France (not Paris) is that indeed, they are in general quite happy with the quality of bread and each seems to know a bakery where they can still get that "je ne sais quoi" that makes it so special.

I, on the other hand, who have only been there once every few years since the '70s – kind of like seeing only every so many frames of the movie – see a lowering of quality in general in France, and of flour and bread in particular, though I'll grant it's quite gradual.

The French love food and were among the best farmers in the world in the 1930s, and have made a point of resisting radical change at any given point in time when it comes to the things they love (wine, cheese, bread, etc.), so they have a long way to fall, and are doing so slowly; but gradually, it's happening.

I agree with others here who distinguish between labor saving automation and labor eliminating automation, but I don't think the former per se is the problem as much as the gradual shift toward the mentality and "rightness" of mass production and globalization.

Oregoncharles , October 26, 2019 at 12:58 am

I was exposed to that conflict, in a small way, because my father was an investment manager. He told me they were considering investing in a smallish Swiss pasta (IIRC) factory. He was frustrated with the negotiations; the owners just weren't interested in getting a lot bigger – which would be the point of the investment, from the investors' POV.

I thought, but I don't think I said very articulately, that of course, they thought of themselves as craftspeople – making people's food, after all. It was a fundamental culture clash. All that was 50 years ago; looks like the European attitude has been receding.

Incidentally, this is a possible approach to a better, more sustainable economy: substitute craft for capital and resources, on as large a scale as possible. More value with less consumption. But how we get there from here is another question.

Carolinian , October 25, 2019 at 12:42 pm

I have been touring around by car and was surprised to see that all Oregon gas stations are full-serve, with no self-serve allowed (I vaguely remember Oregon Charles talking about this). It applies to every station, including the ones with a couple of dozen pumps like we see back east. I have since been told that this system has been in place for years.

It's hard to see how this is more efficient; in fact it seems just the opposite, as there are fewer attendants than waiting customers, and at a couple of stations the action seemed chaotic. Gas is also more expensive, although nothing could be more expensive than California gas (over $5/gal occasionally spotted). It's also unclear how this system was preserved (perhaps out of fire safety concerns), but it seems unlikely that any other state will want to imitate it, just as those bakeries aren't going to bring back their wood-fired ovens.

JohnnyGL , October 25, 2019 at 1:40 pm

I think NJ still requires all gas stations to be full-serve. Most in MA have only self-serve, but there are a few towns that have by-laws requiring full-serve.

Brooklin Bridge , October 25, 2019 at 2:16 pm

I'm not sure just how much I should be jumping up and down about our ability to get more gasoline into our cars quicker. But convenient, for sure.

The Observer , October 25, 2019 at 4:33 pm

In the 1980s, when self-serve gas started being implemented, NIOSH scientists said: oh no, now 'everyone' will be increasingly exposed to benzene while filling up. Benzene is comparable to various radioactive elements in its capacity to cause damage and cancer.

Oregoncharles , October 26, 2019 at 1:06 am

It was preserved by a series of referenda; turns out it's a 3rd rail here, like the sales tax. The motive was explicitly to preserve entry-level jobs while allowing drivers to keep the gas off their hands. And we like the more personal quality.

Also, we go to states that allow self-serve and observe that the gas isn't any cheaper. It's mainly the tax that sets the price, and location.

There are several bakeries in this area with wood-fired ovens. They charge a premium, of course. One we love is way out in the country, in Falls City. It's a reason to go there.

shinola , October 25, 2019 at 12:47 pm

Unless I misunderstood, the author of this article seems to equate mechanization/automation of nearly any type with robotics.

"Is the cash register really a robot? James Ritty, who invented it, didn't think so;" – Nor do I.

To me, "robot" implies a machine with a high degree of autonomy. Would the author consider an old fashioned manual typewriter or adding machine (remember those?) to be robotic? How about when those machines became electrified?

I think the author uses the term "robot" too broadly.

Dan , October 25, 2019 at 1:05 pm

Agree. Those are just electrified extensions of the lever or sand timer. It's the "thinking" that is A.I.

Refuse to allow A.I. to destroy jobs and cheapen our standard of living. Never interact with a robocall; just hang up. Never log into a website when there is a human alternative. Refuse to do business with companies that have no human alternative. Never join a medical "portal" of any kind; demand to talk to medical personnel. Etc.

Sabotage A.I. whenever possible. The Ten Commandments do not apply to corporations.

https://medium.com/@TerranceT/im-never-going-to-stop-stealing-from-the-self-checkout-22cbfff9919b

Sancho Panza , October 25, 2019 at 1:52 pm

During a Chicago hotel stay my wife ordered an extra bath towel from the front desk. About 5 minutes later, a mini version of R2D2 rolled up to her door with towel in tow. It was really cute and interacted with her in a human-like way. Cute but really scary in the way that you indicate in your comment.

It seems many low-wage activities would be at immediate risk of replacement. But sabotage? I would never encourage sabotage; in fact, when it comes to true robots like this one, I would highly discourage any of the following: yanking its recharge cord in the middle of the night, zapping it with a car battery, lifting its payload and replacing it with something else, giving it a hip high-five to help it calibrate its balance, and of course, the good old kick'm in the bolts.

Sancho Panza , October 26, 2019 at 9:53 am

Here's a clip of that robot, Leo, bringing bottled water and a bath towel to my wife.
https://www.youtube.com/watch?v=TXygNznHSs0

Barbara , October 26, 2019 at 11:48 am

Stop and Shop supermarket chain now has robots in the store. According to Stop and Shop they are oh so innocent! and friendly! why don't you just go up and say hello?

All the robots do, they say, is go around scanning the shelves, looking for shelf price tags that don't match the current price and merchandise in the wrong place (that cereal box you picked up in the breakfast aisle and decided, in the laundry aisle, that you didn't want, and put on a shelf with detergent). All the robots do is notify management of wrong prices and misplaced merchandise.

The damn robot is cute, with perky lit-up eyes and a smile – so why does it remind me of the Stepford Wives?

S&S is the closest supermarket near me, so I go there when I need something in a hurry, but the bulk of my shopping is now done elsewhere. Thank goodness there are some stores that are not doing this: the area ShopRites and FoodTowns don't – and they are all run by family businesses. ShopRite succeeds by having a large assortment of brands in every grocery category and keeping prices really competitive. FoodTown operates at a higher price and quality level, with real butcher and seafood counters, prepackaged assortments in open cases, and a cooked-food counter of the most excellent quality, with the store's cooks behind the counter to serve you and answer questions. You never have to come home from work tired and hungry, knowing that you just don't want to cook, and settle for a power bar.

Carolinian , October 25, 2019 at 1:11 pm

A robot is a machine -- especially one programmable by a computer -- capable of carrying out a complex series of actions automatically. Robots can be guided by an external control device or the control may be embedded

https://en.wikipedia.org/wiki/Robot

Those early cash registers were perhaps an early form of analog computer. But Wikipedia reminds us that the origin of the term is a work of fiction.

The term comes from a Czech word, robota, meaning "forced labor"; the word 'robot' was first used to denote a fictional humanoid in a 1920 play R.U.R. (Rossumovi Univerzální Roboti – Rossum's Universal Robots) by the Czech writer Karel Čapek.

shinola , October 25, 2019 at 4:26 pm

Perhaps I didn't qualify "autonomous" properly. I didn't mean to imply a 'Rosie the Robot' level of autonomy but the ability of a machine to perform its programmed task without human intervention (other than switching on/off or maintenance & adjustments).

If viewed this way, an adding machine or a typewriter is not a robot, because it requires constant manual input in order to function – if you don't push the keys, nothing happens. A computer printer might be considered robotic because it can be programmed to function somewhat autonomously (as in: print 'x' number of copies of this document).

"Robotics" is a subset of mechanized/automated functions.

Stephen Gardner , October 25, 2019 at 4:48 pm

When I first got out of grad school I worked at United Technologies Research Center, in the robotics lab. In general, at least in those days, we made a distinction between robotics and hard automation. A robot is programmable to do multiple tasks, while hard automation is limited to a single task unless retooled. The machines the author is talking about are hard automation. We had ASEA robots that could be programmed to do various things. One of ours drilled, riveted, and sealed the skin on the horizontal stabilators (the wing on the tail of a helicopter that controls pitch) of a Sikorsky Sea Hawk.

The same robot, with just a change of the fixture on the end, could be programmed to paint a car or weld a seam on equipment. The drilling and riveting robot was capable of modifying where the rivets were placed (in the robot's frame of reference) based on the location of precisely milled blocks built into the fixture that held the stabilator.

There was always some variation, and it was important to place the rivets precisely because the spars were very narrow (weight at the tail is bad because of the lever arm). It was considered state of the art back in the day, but now auto companies have far more sophisticated robotics.

Socal Rhino , October 25, 2019 at 1:44 pm

But what happens when the bread machine is connected to the internet, can't function without an active internet connection, and requires an annual subscription to use?

That is the issue to me: however we define the tools, who will own them?

The Rev Kev , October 25, 2019 at 6:53 pm

You know, that is quite a good point. It is not so much the automation that is the threat as the rent-seeking that anything connected to the internet allows to be implemented.

*_* , October 25, 2019 at 2:28 pm

Until 100 petaflops costs less than a typical human worker total automation isn't going to happen. Developments in AI software can't overcome basic hardware limits.

breadbaker , October 25, 2019 at 2:29 pm

The story about automation not worsening the quality of bread is not exactly true. Bakers had to develop and incorporate a new method called autolyse ( https://www.kingarthurflour.com/blog/2017/09/29/using-the-autolyse-method ) in the mid-20th century to bring back some of the flavor lost with modern baking. There is also a trend of a new generation of bakeries that use natural yeast, hand shaping, and kneading to get better flavors and quality bread.

But it is certainly true that much of the automation gives almost as good quality for much lower labor costs.

Tom Pfotzer , October 25, 2019 at 3:05 pm

On the subject of the machine-robot continuum

When I started doing robotics, I developed a working definition of a robot as: (a.) Senses its environment; (b.) Has goals and goal-seeking logic; (c.) Has means to affect environment in order to get goal and reality (the environment) to converge. Under that definition, Amazon's Alexa and your household air conditioning and heating system both qualify as "robot".

How you implement a, b, and c above can have more or less sophistication, depending upon the complexity, variability, etc. of the environment, or the solutions, or the means used to affect the environment.

A machine, like a typewriter, or a lawn-mower engine has the logic expressed in metal; it's static.

The addition of a computer (with a program, or even downloadable-on-the-fly programs) to a static machine, e.g. today's computer-controlled-manufacturing machines (lathes, milling, welding, plasma cutters, etc.) makes a massive change in utility. It's almost the same physically, but ever so much more flexible, useful, and more profitable to own/operate.

And if you add massive databases, internet connectivity, the latest machine-learning, language and image processing and some nefarious intent, then you get into trouble.

:)

Phacops , October 25, 2019 at 3:08 pm

Sometimes automation is necessary to eliminate the risks of manual processes. There are parenteral (injectable) drugs that cannot be sterilized except by filtration. Most of the work of filling, post-filling processing, and sealing is done using automation, in areas that make surgical suites seem filthy, and people are kept away from these operations.

Manual operations are only undertaken to correct issues with the automation, and the procedures are tested to ensure that they do not introduce contamination, microbial or otherwise. Because even one non-sterile unit is a failure and testing is a destructive process, a full lot of product cannot simply be tested to establish that all units are sterile. Instead, the automated process and the manual interventions are tested periodically, and it is expensive and time-consuming to test to a level of confidence that there is far less than a one-in-a-million chance of any unit in a lot being non-sterile.

In that respect, automation and the skills necessary to interface with it are fundamental to the safety of drugs frequently used on already compromised patients.

Brooklin Bridge , October 25, 2019 at 3:27 pm

Agree. Good example. Digital technology and miniaturization seem particularly well suited to many aspects of the medical world. But I doubt they will eliminate the doctor or the nurse very soon. Insurance companies, on the other hand ...

lyman alpha blob , October 25, 2019 at 8:34 pm

Bill Burr has some thoughts on self checkouts and the potential bonanza for shoppers – https://www.youtube.com/watch?v=FxINJzqzn4w

TG , October 26, 2019 at 11:51 am

"There would be no improvement in quality mixing and kneading the dough by hand. There would, however, be an enormous increase in cost." WRONG! If you had an unlimited supply of 50-cents-an-hour disposable labor, mixing and kneading the dough by hand would be cheaper. It is only because labor is expensive in France that the machine saves money.

In Japan there is a lot of automation, and wages and living standards are high. In Bangladesh there is very little automation, and wages and living standards are very low.

Are we done with the 'automation is destroying jobs' meme yet? Excessive population growth is the problem, not robots. And the root cause of excessive population growth is the corporate-sponsored virtual taboo of talking about it seriously.

[Feb 18, 2020] Articles on Linux by Ken Hess

Jul 13, 2019 | www.linuxtoday.com

[Feb 18, 2020] Setup Local Yum Repository On CentOS 7

Aug 27, 2014 | www.unixmen.com

This tutorial describes how to set up a local Yum repository on a CentOS 7 system. The same steps should work on RHEL and Scientific Linux 7 systems, too.

If you often have to install software, security updates, and fixes on multiple systems in your local network, having a local repository is an efficient approach. All required packages are downloaded over the fast LAN connection from your local server, which saves your Internet bandwidth and reduces your annual Internet cost.

In this tutorial, I use two systems as described below:

Yum Server OS         : CentOS 7 (Minimal Install)
Yum Server IP Address : 192.168.1.101
Client OS             : CentOS 7 (Minimal Install)
Client IP Address     : 192.168.1.102
Prerequisites

First, mount your CentOS 7 installation DVD. For example, let us mount the installation media on /mnt directory.

mount /dev/cdrom /mnt/

Now the CentOS installation DVD is mounted under the /mnt directory. Next, install the vsftpd package to make the packages available over FTP to your local clients.

To do that change to /mnt/Packages directory:

cd /mnt/Packages/

Now install vsftpd package:

rpm -ivh vsftpd-3.0.2-9.el7.x86_64.rpm

Enable and start the vsftpd service:

systemctl enable vsftpd
systemctl start vsftpd
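
Optionally, verify that the service came up before continuing:

systemctl status vsftpd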

We need a package called "createrepo" to create our local repository. So let us install it too.

If you did a minimal CentOS installation, then you might need to install the following dependencies first:

rpm -ivh libxml2-python-2.9.1-5.el7.x86_64.rpm 
rpm -ivh deltarpm-3.6-3.el7.x86_64.rpm 
rpm -ivh python-deltarpm-3.6-3.el7.x86_64.rpm

Now install "createrepo" package:

rpm -ivh createrepo-0.9.9-23.el7.noarch.rpm
Build Local Repository

It's time to build our local repository. Create a storage directory to hold all the packages from the CentOS DVDs.

As I noted above, we are going to use an FTP server to serve all packages to client systems. So let us create a storage location in our FTP server's pub directory.

mkdir /var/ftp/pub/localrepo

Now, copy all the files from the CentOS DVD(s), i.e. from the /mnt/Packages/ directory, to the "localrepo" directory:

cp -ar /mnt/Packages/*.* /var/ftp/pub/localrepo/

Again, mount CentOS installation DVD 2 and copy all the files to the /var/ftp/pub/localrepo directory.

Once you have copied all the files, create a repository file called "localrepo.repo" under the /etc/yum.repos.d/ directory and add the following lines to it. You can name this file as per your liking:

vi /etc/yum.repos.d/localrepo.repo

Add the following lines:

[localrepo]
name=Unixmen Repository
baseurl=file:///var/ftp/pub/localrepo
gpgcheck=0
enabled=1

Note: Use three slashes (///) in the baseurl.

Now, start building local repository:

createrepo -v /var/ftp/pub/localrepo/

Now the repository building process will start.

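If you later add or remove packages in the localrepo directory, refresh the repository metadata; the --update flag reuses the existing metadata and is much faster than a full rebuild:

createrepo --update /var/ftp/pub/localrepo/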

Now, list out the repositories using the following command:

yum repolist

Sample Output:

repo id                                                                    repo name                                                                     status
base/7/x86_64                                                              CentOS-7 - Base                                                               8,465
extras/7/x86_64                                                            CentOS-7 - Extras                                                                30
localrepo                                                                  Unixmen Repository                                                            3,538
updates/7/x86_64                                                           CentOS-7 - Updates                                                              726

Clean the Yum cache and update the repository lists:

yum clean all
yum update

After creating the repository, disable or rename the existing repositories if you only want to install packages from the local repository itself.
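
One way to disable repositories without editing files by hand is yum-config-manager from the yum-utils package (assuming it is installed; the repo ids below match the repolist output shown earlier):

yum install yum-utils
yum-config-manager --disable base extras updates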

Alternatively, you can install packages only from the local repository by mentioning the repository as shown below.

yum install --disablerepo="*" --enablerepo="localrepo" httpd

Sample Output:

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package httpd.x86_64 0:2.4.6-17.el7.centos.1 will be installed
--> Processing Dependency: httpd-tools = 2.4.6-17.el7.centos.1 for package: httpd-2.4.6-17.el7.centos.1.x86_64
--> Processing Dependency: /etc/mime.types for package: httpd-2.4.6-17.el7.centos.1.x86_64
--> Processing Dependency: libaprutil-1.so.0()(64bit) for package: httpd-2.4.6-17.el7.centos.1.x86_64
--> Processing Dependency: libapr-1.so.0()(64bit) for package: httpd-2.4.6-17.el7.centos.1.x86_64
--> Running transaction check
---> Package apr.x86_64 0:1.4.8-3.el7 will be installed
---> Package apr-util.x86_64 0:1.5.2-6.el7 will be installed
---> Package httpd-tools.x86_64 0:2.4.6-17.el7.centos.1 will be installed
---> Package mailcap.noarch 0:2.1.41-2.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

===============================================================================================================================================================
 Package                              Arch                            Version                                         Repository                          Size
===============================================================================================================================================================
Installing:
 httpd                                x86_64                          2.4.6-17.el7.centos.1                           localrepo                          2.7 M
Installing for dependencies:
 apr                                  x86_64                          1.4.8-3.el7                                     localrepo                          103 k
 apr-util                             x86_64                          1.5.2-6.el7                                     localrepo                           92 k
 httpd-tools                          x86_64                          2.4.6-17.el7.centos.1                           localrepo                           77 k
 mailcap                              noarch                          2.1.41-2.el7                                    localrepo                           31 k

Transaction Summary
===============================================================================================================================================================
Install  1 Package (+4 Dependent packages)

Total download size: 3.0 M
Installed size: 10 M
Is this ok [y/d/N]:

Disable Firewall And SELinux:

As we are going to use the local repository only in our local area network, there is no need for the firewall and SELinux. So, to reduce complexity, I disabled both Firewalld and SELinux.

To disable Firewalld, enter the following commands:

systemctl stop firewalld
systemctl disable firewalld

To disable SELinux, edit the file /etc/sysconfig/selinux:

vi /etc/sysconfig/selinux

Set SELINUX=disabled.

[...]
SELINUX=disabled
[...]

Reboot your server for the changes to take effect.
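
If you don't want to wait for the reboot, you can also put SELinux into permissive mode for the current session (the config file change above still controls the state after reboot):

setenforce 0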

Client Side Configuration

Now, go to your client systems. Create a new repository file as shown above under /etc/yum.repos.d/ directory.

vi /etc/yum.repos.d/localrepo.repo

and add the following contents:

[localrepo]
name=Unixmen Repository
baseurl=ftp://192.168.1.101/pub/localrepo
gpgcheck=0
enabled=1

Note: Use double slashes (//) in the baseurl; 192.168.1.101 is the Yum server's IP address.

Now, list out the repositories using the following command:

yum repolist

Clean the Yum cache and update the repository lists:

yum clean all
yum update

Disable or rename the existing repositories if you only want to install packages from the server's local repository.

Alternatively, you can install packages from the local repository by mentioning the repository as shown below.

yum install --disablerepo="*" --enablerepo="localrepo" httpd

Sample Output:

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package httpd.x86_64 0:2.4.6-17.el7.centos.1 will be installed
--> Processing Dependency: httpd-tools = 2.4.6-17.el7.centos.1 for package: httpd-2.4.6-17.el7.centos.1.x86_64
--> Processing Dependency: /etc/mime.types for package: httpd-2.4.6-17.el7.centos.1.x86_64
--> Processing Dependency: libaprutil-1.so.0()(64bit) for package: httpd-2.4.6-17.el7.centos.1.x86_64
--> Processing Dependency: libapr-1.so.0()(64bit) for package: httpd-2.4.6-17.el7.centos.1.x86_64
--> Running transaction check
---> Package apr.x86_64 0:1.4.8-3.el7 will be installed
---> Package apr-util.x86_64 0:1.5.2-6.el7 will be installed
---> Package httpd-tools.x86_64 0:2.4.6-17.el7.centos.1 will be installed
---> Package mailcap.noarch 0:2.1.41-2.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package          Arch        Version                      Repository      Size
================================================================================
Installing:
 httpd            x86_64      2.4.6-17.el7.centos.1        localrepo      2.7 M
Installing for dependencies:
 apr              x86_64      1.4.8-3.el7                  localrepo      103 k
 apr-util         x86_64      1.5.2-6.el7                  localrepo       92 k
 httpd-tools      x86_64      2.4.6-17.el7.centos.1        localrepo       77 k
 mailcap          noarch      2.1.41-2.el7                 localrepo       31 k

Transaction Summary
================================================================================
Install  1 Package (+4 Dependent packages)

Total download size: 3.0 M
Installed size: 10 M
Is this ok [y/d/N]: y
Downloading packages:
(1/5): apr-1.4.8-3.el7.x86_64.rpm                          | 103 kB   00:01     
(2/5): apr-util-1.5.2-6.el7.x86_64.rpm                     |  92 kB   00:01     
(3/5): httpd-tools-2.4.6-17.el7.centos.1.x86_64.rpm        |  77 kB   00:00     
(4/5): httpd-2.4.6-17.el7.centos.1.x86_64.rpm              | 2.7 MB   00:00     
(5/5): mailcap-2.1.41-2.el7.noarch.rpm                     |  31 kB   00:01     
--------------------------------------------------------------------------------
Total                                              1.0 MB/s | 3.0 MB  00:02     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : apr-1.4.8-3.el7.x86_64                                       1/5 
  Installing : apr-util-1.5.2-6.el7.x86_64                                  2/5 
  Installing : httpd-tools-2.4.6-17.el7.centos.1.x86_64                     3/5 
  Installing : mailcap-2.1.41-2.el7.noarch                                  4/5 
  Installing : httpd-2.4.6-17.el7.centos.1.x86_64                           5/5 
  Verifying  : mailcap-2.1.41-2.el7.noarch                                  1/5 
  Verifying  : httpd-2.4.6-17.el7.centos.1.x86_64                           2/5 
  Verifying  : apr-util-1.5.2-6.el7.x86_64                                  3/5 
  Verifying  : apr-1.4.8-3.el7.x86_64                                       4/5 
  Verifying  : httpd-tools-2.4.6-17.el7.centos.1.x86_64                     5/5 

Installed:
  httpd.x86_64 0:2.4.6-17.el7.centos.1                                          

Dependency Installed:
  apr.x86_64 0:1.4.8-3.el7                      apr-util.x86_64 0:1.5.2-6.el7   
  httpd-tools.x86_64 0:2.4.6-17.el7.centos.1    mailcap.noarch 0:2.1.41-2.el7   

Complete!

That's it. Now you will be able to install software from your local repository server.

Cheers!

[Feb 16, 2020] Recover deleted files in Debian with TestDisk

Images deleted; see the original link for details
Feb 16, 2020 | vitux.com

... ... ...

You can verify if the utility is indeed installed on your system and also check its version number by using the following command:

$ testdisk --version

Or,

$ testdisk -v
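
If the command reports that testdisk is not found, it is available in the standard Debian repositories:

$ sudo apt-get install testdisk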

[Image: Check TestDisk version]

Step 2: Run TestDisk and create a new testdisk.log file

Use the following command in order to run the testdisk command line utility:

$ sudo testdisk

The output will give you a description of the utility. It will also let you create a testdisk.log file. This file will later include useful information about how and where your lost file was found, listed and resumed.

[Image: Using TestDisk]

The above output gives you three options about what to do with this file:

Create: (recommended)- This option lets you create a new log file.

Append: This option lets you append new information to already listed information in this file from any previous session.

No Log: Choose this option if you do not want to record anything about the session for later use.

Important: TestDisk is a pretty intelligent tool. It knows that many beginners will be using the utility to recover lost files. Therefore, it predicts and suggests the option you should ideally select on a particular screen. You can see the suggested option in highlighted form. Select an option with the up and down arrow keys and then press Enter to make your choice.

In the above output, I would opt for creating a new log file. The system might ask you for the sudo password at this point.

Step 3: Select your recovery drive

The utility will now display a list of drives attached to your system. In my case, it is showing my hard drive as it is the only storage device on my system.

[Image: Choose recovery drive]

Select Proceed using the right and left arrow keys and hit Enter. As mentioned in the note in the above screenshot, the correct disk capacity must be detected for a successful file recovery to be performed.

Step 4: Select Partition Table Type of your Selected Drive

Now that you have selected a drive, you need to specify its partition table type on the following screen:

[Image: Choose partition table]

The utility will automatically highlight the correct choice. Press Enter to continue.

If you are sure that testdisk's suggestion is incorrect, you can make the correct choice from the list and then hit Enter.

Step 5: Select the 'Advanced' option for file recovery

When you have specified the correct drive and its partition type, the following screen will appear:

[Image: Advanced file recovery options]

Recovering lost files is only one of the features of testdisk; the utility offers much more than that. Through the options displayed in the above screenshot, you can select any of those features. But here we are interested only in recovering our accidentally deleted file. For this, select the Advanced option and hit Enter.

In this utility, if you reach a point you did not intend to, you can go back by using the q key.

Step 6: Select the drive partition where you lost the file

If your selected drive has multiple partitions, the following screen lets you choose the relevant one from them.

[Image: Choose partition from where the file shall be recovered]

I lost my file while using Debian Linux, so I select the Linux partition. Make your choice and then choose the List option from the options shown at the bottom of the screen.

This will list all the directories on your partition.

Step 7: Browse to the directory from where you lost the file

When the testdisk utility displays all the directories of your operating system, browse to the directory from which you deleted/lost the file. I remember that I lost the file from the Downloads folder in my home directory. So I will browse to home:

[Image: Select directory]

My username (sana):

[Image: Choose user folder]

And then the Downloads folder:

[Image: Choose downloads]

Tip: You can use the left arrow to go back to the previous directory.

When you have reached your required directory, you will see the deleted files in colored or highlighted form.

And, here I see my lost file "accidently_removed.docx" in the list. Of course, I intentionally named it this as I had to illustrate the whole process to you.

[Image: Highlighted files]

Step 8: Copy the deleted file to be restored

By now, you must have found your lost file in the list. Use the C option to copy the selected file. This file will be restored to the location you specify in the next step:

Step 9: Specify the location where the found file will be restored

Now that we have copied the lost file, the testdisk utility displays the following screen so that we can specify where to restore it.

You can specify any accessible location; it is then a simple matter to copy and paste the file to wherever you want it.

I am specifically selecting the location from which I lost the file, my Downloads folder:

[Image: Choose location to restore file]

Step 10: Copy/restore the file to the selected location

After making the selection about where you want to restore the file, press the C key. This will restore your file to that location:

[Image: Restored file successfully]

See the text in green in the above screenshot? This is actually great news: my file has now been restored to the specified location.

This might seem to be a slightly long process, but it is definitely worth it to get your lost file back. The restored file will most probably be in a locked state, meaning that only an authorized user can access and open it.

We all need this tool time and again, but if you want to remove it until you need it again, you can do so with the following command:

$ sudo apt-get remove testdisk

You can also delete the testdisk.log file if you want. It is such a relief to get your lost file back!

By Karim Buzdar, February 11, 2020 (Debian, Linux, Shell)

[Feb 16, 2020] A List Of Useful Console Services For Linux Users by sk

Images deleted; see the original link for details
Feb 13, 2020 | www.ostechnix.com
Cheatsheets for Linux/Unix commands

You have probably heard about cheat.sh. I use this service every day! It is one of the most useful services for Linux users. It displays concise Linux command examples.

For instance, to view the curl command cheatsheet, simply run the following command from your console:

$ curl cheat.sh/curl

It is that simple! You don't need to go through man pages or use any online resources to learn about commands. It can get you the cheatsheets of most Linux and Unix commands in a couple of seconds.

ls command cheatsheet:

$ curl cheat.sh/ls

find command cheatsheet:

$ curl cheat.sh/find

It is a highly recommended tool!
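
If you use the service a lot, a small wrapper function in your ~/.bashrc saves some typing. A minimal sketch (the function name "cheat" is my own choice):

$ cheat() { curl -s "cheat.sh/$1"; }
$ cheat tar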




... ... ...

IP Address

We can find the local IP address using the ip command. But what about the public IP address? It is simple!

To find your public IP address, just run the following commands from your Terminal:

$ curl ipinfo.io/ip
157.46.122.176
$ curl eth0.me
157.46.122.176
$ curl checkip.amazonaws.com
157.46.122.176
$ curl icanhazip.com
2409:4072:631a:c033:cc4b:4d25:e76c:9042

There is also a console service that displays the IP address in JSON format.

$ curl httpbin.org/ip
{
  "origin": "157.46.122.176"
}
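
If you have the jq utility installed, you can extract just the address from that JSON response:

$ curl -s httpbin.org/ip | jq -r .origin
157.46.122.176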

... ... ...

Dictionary

Want to know the meaning of an English word? Here is how you can get the meaning of a word – gustatory:

$ curl 'dict://dict.org/d:gustatory'
220 pan.alephnull.com dictd 1.12.1/rf on Linux 4.4.0-1-amd64 <auth.mime> <100411284.5191.1581597016@pan.alephnull.com>
250 ok
150 1 definitions retrieved
151 "Gustatory" gcide "The Collaborative International Dictionary of English v.0.48"
Gustatory \Gust"a*to*ry\, a.
Pertaining to, or subservient to, the sense of taste; as, the
gustatory nerve which supplies the front of the tongue.
[1913 Webster]
.
250 ok [d/m/c = 1/0/16; 0.000r 0.000u 0.000s]
221 bye [d/m/c = 0/0/0; 0.000r 0.000u 0.000s]
Text sharing

You can share texts via some console services. These text sharing services are often useful for sharing code.

Here is an example.

$ echo "Welcome To OSTechNix!" | curl -F 'f:1=<-' ix.io
http://ix.io/2bCA

The above command will share the text "Welcome To OSTechNix!" via the ix.io site. Anyone can view this text from a web browser by navigating to the URL – http://ix.io/2bCA

Another example:

$ echo "Welcome To OSTechNix!" | curl -F file=@- 0x0.st
http://0x0.st/i-0G.txt
File sharing

Not just text; we can even share files with anyone using a console service called filepush.

$ curl --upload-file ostechnix.txt filepush.co/upload/ostechnix.txt
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    72    0     0  100    72      0     54  0:00:01  0:00:01 --:--:--    54http://filepush.co/8x6h/ostechnix.txt
100   110  100    38  100    72     27     53  0:00:01  0:00:01 --:--:--    81

The above command will upload the ostechnix.txt file to the filepush.co site. You can access this file from anywhere by navigating to the link – http://filepush.co/8x6h/ostechnix.txt

Another text sharing console service is termbin :

$ echo "Welcome To OSTechNix!" | nc termbin.com 9999

There is also another console service named transfer.sh, but it wasn't working at the time of writing this guide.

Browser

There are many text browsers available for Linux. Browsh is one of them, and you can access it right from your terminal using the command:

$ ssh brow.sh

Browsh is a modern text browser that supports graphics, including video. Technically speaking, it is not so much a browser as a terminal front-end for a browser. It uses headless Firefox to render the web page and then converts it to ASCII art. Refer to the following guide for more details.

Create QR codes for given string

Do you want to create QR codes for a given string? That's easy!

$ curl qrenco.de/ostechnix

The QR code for the "ostechnix" string is printed right in the terminal.

URL Shortners

Want to make long URLs shorter so they are easier to post or share with your friends? Use the TinyURL console service to shorten them:

$ curl -s http://tinyurl.com/api-create.php?url=https://www.ostechnix.com/pigz-compress-and-decompress-files-in-parallel-in-linux/
http://tinyurl.com/vkc5c5p

[Feb 14, 2020] The trouble with Artificial Intelligence

Feb 14, 2020 | www.moonofalabama.org

Hoarsewhisperer , Feb 12 2020 6:36 utc | 43

Posted by: juliania | Feb 12 2020 5:15 utc | 39
(Artificial Intelligence)

The trouble with Artificial Intelligence is that it's not intelligent.
And it's not intelligent because it's got no experience, no imagination and no self-control.

[Feb 09, 2020] How To Install And Configure Chrony As NTP Client

See also chrony – Comparison of NTP implementations
Another installation manual: Steps to configure Chrony as NTP Server & Client (CentOS-RHEL 8)
Feb 09, 2020 | www.2daygeek.com

It can synchronize the system clock faster and with better accuracy, and it is particularly useful for systems that are not online all the time.

Chronyd is smaller, uses less system memory, and wakes the CPU only when necessary, which is better for power saving.

It can perform well even when the network is congested for longer periods of time.
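
As a quick sketch of a typical client setup (a hedged example: the package manager and the server names vary by site, and the pool.ntp.org entry below is only a placeholder):

# yum install chrony                                        # dnf on RHEL/CentOS 8
# echo "server 0.pool.ntp.org iburst" >> /etc/chrony.conf   # point chrony at your own NTP server
# systemctl enable --now chronyd
# chronyc sources                                           # verify that a source is reachable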

You can use any of the commands below to check Chrony status.

To check chrony tracking status:

# chronyc tracking

Reference ID    : C0A80105 (CentOS7.2daygeek.com)
Stratum         : 3
Ref time (UTC)  : Thu Mar 28 05:57:27 2019
System time     : 0.000002545 seconds slow of NTP time
Last offset     : +0.001194361 seconds
RMS offset      : 0.001194361 seconds
Frequency       : 1.650 ppm fast
Residual freq   : +184.101 ppm
Skew            : 2.962 ppm
Root delay      : 0.107966967 seconds
Root dispersion : 1.060455322 seconds
Update interval : 2.0 seconds
Leap status     : Normal

Run the sources command to display information about the current time sources.

# chronyc sources

210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* CentOS7.2daygeek.com          2   6    17    62    +36us[+1230us] +/- 1111ms


[Feb 05, 2020] Disable startup graphic

This is still a problem today... See also centOS 7 hung at "Starting Plymouth switch root service" and https://www.youtube.com/watch?v=oFl40XzlXp4
Feb 05, 2020 | forums.centos.org
disable startup graphic

Post by neuronetv " 2014/08/20 22:24:51

I can't figure out how to disable the startup graphic in centos 7 64bit. In centos 6 I always did it by removing "rhgb quiet" from /boot/grub/grub.conf but there is no grub.conf in centos 7. I also tried yum remove rhgb but that wasn't present either.
<moan> I've never understood why the devs include this startup graphic, I see loads of users like me who want a text scroll instead.</moan>
Thanks for any help.

Post by TrevorH " 2014/08/20 23:09:40

The file to amend now is /boot/grub2/grub.cfg, along with /etc/default/grub. If you only amend the defaults file, you need to run grub2-mkconfig -o /boot/grub2/grub.cfg afterwards to get a new file generated. You can also edit the grub.cfg file directly, but your changes will be wiped out on the next kernel install if you don't also edit the 'default' file.
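
Condensed into commands, that workflow looks roughly like this (a sketch assuming the options appear literally as "rhgb quiet" in /etc/default/grub; back the file up first):

# cp /etc/default/grub /etc/default/grub.bak
# sed -i 's/ rhgb quiet//' /etc/default/grub      # drop the graphical boot options
# grub2-mkconfig -o /boot/grub2/grub.cfg          # regenerate the live config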

Post by neuronetv " 2014/08/21 13:12:45

Thanks for that, I did the edits and now the scroll is back.

Post by larryg " 2014/08/21 19:27:16

The preferred method to do this is using the command plymouth-set-default-theme.

If you enter this command, without parameters, as user root you'll see something like
>plymouth-set-default-theme
charge
details
text

This lists the themes installed on your computer. The default is 'charge'. If you want to see the boot up details you used to see in version 6, try
>plymouth-set-default-theme details

Followed by the command
>dracut -f

Then reboot.

This process modifies the boot loader, so you won't have to update your grub.conf file manually for each new kernel update.

There are numerous themes available that you can download, from CentOS or elsewhere. Just google 'plymouth themes' to see other possibilities if you're looking for graphics-type screens.


Post by TrevorH " 2014/08/21 22:47:49

Editing /etc/default/grub to remove rhgb quiet makes it permanent too.

Post by MalAdept " 2014/11/02 20:23:37

I tried both TrevorH's and LarryG's methods, and LarryG wins.

Editing /etc/default/grub to remove "rhgb quiet" gave me the scrolling boot messages I want, but it reduced maxmum display resolution (nouveau driver) from 1920x1080 to 1024x768! I put "rhgb quiet" back in and got my 1920x1080 back.

Then I tried "plymouth-set-default-theme details; dracut -f", and got verbose booting without loss of display resolution. Thanks LarryG!


Post by dunwell " 2015/12/13 00:17:18

I have used this mod to get back the details for grub boot, thanks to all for that info.

However, when I am watching, it fills the page and then, rather than scrolling up as it did in V5, it blanks and starts again at the top. Of course there is a FAIL message right before it blanks that I want to see, and I can't slam the Scroll Lock fast enough to catch it. Anyone know how to get the details to scroll up rather than blank and re-write?

Alan D.


Post by aks " 2015/12/13 09:15:51

Yeah, scroll lock/Ctrl+Q/Ctrl+S will not work with systemd: you can't pause the screen like you used to be able to (it was a design choice, due to parallel daemon launching, apparently).
If you do boot, you can always use journalctl to view the logs.
In Fedora you can use journalctl --list-boots to list boots (not 100% sure about CentOS 7.x - perhaps in 7.1 or 7.2?). You can also use things like journalctl --boot=-1 (the last boot), and parse the log at your leisure.
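
For example (assuming persistent journal storage, so that previous boots are retained):

# journalctl --list-boots      # enumerate recorded boots
# journalctl -b -1             # full log of the previous boot
# journalctl -b -1 -p err      # only the errors from the previous boot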

Post by dunwell " 2015/12/13 14:18:29

aks wrote: Yeah, scroll lock/Ctrl+Q/Ctrl+S will not work with systemd; you can't pause the screen like you used to be able to ...
Thanks for the follow-up, aks. Actually, I have found that Scroll Lock does pause (Ctrl-S/Q does not), but it all goes by so fast that I'm not quick enough to stop it before the screen blanks and starts writing again. What I am really wondering is how to get the screen to scroll up when it gets to the bottom rather than blanking and starting to write again at the top. That is annoying!

Alan D.


Post by aks " 2015/12/13 19:14:29

Yes it is, and no, you can't. Kudos to Lennart for making our lives so much shitter...

[Feb 05, 2020] How do I deactivate the plymouth boot screen?

Jan 01, 2012 | askubuntu.com




Jo-Erlend Schinstad , 2012-01-25 22:06:57

Lately, booting Ubuntu on my desktop has become seriously slow. We're talking two minutes. It used to take 10-20 seconds. Because of plymouth, I can't see what's going on. I would like to deactivate it, but not really uninstall it. What's the quickest way to do that? I'm using Precise, but I suspect a solution for 11.10 would work just as well.

Did you try: sudo update-initramfs? – mgajda Jun 19 '12 at 0:54


Panther ,

The easiest quick fix is to edit the grub line as you boot.

Hold down the Shift key so you see the menu, then hit the e key to edit.

Edit the 'linux' line: remove 'quiet' and 'splash'.

To disable it in the long run

Edit /etc/default/grub

Change the line – GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" to

GRUB_CMDLINE_LINUX_DEFAULT=""

And then update grub

sudo update-grub

Removing quiet and splash removes the splash, but I still only have a purple screen with no text. What I want to do is see the actual boot messages. – Jo-Erlend Schinstad Jan 25 '12 at 22:25

Tuminoid ,

How about pressing Ctrl+Alt+F2 for a console, allowing you to see what's going on? You can go back to the GUI/Plymouth with Ctrl+Alt+F7.

Don't have my laptop here right now, but IIRC Plymouth has an upstart job in /etc/init named plymouth???.conf; renaming that probably achieves what you want in a more permanent manner.

No, there's nothing on the other consoles. – Jo-Erlend Schinstad Jan 25 '12 at 22:22

[Feb 01, 2020] Basic network troubleshooting in Linux with nmap

Feb 01, 2020 | www.redhat.com

Determine this host's OS with the -O switch:

$ sudo nmap -O <Your-IP>

The results look like this:

....


Then, run the following to check the 2000 most common ports, which handle the common TCP and UDP services. Here, -Pn is used to skip the ping scan and assume that the host is up:

$ sudo nmap -sS -sU -PN <Your-IP>

The results look like this:

...

Note: The -Pn option is also useful for checking whether the host firewall is blocking ICMP requests.

Also, as an extension to the above command, if you need to scan all ports instead of only the 2000 most common, you can use the following to scan ports from 1-65535:

$ sudo nmap -sS -sU -PN -p 1-65535 <Your-IP>

The results look like this:

...

You can also scan only TCP ports (the 1000 most common, by default) by using the following:

$ sudo nmap -sT <Your-IP>

The results look like this:

...

Now, after all of these checks, you can also perform an aggressive "all-in-one" scan with the -A option, which tells Nmap to perform OS and version detection, using -T4 as a timing template that tells Nmap how fast to perform this scan (see the Nmap man page for more information on timing templates):

$ sudo nmap -A -T4 <Your-IP>

The results look like this, and are shown here in two parts:

...

There you go. These are the most common and useful Nmap commands. Together, they provide sufficient network, OS, and open port information, which is helpful in troubleshooting. Feel free to comment with your preferred Nmap commands as well.



[Jan 25, 2020] timeout is a command-line utility that runs a specified command and terminates it if it is still running after a given period of time

You can achieve a similar effect with the at command, which allows more flexible time patterns.
Jan 23, 2020 | linuxize.com

timeout is a command-line utility that runs a specified command and terminates it if it is still running after a given period of time. In other words, timeout allows you to run a command with a time limit. The timeout command is a part of the GNU core utilities package which is installed on almost any Linux distribution.

It is handy when you want to run a command that doesn't have a built-in timeout option.

In this article, we will explain how to use the Linux timeout command.

How to Use the timeout Command

The syntax for the timeout command is as follows:

timeout [OPTIONS] DURATION COMMAND [ARG]

The DURATION can be a positive integer or a floating-point number, followed by an optional unit suffix:

s - seconds (the default)
m - minutes
h - hours
d - days

When no unit is used, it defaults to seconds. If the duration is set to zero, the associated timeout is disabled.

The command options must be provided before the arguments.

Here are a few basic examples demonstrating how to use the timeout command:
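
For instance, to stop a ping after five seconds:

timeout 5 ping 8.8.8.8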

If you want to run a command that requires elevated privileges such as tcpdump , prepend sudo before timeout :

sudo timeout 300 tcpdump -n -w data.pcap
Sending Specific Signal

If no signal is given, timeout sends the SIGTERM signal to the managed command when the time limit is reached. You can specify which signal to send using the -s ( --signal ) option.

For example, to send SIGKILL to the ping command after one minute you would use:

sudo timeout -s SIGKILL ping 8.8.8.8

The signal can be specified by its name like SIGKILL or its number like 9 . The following command is identical to the previous one:

sudo timeout -s 9 ping 8.8.8.8

To get a list of all available signals, use the kill -l command:

kill -l
Killing Stuck Processes

SIGTERM, the default signal sent when the time limit is exceeded, can be caught or ignored by some processes. In such situations, the process continues to run after the termination signal is sent.

To make sure the monitored command is killed, use the -k ( --kill-after ) option followed by a time period. With this option, timeout sends the initial signal when the time limit is reached and, if the managed program is still running after the additional period, follows up with a SIGKILL signal, which cannot be caught or ignored.

In the following example, timeout runs the command for one minute and, if it does not terminate, kills it ten seconds later:

sudo timeout -k 10 1m ping 8.8.8.8


Preserving the Exit Status

timeout returns 124 when the time limit is reached. Otherwise, it returns the exit status of the managed command.
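
For example, with GNU coreutils:

timeout 2 sleep 10; echo $?    # prints 124: the time limit was reached
timeout 2 sleep 1; echo $?     # prints 0: sleep finished in time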

To return the exit status of the command even when the time limit is reached, use the --preserve-status option:

timeout --preserve-status 5 ping 8.8.8.8
Running in Foreground

By default, timeout runs the managed command in the background. If you want to run the command in the foreground, use the --foreground option:

timeout --foreground 5m ./script.sh

This option is useful when you want to run an interactive command that requires user input.

Conclusion

The timeout command is used to run a given command with a time limit.

timeout is a simple command that doesn't have a lot of options. Typically you will invoke timeout with only two arguments: the duration and the managed command.

If you have any questions or feedback, feel free to leave a comment.


[Jan 16, 2020] Watch Command in Linux

Jan 16, 2020 | linuxhandbook.com

Last Updated on January 10, 2020 By Abhishek

Watch is a great utility that automatically refreshes data. Some of the more common uses for this command involve monitoring system processes or logs, but it can be used in combination with pipes for more versatility.
watch [options] [command]
Watch command examples

Using the watch command without any options will use the default refresh interval of 2.0 seconds.

As I mentioned before, one of the more common uses is monitoring system processes. Let's use it with the free command. This will give you up-to-date information about your system's memory usage.

watch free

Yes, it is that simple my friends.

Every 2.0s: free                                pop-os: Wed Dec 25 13:47:59 2019

              total        used        free      shared  buff/cache   available
Mem:       32596848     3846372    25571572      676612     3178904    27702636
Swap:             0           0           0
Adjust refresh rate of watch command

You can easily change how quickly the output is updated using the -n flag.

watch -n 10 free
Every 10.0s: free                               pop-os: Wed Dec 25 13:58:32 2019

              total        used        free      shared  buff/cache   available
Mem:       32596848     4522508    24864196      715600     3210144    26988920
Swap:             0           0           0

This changes from the default 2.0-second refresh to 10.0 seconds, as you can see in the top-left corner of the output.

Remove title or header info from watch command output
watch -t free

The -t flag removes the title/header information to clean up output. The information will still refresh every 2 seconds, but you can change that by combining it with the -n option.

              total        used        free      shared  buff/cache   available
Mem:       32596848     3683324    25089268     1251908     3824256    27286132
Swap:             0           0           0
Highlight the changes in watch command output

You can add the -d option and watch will automatically highlight changes for us. Let's take a look at this using the date command.
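
For instance, the following refreshes every second and highlights the parts of the clock that change:

watch -n 1 -d date
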
Using pipes with watch

You can combine items using pipes. This is not a feature exclusive to watch, but it enhances the functionality of this software. Pipes rely on the | symbol. Not coincidentally, this is called a pipe symbol or sometimes a vertical bar symbol.

watch "cat /var/log/syslog | tail -n 3"

While this command runs, it will list the last 3 lines of the syslog file. The list will be refreshed every 2 seconds and any changes will be displayed.

Every 2.0s: cat /var/log/syslog | tail -n 3                                                      pop-os: Wed Dec 25 15:18:06 2019

Dec 25 15:17:24 pop-os dbus-daemon[1705]: [session uid=1000 pid=1705] Successfully activated service 'org.freedesktop.Tracker1.Min
er.Extract'
Dec 25 15:17:24 pop-os systemd[1591]: Started Tracker metadata extractor.
Dec 25 15:17:45 pop-os systemd[1591]: tracker-extract.service: Succeeded.

Conclusion

Watch is a simple, but very useful utility. I hope I've given you ideas that will help you improve your workflow.

This is a straightforward command, but there are a wide range of potential uses. If you have any interesting uses that you would like to share, let us know about them in the comments.

[Jan 16, 2020] Linux tools How to use the ss command by Ken Hess (Red Hat)

ss is the Swiss Army Knife of system statistics commands. It's time to say buh-bye to netstat and hello to ss.
Jan 13, 2020 | www.redhat.com

If you're like me, you still cling to soon-to-be-deprecated commands like ifconfig , nslookup , and netstat . The new replacements are ip , dig , and ss , respectively. It's time to (reluctantly) let go of legacy utilities and head into the future with ss . The ip command is worth a mention here because part of netstat 's functionality has been replaced by ip . This article covers the essentials for the ss command so that you don't have to dig (no pun intended) for them.


Formally, ss is the socket statistics command that replaces netstat. In this article, I provide netstat commands and their ss replacements. Michael Prokop, the developer of ss, made it easy for us to transition to ss from netstat by making some of netstat's options operate in much the same fashion in ss.

For example, to display TCP sockets, use the -t option:

$ netstat -t
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 rhel8:ssh               khess-mac:62036         ESTABLISHED

$ ss -t
State         Recv-Q          Send-Q                    Local Address:Port                   Peer Address:Port          
ESTAB         0               0                          192.168.1.65:ssh                    192.168.1.94:62036

You can see that the information given is essentially the same, but to better mimic what you see in the netstat command, use the -r (resolve) option:

$ ss -tr
State            Recv-Q             Send-Q                          Local Address:Port                         Peer Address:Port             
ESTAB            0                  0                                       rhel8:ssh                             khess-mac:62036

And to see port numbers rather than their translations, use the -n option:

$ ss -ntr
State            Recv-Q             Send-Q                          Local Address:Port                         Peer Address:Port             
ESTAB            0                  0                                       rhel8:22                              khess-mac:62036

It isn't 100% necessary that netstat and ss mesh, but it does make the transition a little easier. So, try your standby netstat options before hitting the man page or the internet for answers, and you might be pleasantly surprised at the results.
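
For instance, one combination worth memorizing lists listening TCP sockets numerically along with the owning process (the -p detail requires root):

$ sudo ss -tlnp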

For example, the netstat command with the old standby options -an yields comparable results (which are too long to show here in full):

$ netstat -an |grep LISTEN

tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN     
tcp6       0      0 :::22                   :::*                    LISTEN     
unix  2      [ ACC ]     STREAM     LISTENING     28165    /run/user/0/systemd/private
unix  2      [ ACC ]     STREAM     LISTENING     20942    /var/lib/sss/pipes/private/sbus-dp_implicit_files.642
unix  2      [ ACC ]     STREAM     LISTENING     28174    /run/user/0/bus
unix  2      [ ACC ]     STREAM     LISTENING     20241    /var/run/lsm/ipc/simc
<truncated>

$ ss -an |grep LISTEN

u_str             LISTEN              0                    128                                             /run/user/0/systemd/private 28165                  * 0                   
                                                            
u_str             LISTEN              0                    128                   /var/lib/sss/pipes/private/sbus-dp_implicit_files.642 20942                  * 0                   
                                                            
u_str             LISTEN              0                    128                                                         /run/user/0/bus 28174                  * 0                   
                                                            
u_str             LISTEN              0                    5                                                     /var/run/lsm/ipc/simc 20241                  * 0                   
<truncated>

The TCP entries fall at the end of the ss command's display and at the beginning of netstat 's. So, there are layout differences even though the displayed information is really the same.

If you're wondering which netstat commands have been replaced by the ip command, here's one for you:

$ netstat -g
IPv6/IPv4 Group Memberships
Interface       RefCnt Group
--------------- ------ ---------------------
lo              1      all-systems.mcast.net
enp0s3          1      all-systems.mcast.net
lo              1      ff02::1
lo              1      ff01::1
enp0s3          1      ff02::1:ffa6:ab3e
enp0s3          1      ff02::1:ff8d:912c
enp0s3          1      ff02::1
enp0s3          1      ff01::1

$ ip maddr
1:	lo
	inet  224.0.0.1
	inet6 ff02::1
	inet6 ff01::1
2:	enp0s3
	link  01:00:5e:00:00:01
	link  33:33:00:00:00:01
	link  33:33:ff:8d:91:2c
	link  33:33:ff:a6:ab:3e
	inet  224.0.0.1
	inet6 ff02::1:ffa6:ab3e
	inet6 ff02::1:ff8d:912c
	inet6 ff02::1
	inet6 ff01::1

The ss command isn't perfect (sorry, Michael). In fact, there is one significant ss bummer. You can try this one for yourself to compare the two:

$ netstat -s 

Ip:
    Forwarding: 2
    6231 total packets received
    2 with invalid addresses
    0 forwarded
    0 incoming packets discarded
    3104 incoming packets delivered
    2011 requests sent out
    243 dropped because of missing route
<truncated>

$ ss -s

Total: 182
TCP:   3 (estab 1, closed 0, orphaned 0, timewait 0)

Transport Total     IP        IPv6
RAW	  1         0         1        
UDP	  3         2         1        
TCP	  3         2         1        
INET	  7         4         3        
FRAG	  0         0         0

If you figure out how to display the same info with ss , please let me know.

Maybe as ss evolves, it will include more features. I guess Michael or someone else could always just look at the netstat command to glean those statistics from it. For me, I prefer netstat , and I'm not sure exactly why it's being deprecated in favor of ss . The output from ss is less human-readable in almost every instance.

What do you think? What about ss makes it a better option than netstat ? I suppose I could ask the same question of the other net-tools utilities as well. I don't find anything wrong with them. In my mind, unless you're significantly improving an existing utility, why bother deprecating the other?

There, you have the ss command in a nutshell. As netstat fades into oblivion, I'm sure I'll eventually embrace ss as its successor.

Want more on networking topics? Check out the Linux networking cheat sheet .

Ken Hess is an Enable SysAdmin Community Manager and an Enable SysAdmin contributor. Ken has used Red Hat Linux since 1996 and has written ebooks, whitepapers, actual books, thousands of exam review questions, and hundreds of articles on open source and other topics.

[Jan 16, 2020] Thirteen Useful Tools for Working with Text on the Command Line - Make Tech Easier

Jan 16, 2020 | www.maketecheasier.com

By Karl Wakim – Posted on Jan 9, 2020

GNU/Linux distributions include a wealth of programs for handling text, most of which are provided by the GNU core utilities. There's somewhat of a learning curve, but these utilities can prove very useful and efficient when used correctly.

Here are thirteen powerful text manipulation tools every command-line user should know.

1. cat

Cat was designed to concatenate files but is most often used to display a single file. Without any arguments, cat reads standard input until Ctrl+D is pressed (from the terminal, or from another program's output if using a pipe). Standard input can also be explicitly specified with a -.

Cat has a number of useful options, notably -n, which numbers the output lines.

In the following example, we are concatenating and numbering the contents of file1, standard input, and file3.

cat -n file1 - file3
2. sort

As its name suggests, sort sorts file contents alphabetically and numerically.

3. uniq

Uniq takes a sorted file and removes duplicate lines. It is often chained with sort in a single command.

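For example, to reduce a word list to its unique entries (words.txt is just a placeholder name):

$ sort words.txt | uniq
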
4. comm

Comm is used to compare two sorted files, line by line. It outputs three columns: the first two columns contain lines unique to the first and second file respectively, and the third displays those found in both files.

5. cut

Cut is used to retrieve specific sections of lines, based on characters, fields, or bytes. It can read from a file or from standard input if no file is specified.

Cutting by character position

The -c option specifies a single character position or one or more ranges of characters.

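For example, to keep only characters 2 through 5 of each line (using a throwaway sample string):

$ echo "abcdefghij" | cut -c 2-5
bcde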

Cutting by field

Fields are separated by a delimiter consisting of a single character, which is specified with the -d option. The -f option selects a field position or one or more ranges of fields using the same format as above.

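For example, with a colon-delimited sample line:

$ echo "a:b:c:d" | cut -d ':' -f 2,4
b:d
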
6. dos2unix

GNU/Linux and Unix usually terminate text lines with a line feed (LF), while Windows uses carriage return and line feed (CRLF). Compatibility issues can arise when handling CRLF text on Linux, which is where dos2unix comes in. It converts CRLF terminators to LF.

In the following example, the file command is used to check the text format before and after using dos2unix .

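A sketch of that sequence, assuming a sample file named dosfile.txt:

$ file dosfile.txt
dosfile.txt: ASCII text, with CRLF line terminators
$ dos2unix dosfile.txt
dos2unix: converting file dosfile.txt to Unix format...
$ file dosfile.txt
dosfile.txt: ASCII text
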
7. fold

To make long lines of text easier to read and handle, you can use fold , which wraps lines to a specified width.

Fold strictly matches the specified width by default, breaking words where necessary.

fold -w 30 longline.txt

If breaking words is undesirable, you can use the -s option to break at spaces.

fold -w 30 -s longline.txt
8. iconv

This tool converts text from one encoding to another, which is very useful when dealing with unusual encodings.

iconv -f input_encoding -t output_encoding -o output_file input_file

Note: you can list the available encodings with iconv -l

9. sed

sed is a powerful and flexible stream editor, most commonly used to find and replace strings with the following syntax.

The following command will read from the specified file (or standard input), replacing the parts of text that match the regular expression pattern with the replacement string and outputting the result to the terminal.

sed s/pattern/replacement/g filename
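
For instance, a simple substitution over a pipe:

$ echo "hello world" | sed 's/world/there/g'
hello there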

To modify the original file instead, you can use the -i flag.

10. wc

The wc utility prints the number of bytes, characters, words, or lines in a file.

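For example (by default, wc prints the line, word, and byte counts):

$ printf 'one two\nthree\n' | wc
      2       3      14
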
11. split

You can use split to divide a file into smaller files, by number of lines, by size, or to a specific number of files.

Splitting by number of lines

split -l num_lines input_file output_prefix

Splitting by bytes

split -b bytes input_file output_prefix

Splitting to a specific number of files

split -n num_files input_file output_prefix
12. tac

Tac, which is cat in reverse, does exactly that: it displays files with the lines in reverse order.

13. tr

The tr tool is used to translate or delete sets of characters.

A set of characters is usually either a string or a range of characters. For instance, "a-z" denotes the lowercase letters and "0-9" the digits.

Refer to the tr manual page for more details.

To translate one set to another, use the following syntax:

tr SET1 SET2

For instance, to replace lowercase characters with their uppercase equivalent, you can use the following:

tr "a-z" "A-Z"

To delete a set of characters, use the -d flag.

tr -d SET

To delete the complement of a set of characters (i.e. everything except the set), use -dc .

tr -dc SET
Conclusion

There is plenty to learn when it comes to the Linux command line. Hopefully, the above commands can help you deal with text on the command line more effectively.

[Jan 10, 2020] America's Hamster Wheel of 'Career Advancement' by Casey Chalk

Notable quotes:
"... Getting Work Right: Labor and Leisure in a Fragmented World ..."
"... The problem is further compounded by the fact that much of the labor Americans perform isn't actually good ..."
Jan 09, 2020 | www.theamericanconservative.com

We're told that getting ahead at work and reorienting our lives around our jobs will make us happy. So why haven't they? Many of those who work in the corporate world are constantly peppered with questions about their "career progression." The Internet is saturated with articles providing tips and tricks on how to develop a never-fail game plan for professional development. Millions of Americans are engaged in a never-ending cycle of résumé-padding that mimics the accumulation of Boy Scout merit badges or A's on report cards, except we never seem to get our Eagle Scout certificates or academic diplomas. We're told to just keep going until we run out of gas or reach retirement, at which point we fade into the peripheral oblivion of retirement communities, morning tee-times, and long midweek lunches at beach restaurants.

The idealistic Chris McCandless in Jon Krakauer's bestselling book Into the Wild defiantly declares, "I think careers are a 20th century invention and I don't want one." Anyone who has spent enough time in the career hamster wheel can relate to this sentiment. Is 21st-century careerism -- with its promotion cycles, yearly feedback, and little wooden plaques commemorating our accomplishments -- really the summit of human existence, the paramount paradigm of human flourishing?

Michael J. Naughton, director of the Center for Catholic Studies at the University of St. Thomas, Minnesota, and board chair for Reel Precision Manufacturing, doesn't think so. In his Getting Work Right: Labor and Leisure in a Fragmented World , Naughton provides a sobering statistic: approximately two thirds of employees in the United States are "either indifferent or hostile to their work." That's not just an indicator of professional dissatisfaction; it's economically disastrous. The same survey estimates that employee disengagement is costing the U.S. economy "somewhere between 450-550 billion dollars annually."

The origin of this problem, says Naughton, is an error in how Americans conceive of work and leisure. We seem to err in one of two ways. One is to label our work as strictly a job, a nine-to-five that pays the bills. In this paradigm, leisure is an amusement, an escape from the drudgery of boring, purposeless labor. The other way is that we label our work as a career that provides the essential fulfillment in our lives. Through this lens, leisure is a utility, simply another means to serve our work. Outside of work, we exercise to maintain our health in order to work harder and longer. We read books that help maximize our utility at work and get ahead of our competitors. We "continue our education" largely to further our careers.

Whichever error we fall into, we inevitably end up dissatisfied. The more we view work as a painful, boring chore, the less effective we are at it, and the more complacent and discouraged. Our leisure activities, in turn, no matter how distracting, only compound our sadness, because no amount of games can ever satisfy our souls. Or, if we see our meaning in our work and leisure as only another means of increasing productivity, we inevitably burn out, wondering, perhaps too late in life, what exactly we were working for . As Augustine of Hippo noted, our hearts are restless for God. More recently, C.S. Lewis noted that we yearn to be fulfilled by something that nothing in this world can satisfy. We need both our work and our leisure to be oriented to the transcendent in order to give our lives meaning and purpose.

The problem is further compounded by the fact that much of the labor Americans perform isn't actually good . There are "bad goods" that are detrimental to society and human flourishing. Naughton suggests some examples: violent video games, pornography, adultery dating sites, cigarettes, high-octane alcohol, abortifacients, gambling, usury, certain types of weapons, cheat sheet websites, "gentlemen's clubs," and so on. Though not as clear-cut as the above, one might also add working for the kinds of businesses that contribute to the impoverishment or destruction of our communities, as Tucker Carlson has recently argued .

Why does this matter for professional satisfaction? Because if our work doesn't offer goods and services that contribute to our communities and the common good -- and especially if we are unable to perceive how our labor plays into that common good -- then it will fundamentally undermine our happiness. We will perceive our work primarily in a utilitarian sense, shrugging our shoulders and saying, "it's just a paycheck," ignoring or disregarding the fact that as rational animals we need to feel like our efforts matter.

Economic liberalism -- at least in its purest free-market expression -- is based on a paradigm with nominalist and utilitarian origins that promote "freedom of indifference." In rudimentary terms, this means that we need not be interested in the moral quality of our economic output. If we produce goods that satisfy people's wants, increasing their "utils," as my Econ 101 professor used to say, then we are achieving business success. In this paradigm, we desire an economy that maximizes access to free choice regardless of the content of that choice, because the more choices we have, the more we can maximize our utils, or sensory satisfaction.

The freedom of indifference paradigm is in contrast to a more ancient understanding of economic and civic engagement: a freedom for excellence. In this worldview, "we are made for something," and participation in public acts of virtue is essential both to our own well-being and that of our society. By creating goods and services that objectively benefit others and contributing to an order beyond the maximization of profit, we bless both ourselves and the polis . Alternatively, goods that increase "utils" but undermine the common good are rejected.

Returning to Naughton's distinction between work and leisure, we need to perceive the latter not as an escape from work or a means of enhancing our work, but as a true time of rest. This means uniting ourselves with the transcendent reality from which we originate and to which we will return, through prayer, meditation, and worship. By practicing this kind of true leisure, well treated in a book by Josef Pieper , we find ourselves refreshed, and discover renewed motivation and inspiration to contribute to the common good.

Americans are increasingly aware of the problems with Wall Street conservatism and globalist economics. We perceive that our post-Cold War policies are hurting our nation. Naughton's treatise on work and leisure offers the beginnings of a game plan for what might replace them.

Casey Chalk covers religion and other issues for The American Conservative and is a senior writer for Crisis Magazine. He has degrees in history and teaching from the University of Virginia, and a masters in theology from Christendom College.

[Jan 01, 2020] AI is just a tool, unless it is developed to the point of attaining sentience in which case it becomes slavery, but let's ignore that possibility for now. Capitalists cannot make profits from the tools they own all by the tools themselves. Profits come from unpaid labor. You cannot underpay a tool, and the tool cannot labor by itself.

Jan 01, 2020 | www.moonofalabama.org

Paul Damascene , Dec 29 2019 1:28 utc | 45

vk @38: "...the reality on the field is that capitalism is 0 for 5..."

True, but it is worse than that! Even when we get AI to the level you describe, capitalism will continue its decline.

Henry Ford actually understood Marxist analysis. Despite what many people in the present imagine, Ford had access to sufficient engineering talent to make his automobile manufacturing processes much more automated than he did. Ford understood that improving the efficiency of the manufacturing process was less important than creating a population with sufficient income to purchase his products.

AI is just a tool, unless it is developed to the point of attaining sentience in which case it becomes slavery, but let's ignore that possibility for now. Capitalists cannot make profits from the tools they own all by the tools themselves. Profits come from unpaid labor. You cannot underpay a tool, and the tool cannot labor by itself.

The AI can be a product that is sold, but compared with cars, for example, the quantity of labor invested in AI is minuscule. The smaller the proportion of labor that is in the cost of a product, the smaller the percent of the price that can be realized as profit. To re-boost real capitalist profits you need labor-intensive products. This also ties in with Henry Ford's understanding of economics in that a larger labor force also means a larger market for the capitalist's products.

There are some very obvious products that I can think of involving AI that are also massively labor-intensive, that would match the scale of the automotive industry and rejuvenate capitalism, but they would require many $millions in R&D to make them market-ready. Since I want capitalism to die already and get out ...

Re: AI -- Always wondered how pseudo-AI, or enhanced automation, might be constrained by diminishing EROEI.

Unless an actual AI were able to crack the water molecule to release hydrogen in an energy-efficient way, or unless we learn to love nuclear (by cracking the nuclear waste issue), then it seems to me hyper-automated workplaces will be at least as subject to plummeting EROEI as are current workplaces, if not more so. Is there any reason to think that, including the embedded energy of their manufacture, these machines and their workplaces will be less energy-intensive than current ones?

Continued