Softpanorama

May the source be with you, but remember the KISS principle ;-)
Bigger doesn't imply better. Bigger often is a sign of obesity, of lost control, of overcomplexity, of cancerous cells

Slightly Skeptical View on Enterprise Unix Administration

News Webliography of problems with "pure" cloud environment Recommended Books Recommended Links Recommended Tools to Enhance Command Line Usage in Windows Programmable Keyboards Microsoft IntelliType Macros
Open source politics: IBM acquires Red Hat Over 50 and unemployed Shadow IT Is DevOps a yet another "for profit" technocult? Dealing with multiple flavors of Unix Classification of System Administrators Red Hat Certification Program
Unix Configuration Management Tools Job schedulers Unix System Monitoring Red Hat Enterprise Linux Life Cycle Corporate bullshit as a communication method Diplomatic Communication Bosos or Empty Suits (Aggressive Incompetent Managers)
ILO command line interface Using HP ILO virtual CDROM iDRAC7 goes unresponsive - can't connect to iDRAC7 Resetting frozen iDRAC without unplugging the server Troubleshooting HPOM agents Webliography of problems with "pure" cloud environment The tar pit of Red Hat overcomplexity
Bare metal recovery of Linux systems Number of Servers per Sysadmin Is DevOps a yet another "for profit" technocult Carpal tunnel syndrome Sysadmin Horror Stories Humor Etc


The KISS rule can be expanded as: Keep It Simple, Sysadmin ;-)

This page is written as a protest against overcomplexity and the bizarre data center atmosphere typical of "semi-outsourced" or fully outsourced datacenters ;-). Unix/Linux sysadmins are being killed by the overcomplexity of the environment. Large swaths of Linux knowledge (and many excellent books) were rendered obsolete by the introduction of systemd. This hit hardest the older, most experienced members of the team, who hold a unique body of organizational knowledge and whose careers allowed them to watch the development of Linux almost from version 0.92.

As Charlie Schluting noted in 2010 (Enterprise Networking Planet, April 7, 2010):

What happened to the old "sysadmin" of just a few years ago? We've split what used to be the sysadmin into application teams, server teams, storage teams, and network teams. There were often at least a few people, the holders of knowledge, who knew how everything worked, and I mean everything. Every application, every piece of network gear, and how every server was configured -- these people could save a business in times of disaster.

Now look at what we've done. Knowledge is so decentralized we must invent new roles to act as liaisons between all the IT groups.

Architects now hold much of the high-level "how it works" knowledge, but without knowing how any one piece actually does work.

In organizations with more than a few hundred IT staff and developers, it becomes nearly impossible for one person to do and know everything. This movement toward specializing in individual areas seems almost natural. That, however, does not provide a free ticket for people to turn a blind eye.

Specialization

You know the story: Company installs new application, nobody understands it yet, so an expert is hired. Often, the person with a certification in using the new application only really knows how to run that application. Perhaps they aren't interested in learning anything else, because their skill is in high demand right now. And besides, everything else in the infrastructure is run by people who specialize in those elements. Everything is taken care of.

Except, how do these teams communicate when changes need to take place? Are the storage administrators teaching the Windows administrators about storage multipathing; or worse, logging in and setting it up because it's faster for the storage gurus to do it themselves? A fundamental level of knowledge is often lacking, which makes it very difficult for teams to brainstorm about new ways to evolve IT services. The business environment has made it OK for IT staffers to specialize and only learn one thing.

If you hire someone certified in the application, operating system, or network vendor you use, that is precisely what you get. Certifications may be a nice filter to quickly identify who has direct knowledge in the area you're hiring for, but often they indicate specialization or compensation for lack of experience.

Resource Competition

Does your IT department function as a unit? Even 20-person IT shops have turf wars, so the answer is very likely, "no." As teams are split into more and more distinct operating units, grouping occurs. One IT budget gets split between all these groups. Often each group will have a manager who pitches his needs to upper management in hopes they will realize how important the team is.

The "us vs. them" mentality manifests itself at all levels, and it's reinforced by management having to define each team's worth in the form of a budget. One strategy is to illustrate a doomsday scenario. If you paint a bleak enough picture, you may get more funding. Only if you are careful enough to illustrate the failings are due to lack of capital resources, not management or people. A manager of another group may explain that they are not receiving the correct level of service, so they need to duplicate the efforts of another group and just implement something themselves. On and on, the arguments continue.

Most often, I've seen competition between server groups result in horribly inefficient uses of hardware. For example, what happens in your organization when one team needs more server hardware? Assume that another team has five unused servers sitting in a blade chassis. Does the answer change? No, it does not. Even in test environments, sharing doesn't often happen between IT groups.

With virtualization, some aspects of resource competition get better and some remain the same. When first implemented, most groups will be running their own type of virtualization for their platform. The next step, I've most often seen, is for test servers to get virtualized. If a new group is formed to manage the virtualization infrastructure, virtual machines can be allocated to various application and server teams from a central pool and everyone is now sharing. Or, they begin sharing and then demand their own physical hardware to be isolated from others' resource hungry utilization. This is nonetheless a step in the right direction. Auto migration and guaranteed resource policies can go a long way toward making shared infrastructure, even between competing groups, a viable option.

Blamestorming

The most damaging side effect of splitting into too many distinct IT groups is the reinforcement of an "us versus them" mentality. Aside from the notion that specialization creates a lack of knowledge, blamestorming is what this article is really about. When a project is delayed, it is all too easy to blame another group. The SAN people didn't allocate storage on time, so another team was delayed. That is the timeline of the project, so all work halted until that hiccup was restored. Having someone else to blame when things get delayed makes it all too easy to simply stop working for a while.

More related to the initial points at the beginning of this article, perhaps, is the blamestorm that happens after a system outage.

Say an ERP system becomes unresponsive a few times throughout the day. The application team says it's just slowing down, and they don't know why. The network team says everything is fine. The server team says the application is "blocking on IO," which means it's a SAN issue. The SAN team says there is nothing wrong, and other applications on the same devices are fine. You've run through nearly every team, but still without an answer. The SAN people don't have access to the application servers to help diagnose the problem. The server team doesn't even know how the application runs.

See the problem? Specialized teams are distinct and by nature adversarial. Specialized staffers often relegate themselves into a niche knowing that as long as they continue working at large enough companies, "someone else" will take care of all the other pieces.

I unfortunately don't have an answer to this problem. Maybe rotating employees between departments will help. They gain knowledge and also get to know other people, which should lessen the propensity to view them as outsiders.

The tragic part of the current environment is that it's like shifting sands. If you are a sysadmin who writes his own scripts, you write on the sand, spending a lot of time thinking over and debugging your scripts, which raises your productivity and diminishes the number of possible errors. But the next OS version wipes everything out, making it worthless. Or the decision of the brass to switch to a different flavor of Linux does the same. Add to this the inevitable technological changes, and the question arises: can't you get a more respectable profession, one in which 66% of your knowledge is not replaced within the next ten years?

The Balkanization of Linux is also demonstrated by the Tower of Babel of system programming languages (C, C++, Perl, Python, Ruby, Go, Java, to name a few) and by systems that supposedly should help you but mostly do quite the opposite (Puppet, Ansible, Chef, etc.). Add to this the monitoring infrastructure (say, Nagios) and you definitely have an information overload.

The laments about training just add to the stress. First of all, corporations no longer want to pay for it, so you are on your own and need to do it mostly in your free time, as the workload is substantial in most organizations. The days when you could travel to a vendor training center for a week and have a chance to communicate with other admins from different organizations are long in the past. Most training is now via the Web, and the chances for face-to-face communication have disappeared.

There is also the need to relearn stuff again and again, even though the new technologies/daemons/OS versions are often either the same as or inferior to the previous ones, or represent an open scam in which training is just a way to extract money from lemmings (Agile, most of the DevOps hoopla, etc.). There is also a tendency to treat virtual machines and cloud infrastructure as separate technologies, which require separate training and a separate set of certifications (AWS, Azure). This is a kind of infantilization of the profession, in which a person who learned a lot of stuff in the previous 10 years needs to forget it and relearn most of it again and again.

Of course, sysadmins are not the only ones who suffer. Computer scientists also now struggle with the excessive level of complexity and the too quickly shifting sands. Look at the tragedy of Donald Knuth and his lifelong project to create a comprehensive monograph for system programmers (The Art of Computer Programming). He probably will not be able to finish even volume 4 (out of the seven that were planned) in his lifetime.

Of course, much depends on the evolution of hardware, and the changes caused by it, such as the mass introduction of large SSDs, multi-core CPUs and large RAM (nobody is now surprised to see a server with 128GB of RAM), while painful, are inevitable. The changes caused by fashion and by the desire of a dominant player to entrench its position are more difficult to accept. It is difficult or even impossible to predict which technology will become fashionable tomorrow and how long DevOps will remain in fashion. Typically such things last around ten years. After that, everything typically fades into oblivion, or is even crossed out, and former idols are shattered. This strange period of re-invention of the "glass-walls datacenter" under the banner of DevOps (and old-timers still remember that IBM datacenters were hated with a passion, and this hatred created an additional non-technological incentive for minicomputers and later for the IBM PC) is characterized by a level of hype usually reserved for women's fashion. It sometimes looks to me as if the movie The Devil Wears Prada is a subtle parable about sysadmin work.

Add to this the horrible job market, especially for university graduates and older sysadmins (see Over 50 and unemployed), and one probably starts to suspect that the life of the modern sysadmin is far from paradise. When you read some job descriptions on sites like Monster, Dice or Indeed, you ask yourself whether those people really want to hire anybody, or whether this is just a smoke screen for H1B job certification. The level of detail is often so precise that it is almost impossible to change your current specialization. They do not care about the level of talent, and they do not want to train a suitable candidate. They want a person who fits 100% from day 1. Also, in places like NYC or SF, rents and property prices keep growing while income growth has been stagnant. I turned down several job opportunities in SF, NYC and Silicon Valley because of the crazy cost of living.

The vandalism of Unix performed by Red Hat with RHEL 7 makes the current environment somewhat unhealthy. It is clear that this was done on the whim of Red Hat brass, not in the interest of the community. This is a typical Microsoft-style trick which made dozens of high-quality books written by very talented authors instantly semi-obsolete, and the question arises whether it makes sense to write any book about RHEL other than for a solid advance. It generated some backlash, but Red Hat's position as the Microsoft of Linux allowed it to shove its inferior technical decisions down the community's throat. In a way it reminds me of the way Microsoft dealt with Windows 7, replacing it with Windows 10 and essentially destroying the previous Windows interface ecosystem (while preserving binary compatibility).

See also

Here are my notes/reflections on sysadmin problems that often arise in the rather strange (and sometimes pretty toxic) IT departments of large corporations:



NEWS CONTENTS

Old News ;-)


"I appreciate Woody Allen's humor because one of my safety valves is an appreciation for life's absurdities. His message is that life isn't a funeral march to the grave. It's a polka."

-- Dennis Kucinich

[Jan 17, 2019] The financial struggles of unplanned retirement

People who are kicked out of their IT jobs around 55 now have difficulty finding even full-time McJobs... Only part-time jobs are available. With the current round of layoffs and job freezes, neoliberalism in the USA is entering its terminal phase, I think.
Jan 17, 2019 | finance.yahoo.com

A survey by Transamerica Center for Retirement Studies found on average Americans are retiring at age 63, with more than half indicating they retired sooner than they had planned. Among them, most retired for health or employment-related reasons.

... ... ...

On April 3, 2018, Linda LaBarbera received the phone call that changed her life forever. "We are outsourcing your work to India and your services are no longer needed, effective today," the voice on the other end of the phone line said.

... ... ...

"It's not like we are starving or don't have a home or anything like that," she says. "But we did have other plans for before we retired and setting ourselves up a little better while we both still had jobs."

... ... ...

Linda hasn't needed to dip into her 401(k) yet. She plans to start collecting Social Security when she turns 70, which will give her the maximum benefit. To earn money and keep busy, Linda has taken short-term contract editing jobs. She says she will only withdraw money from her savings if something catastrophic happens. Her husband's salary is their main source of income.

"I am used to going out and spending money on other people," she says. "We are very generous with our family and friends who are not as well off as we are. So we take care of a lot of people. We can't do that anymore. I can't go out and be frivolous anymore. I do have to look at what we spend - what I spend."

Vogelbacher says cutting costs is essential when living in retirement, especially for those on a fixed income. He suggests moving to a tax-friendly location if possible. Kiplinger ranks Alaska, Wyoming, South Dakota, Mississippi, and Florida as the top five tax-friendly states for retirees. If their health allows, Vogelbacher recommends getting a part-time job. For those who own a home, he says paying off the mortgage is a smart financial move.

... ... ...

Monica is one of the 44 percent of unmarried persons who rely on Social Security for 90 percent or more of their income. At the beginning of 2019, Monica and more than 62 million Americans received a 2.8 percent cost of living adjustment from Social Security. The increase is the largest since 2012.

With the Social Security hike, Monica's monthly check climbed $33. Unfortunately, the new year also brought her a slight increase in what she pays for Medicare; along with a $500 property tax bill and the usual laundry list of monthly expenses.

"If you don't have much, the (Social Security) raise doesn't represent anything," she says with a dry laugh. "But it's good to get it."

[Jan 14, 2019] Safe rm stops you accidentally wiping the system! @ New Zealand Linux

Jan 14, 2019 | www.nzlinux.com
  1. Francois Marier October 21, 2009 at 10:34 am

    Another related tool, to prevent accidental reboots of servers this time, is molly-guard:

    http://packages.debian.org/sid/molly-guard

    It asks you to type the hostname of the machine you want to reboot as an extra confirmation step.

[Jan 14, 2019] Linux-UNIX xargs command examples

Jan 14, 2019 | www.linuxtechi.com

Example:10 Move files to a different location

linuxtechi@mail:~$ pwd
/home/linuxtechi
linuxtechi@mail:~$ ls -l *.sh
-rw-rw-r-- 1 linuxtechi linuxtechi 0 Sep 15 22:53 abcde.sh
-rw-rw-r-- 1 linuxtechi linuxtechi 0 Sep 15 22:53 abcd.sh
-rw-rw-r-- 1 linuxtechi linuxtechi 0 Sep 15 22:53 fg.sh

linuxtechi@mail:~$ sudo find . -name "*.sh" -print0 | xargs -0 -I {} mv {} backup/
linuxtechi@mail:~$ ls -ltr backup/

total 0
-rw-rw-r-- 1 linuxtechi linuxtechi 0 Sep 15 22:53 abcd.sh
-rw-rw-r-- 1 linuxtechi linuxtechi 0 Sep 15 22:53 abcde.sh
-rw-rw-r-- 1 linuxtechi linuxtechi 0 Sep 15 22:53 fg.sh
linuxtechi@mail:~$

[Jan 14, 2019] xargs command tutorial with examples by George Ornbo

Sep 11, 2017 | shapeshed.com
How to use xargs

By default xargs reads items from standard input, separated by blanks, and builds a command line from them. In the following example standard input is piped to xargs and the mkdir command is run with the three items as arguments, creating three folders.

echo 'one two three' | xargs mkdir
ls
one two three
How to use xargs with find

The most common usage of xargs is to use it with the find command. This uses find to search for files or directories and then uses xargs to operate on the results. Typical examples of this are removing files, changing the ownership of files or moving files.

find and xargs can be used together to operate on files that match certain attributes. In the following example files older than two weeks in the temp folder are found and then piped to the xargs command which runs the rm command on each file and removes them.

find /tmp -mtime +14 | xargs rm
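Note that the plain pipe above breaks on file names containing spaces or newlines. A minimal whitespace-safe sketch of the same cleanup, using the -print0/-0 pairing already shown in the linuxtechi example earlier:

find /tmp -mtime +14 -type f -print0 | xargs -0 rm --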
xargs v exec {}

The find command supports the -exec option that allows arbitrary commands to be run on the files that are found. The following are equivalent.

find ./foo -type f -name "*.txt" -exec rm {} \; 
find ./foo -type f -name "*.txt" | xargs rm

So which one is faster? Let's compare a folder with 1000 files in it.

time find . -type f -name "*.txt" -exec rm {} \;
0.35s user 0.11s system 99% cpu 0.467 total

time find ./foo -type f -name "*.txt" | xargs rm
0.00s user 0.01s system 75% cpu 0.016 total

Clearly using xargs is far more efficient. In fact several benchmarks suggest using xargs over exec {} is six times more efficient.
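For completeness (this is not from the original article): modern find can batch arguments itself when -exec is terminated with + instead of \;, which behaves much like piping to xargs.

find ./foo -type f -name "*.txt" -exec rm {} +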

How to print commands that are executed

The -t option prints each command that will be executed to the terminal. This can be helpful when debugging scripts.

echo 'one two three' | xargs -t rm
rm one two three
How to view the command and prompt for execution

The -p option will print the command to be executed and prompt the user to run it. This can be useful for destructive operations where you really want to be sure of the command to be run.

echo 'one two three' | xargs -p touch
touch one two three ?...
How to run multiple commands with xargs

It is possible to run multiple commands with xargs by using the -I flag. This replaces occurrences of the placeholder with the argument passed to xargs. The following echoes each string and creates a folder for it.

cat foo.txt
one
two
three

cat foo.txt | xargs -I % sh -c 'echo %; mkdir %'
one 
two
three

ls 
one two three
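One more flag worth knowing, not covered in the tutorial above: GNU xargs can run invocations in parallel with -P. A small sketch (the file names and counts are illustrative only):

find ./logs -name '*.log' -print0 | xargs -0 -P 4 -n 10 gzip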
Further reading

[Jan 10, 2019] When idiots are offloaded to security department, interesting things with network eventually happen

Highly recommended!
The security department often does more damage to the network than any sophisticated hacker can, especially if it is populated with morons, as it usually is. One of the most blatant examples is below... Those idiots decided to disable traceroute (which means blocking ICMP) in order to increase security.
Notable quotes:
"... Traceroute is disabled on every network I work with to prevent intruders from determining the network structure. Real pain in the neck, but one of those things we face to secure systems. ..."
"... Also really stupid. A competent attacker (and only those manage it into your network, right?) is not even slowed down by things like this. ..."
"... Breaking into a network is a slow process. Slow and precise. Trying to fix problems is a fast reactionary process. Who do you really think you're hurting? Yes another example of how ignorant opinions can become common sense. ..."
"... Disable all ICMP is not feasible as you will be disabling MTU negotiation and destination unreachable messages. You are essentially breaking the TCP/IP protocol. And if you want the protocol working OK, then people can do traceroute via HTTP messages or ICMP echo and reply. ..."
"... You have no fucking idea what you're talking about. I run a multi-regional network with over 130 peers. Nobody "disables ICMP". IP breaks without it. Some folks, generally the dimmer of us, will disable echo responses or TTL expiration notices thinking it is somehow secure (and they are very fucking wrong) but nobody blocks all ICMP, except for very very dim witted humans, and only on endpoint nodes. ..."
"... You have no idea what you're talking about, at any level. "disabled ICMP" - state statement alone requires such ignorance to make that I'm not sure why I'm even replying to ignorant ass. ..."
"... In short, he's a moron. I have reason to suspect you might be, too. ..."
"... No, TCP/IP is not working fine. It's broken and is costing you performance and $$$. But it is not evident because TCP/IP is very good about dealing with broken networks, like yours. ..."
"... It's another example of security by stupidity which seldom provides security, but always buys added cost. ..."
"... A brief read suggests this is a good resource: https://john.albin.net/essenti... [albin.net] ..."
"... Linux has one of the few IP stacks that isn't derived from the BSD stack, which in the industry is considered the reference design. Instead for linux, a new stack with it's own bugs and peculiarities was cobbled up. ..."
"... Reference designs are a good thing to promote interoperability. As far as TCP/IP is concerned, linux is the biggest and ugliest stepchild. A theme that fits well into this whole discussion topic, actually. ..."
May 27, 2018 | linux.slashdot.org

jfdavis668 ( 1414919 ) , Sunday May 27, 2018 @11:09AM ( #56682996 )

Re:So ( Score: 5 , Interesting)

Traceroute is disabled on every network I work with to prevent intruders from determining the network structure. Real pain in the neck, but one of those things we face to secure systems.

Anonymous Coward writes:
Re: ( Score: 2 , Insightful)

What is the point? If an intruder is already there couldn't they just upload their own binary?

Hylandr ( 813770 ) , Sunday May 27, 2018 @05:57PM ( #56685274 )
Re: So ( Score: 5 , Interesting)

They can easily. And often time will compile their own tools, versions of Apache, etc..

At best it slows down incident response and resolution while doing nothing to prevent discovery of their networks. If you only use Vlans to segregate your architecture you're boned.

gweihir ( 88907 ) , Sunday May 27, 2018 @12:19PM ( #56683422 )
Re: So ( Score: 5 , Interesting)

Also really stupid. A competent attacker (and only those manage it into your network, right?) is not even slowed down by things like this.

bferrell ( 253291 ) , Sunday May 27, 2018 @12:20PM ( #56683430 ) Homepage Journal
Re: So ( Score: 4 , Interesting)

Except it DOESN'T secure anything, simply renders things a little more obscure... Since when is obscurity security?

fluffernutter ( 1411889 ) writes:
Re: ( Score: 3 )

Doing something to make things more difficult for a hacker is better than doing nothing to make things more difficult for a hacker. Unless you're lazy, as many of these things should be done as possible.

DamnOregonian ( 963763 ) , Sunday May 27, 2018 @04:37PM ( #56684878 )
Re:So ( Score: 5 , Insightful)

No.

Things like this don't slow down "hackers" with even a modicum of network knowledge inside of a functioning network. What they do slow down is your ability to troubleshoot network problems.

Breaking into a network is a slow process. Slow and precise. Trying to fix problems is a fast reactionary process. Who do you really think you're hurting? Yes another example of how ignorant opinions can become common sense.

mSparks43 ( 757109 ) writes:
Re: So ( Score: 2 )

Pretty much my reaction. like WTF? OTON, redhat flavors all still on glibc2 starting to become a regular p.i.t.a. so the chances of this actually becoming a thing to be concerned about seem very low.

Kinda like gdpr, same kind of groupthink that anyone actually cares or concerns themselves with policy these days.

ruir ( 2709173 ) writes:
Re: ( Score: 3 )

Disable all ICMP is not feasible as you will be disabling MTU negotiation and destination unreachable messages. You are essentially breaking the TCP/IP protocol. And if you want the protocol working OK, then people can do traceroute via HTTP messages or ICMP echo and reply.

Or they can do reverse traceroute at least until the border edge of your firewall via an external site.

DamnOregonian ( 963763 ) , Sunday May 27, 2018 @04:32PM ( #56684858 )
Re:So ( Score: 4 , Insightful)

You have no fucking idea what you're talking about. I run a multi-regional network with over 130 peers. Nobody "disables ICMP". IP breaks without it. Some folks, generally the dimmer of us, will disable echo responses or TTL expiration notices thinking it is somehow secure (and they are very fucking wrong) but nobody blocks all ICMP, except for very very dim witted humans, and only on endpoint nodes.

DamnOregonian ( 963763 ) writes:
Re: ( Score: 3 )

That's hilarious... I am *the guy* who runs the network. I am our senior network engineer. Every line in every router -- mine.

You have no idea what you're talking about, at any level. "disabled ICMP" - state statement alone requires such ignorance to make that I'm not sure why I'm even replying to ignorant ass.

DamnOregonian ( 963763 ) writes:
Re: ( Score: 3 )

Nonsense. I conceded that morons may actually go through the work to totally break their PMTUD, IP error signaling channels, and make their nodes "invisible"

I understand "networking" at a level I'm pretty sure you only have a foggy understanding of. I write applications that require layer-2 packet building all the way up to layer-4.

In short, he's a moron. I have reason to suspect you might be, too.

DamnOregonian ( 963763 ) writes:
Re: ( Score: 3 )

A CDS is MAC. Turning off ICMP toward people who aren't allowed to access your node/network is understandable. They can't get anything else though, why bother supporting the IP control channel? CDS does *not* say turn off ICMP globally. I deal with CDS, SSAE16 SOC 2, and PCI compliance daily. If your CDS solution only operates with a layer-4 ACL, it's a pretty simple model, or You're Doing It Wrong (TM)

nyet ( 19118 ) writes:
Re: ( Score: 3 )

> I'm not a network person

IOW, nothing you say about networking should be taken seriously.

kevmeister ( 979231 ) , Sunday May 27, 2018 @05:47PM ( #56685234 ) Homepage
Re:So ( Score: 4 , Insightful)

No, TCP/IP is not working fine. It's broken and is costing you performance and $$$. But it is not evident because TCP/IP is very good about dealing with broken networks, like yours.

The problem is that doing this requires things like packet fragmentation, which greatly increases router CPU load and reduces the maximum PPS of your network, as well as resulting in dropped packets requiring re-transmission; it may also result in window collapse followed by slow-start, and though rapid recovery mitigates much of this, it's still not free.

It's another example of security by stupidity which seldom provides security, but always buys added cost.

Hylandr ( 813770 ) writes:
Re: ( Score: 3 )

As a server engineer I am experiencing this with our network team right now.

Do you have some reading that I might be able to further educate myself? I would like to be able to prove to the directors why disabling ICMP on the network may be the cause of our issues.

Zaelath ( 2588189 ) , Sunday May 27, 2018 @07:51PM ( #56685758 )
Re:So ( Score: 4 , Informative)

A brief read suggests this is a good resource: https://john.albin.net/essenti... [albin.net]

Bing Tsher E ( 943915 ) , Sunday May 27, 2018 @01:22PM ( #56683792 ) Journal
Re: Denying ICMP echo @ server/workstation level t ( Score: 5 , Insightful)

Linux has one of the few IP stacks that isn't derived from the BSD stack, which in the industry is considered the reference design. Instead, for Linux, a new stack with its own bugs and peculiarities was cobbled up.

Reference designs are a good thing to promote interoperability. As far as TCP/IP is concerned, linux is the biggest and ugliest stepchild. A theme that fits well into this whole discussion topic, actually.

[Jan 10, 2019] saferm Safely remove files, moving them to GNOME/KDE trash instead of deleting by Eemil Lagerspetz

Jan 10, 2019 | github.com
#!/bin/bash
##
## saferm.sh
## Safely remove files, moving them to GNOME/KDE trash instead of deleting.
## Made by Eemil Lagerspetz
## Login   <vermind@drache>
## 
## Started on  Mon Aug 11 22:00:58 2008 Eemil Lagerspetz
## Last update Sat Aug 16 23:49:18 2008 Eemil Lagerspetz
##

version="1.16";

## flags (change these to change default behaviour)
recursive="" # do not recurse into directories by default
verbose="true" # set verbose by default for inexperienced users.
force="" #disallow deleting special files by default
unsafe="" # do not behave like regular rm by default

## possible flags (recursive, verbose, force, unsafe)
# don't touch this unless you want to create/destroy flags
flaglist="r v f u q"

# Colours
blue='\e[1;34m'
red='\e[1;31m'
norm='\e[0m'

## trashbin definitions
# this is the same for newer KDE and GNOME:
trash_desktops="$HOME/.local/share/Trash/files"
# if neither is running:
trash_fallback="$HOME/Trash"

# use .local/share/Trash?
use_desktop=$( ps -U $USER | grep -E "gnome-settings|startkde|mate-session|mate-settings|mate-panel|gnome-shell|lxsession|unity" )

# mounted filesystems, for avoiding cross-device move on safe delete
filesystems=$( mount | awk '{print $3; }' )

if [ -n "$use_desktop" ]; then
    trash="${trash_desktops}"
    infodir="${trash}/../info";
    for k in "${trash}" "${infodir}"; do
        if [ ! -d "${k}" ]; then mkdir -p "${k}"; fi
    done
else
    trash="${trash_fallback}"
fi

usagemessage() {
        echo -e "This is ${blue}saferm.sh$norm $version. LXDE and Gnome3 detection.
    Will ask to unsafe-delete instead of cross-fs move. Allows unsafe (regular rm) delete (ignores trashinfo).
    Creates trash and trashinfo directories if they do not exist. Handles symbolic link deletion.
    Does not complain about different user any more.\n";
        echo -e "Usage: ${blue}/path/to/saferm.sh$norm [${blue}OPTIONS$norm] [$blue--$norm] ${blue}files and dirs to safely remove$norm"
        echo -e "${blue}OPTIONS$norm:"
        echo -e "$blue-r$norm      allows recursively removing directories."
        echo -e "$blue-f$norm      Allow deleting special files (devices, ...)."
  echo -e "$blue-u$norm      Unsafe mode, bypass trash and delete files permanently."
        echo -e "$blue-v$norm      Verbose, prints more messages. Default in this version."
  echo -e "$blue-q$norm      Quiet mode. Opposite of verbose."
        echo "";
}

detect() {
    if [ ! -e "$1" ]; then fs=""; return; fi
    path=$(readlink -f "$1")
    for det in $filesystems; do
        match=$( echo "$path" | grep -oE "^$det" )
        if [ -n "$match" ]; then
            if [ ${#det} -gt ${#fs} ]; then
                fs="$det"
            fi
        fi
    done
}


trashinfo() {
#gnome: generate trashinfo:
        bname=$( basename -- "$1" )
    fname="${trash}/../info/${bname}.trashinfo"
    cat > "${fname}" << EOF
[Trash Info]
Path=$PWD/${1}
DeletionDate=$( date +%Y-%m-%dT%H:%M:%S )
EOF
}

setflags() {
    for k in $flaglist; do
        reduced=$( echo "$1" | sed "s/$k//" )
        if [ "$reduced" != "$1" ]; then
            flags_set="$flags_set $k"
        fi
    done
  for k in $flags_set; do
        if [ "$k" == "v" ]; then
            verbose="true"
        elif [ "$k" == "r" ]; then 
            recursive="true"
        elif [ "$k" == "f" ]; then 
            force="true"
        elif [ "$k" == "u" ]; then 
            unsafe="true"
        elif [ "$k" == "q" ]; then 
    unset verbose
        fi
  done
}

performdelete() {
                        # "delete" = move to trash
                        if [ -n "$unsafe" ]
                        then
                          if [ -n "$verbose" ];then echo -e "Deleting $red$1$norm"; fi
                    #UNSAFE: permanently remove files.
                    rm -rf -- "$1"
                        else
                          if [ -n "$verbose" ];then echo -e "Moving $blue$k$norm to $red${trash}$norm"; fi
                    mv -b -- "$1" "${trash}" # moves and backs up old files
                        fi
}

askfs() {
  detect "$1"
  if [ "${fs}" != "${tfs}" ]; then
    unset answer;
    until [ "$answer" == "y" -o "$answer" == "n" ]; do
      echo -e "$blue$1$norm is on $blue${fs}$norm. Unsafe delete (y/n)?"
      read -n 1 answer;
    done
    if [ "$answer" == "y" ]; then
      unsafe="yes"
    fi
  fi
}

complain() {
  msg=""
  if [ ! -e "$1" -a ! -L "$1" ]; then # does not exist
    msg="File does not exist:"
        elif [ ! -w "$1" -a ! -L "$1" ]; then # not writable
    msg="File is not writable:"
        elif [ ! -f "$1" -a ! -d "$1" -a -z "$force" ]; then # Special or sth else.
        msg="Is not a regular file or directory (and -f not specified):"
        elif [ -f "$1" ]; then # is a file
    act="true" # operate on files by default
        elif [ -d "$1" -a -n "$recursive" ]; then # is a directory and recursive is enabled
    act="true"
        elif [ -d "$1" -a -z "${recursive}" ]; then
                msg="Is a directory (and -r not specified):"
        else
                # not file or dir. This branch should not be reached.
                msg="No such file or directory:"
        fi
}

asknobackup() {
  unset answer
        until [ "$answer" == "y" -o "$answer" == "n" ]; do
          echo -e "$blue$k$norm could not be moved to trash. Unsafe delete (y/n)?"
          read -n 1 answer
        done
        if [ "$answer" == "y" ]
        then
          unsafe="yes"
          performdelete "${k}"
          ret=$?
                # Reset temporary unsafe flag
          unset unsafe
          unset answer
        else
          unset answer
        fi
}

deletefiles() {
  for k in "$@"; do
          fdesc="$blue$k$norm";
          complain "${k}"
          if [ -n "$msg" ]
          then
                  echo -e "$msg $fdesc."
    else
        #actual action:
        if [ -z "$unsafe" ]; then
          askfs "${k}"
        fi
                  performdelete "${k}"
                  ret=$?
                  # Reset temporary unsafe flag
                  if [ "$answer" == "y" ]; then unset unsafe; unset answer; fi
      #echo "MV exit status: $ret"
      if [ ! "$ret" -eq 0 ]
      then 
        asknobackup "${k}"
      fi
      if [ -n "$use_desktop" ]; then
          # generate trashinfo for desktop environments
        trashinfo "${k}"
      fi
    fi
        done
}

# Make trash if it doesn't exist
if [ ! -d "${trash}" ]; then
    mkdir "${trash}";
fi

# find out which flags were given
afteropts=""; # boolean for end-of-options reached
for k in "$@"; do
        # if starts with dash and before end of options marker (--)
        if [ "${k:0:1}" == "-" -a -z "$afteropts" ]; then
                if [ "${k:1:2}" == "-" ]; then # if end of options marker
                        afteropts="true"
                else # option(s)
                    setflags "$k" # set flags
                fi
        else # not starting with dash, or after end-of-opts
                files[++i]="$k"
        fi
done

if [ -z "${files[1]}" ]; then # no parameters?
        usagemessage # tell them how to use this
        exit 0;
fi

# Which fs is trash on?
detect "${trash}"
tfs="$fs"

# do the work
deletefiles "${files[@]}"



[Jan 08, 2019] Bind DNS threw a (network unreachable) error CentOS

Jan 08, 2019 | www.reddit.com

submitted 11 days ago by mr-bope

Bind 9 on my CentOS 7.6 machine threw this error:
error (network unreachable) resolving './DNSKEY/IN': 2001:7fe::53#53
error (network unreachable) resolving './NS/IN': 2001:7fe::53#53
error (network unreachable) resolving './DNSKEY/IN': 2001:500:a8::e#53
error (network unreachable) resolving './NS/IN': 2001:500:a8::e#53
error (FORMERR) resolving './NS/IN': 198.97.190.53#53
error (network unreachable) resolving './DNSKEY/IN': 2001:dc3::35#53
error (network unreachable) resolving './NS/IN': 2001:dc3::35#53
error (network unreachable) resolving './DNSKEY/IN': 2001:500:2d::d#53
error (network unreachable) resolving './NS/IN': 2001:500:2d::d#53
managed-keys-zone: Unable to fetch DNSKEY set '.': failure

What does it mean? Can it be fixed?

And is it at all related with DNSSEC cause I cannot seem to get it working whatsoever.

cryan7755, 11 days ago (1 child)
Looks like failure to reach ipv6 addressed NS servers. If you don't utilize ipv6 on your network then this should be expected.
knobbysideup, 11 days ago (0 children)
Can be dealt with by adding
#/etc/sysconfig/named
OPTIONS="-4"

[Jan 01, 2019] Re: customize columns in single panel view

Jun 12, 2017 | mail.gnome.org
On 6/12/17, Karel <lists vcomp ch> wrote:
Hello,

Is it possible to customize the columns in the single panel view ?

For my default (two panel) view, I have customized it using:

 -> Listing Mode
   (*) User defined:
      half type name | size:15 | mtime

however, when I switch to the single panel view, there are different
columns (obviously):

  Permission   Nl   Owner   Group   Size   Modify time   Name

For instance, I need to change the width of "Size" to 15.
No, you can't change the format of the "Long" listing-mode.

(You can make the "User defined" listing-mode display in one panel (by
changing "half" to "full"), but this is not what you want.)

So, you have two options:

(1) Modify the source code (search panel.c for "full perm space" and
tweak it); or:

(2) Use mc^2. It allows you to do this. (It already comes with a
snippet that enlarges the "Size" field a bit so there'd be room for
the commas (or other locale-dependent formatting) it adds. This makes
reading long numbers much easier.)

[Jan 01, 2019] Re- Help- meaning of the panelize command in left-right menus

Feb 17, 2017 | mail.gnome.org


On Thu, Feb 16, 2017 at 01:25:22PM +1300, William Kimber wrote:
Briefly,  if you do a search over several directories you can put all those
files into a single panel. Not withstanding that they are from different
directories.
I'm not sure I understand what you mean here; anyway I noticed that if you do a
search using the "Find file" (M-?) command, choose "Panelize" (at the bottom
of the "Find File" popup window), then change to some other directory (thus
exiting from panelized mode), if you now choose Left -> Panelize, you can recall
the panelized view of the last "Find file" results. Is this what you mean?

However this seems to work only with panelized results coming from the
"Find file" command, not with results from the "External panelize" command:
if I change directory, and then choose Left -> Panelize I get an empty panel.
Is this a bug?

Cri



[Jan 01, 2019] %f macro in mcedit

Jan 01, 2019 | mail.gnome.org

    
Hi!
My mc version:
$ mc --version
GNU Midnight Commander 4.8.19
System: Fedora 24

I just want to tell you that %f macro in mcedit is not correct. It
contains the current file name that is selected in the panel but not
the actual file name that is opened in mcedit.

I created the mcedit item to run C++ program:
+= f \.cpp$
r       Run
    clear
    app_path=/tmp/$(uuidgen)
    if g++ -o $app_path "%f"; then
        $app_path
        rm $app_path
    fi
    echo 'Press any key to exit.'
    read -s -n 1

Imagine that I opened the file a.cpp in mcedit.
Then I pressed alt+` and switched to panel.
Then I selected (or even opened in mcedit) the file b.cpp.
Then I pressed alt+` and switched to mcedit with a.cpp.
Then I executed the "Run" item from user menu.
And... The b.cpp will be compiled and run. This is wrong! Why b.cpp???
I executed "Run" from a.cpp!

I propose you to do the new macros for mcedit.

%opened_file
- the file name that is opened in current instance of mcedit.

%opened_file_full_path
- as %opened_file but full path to that file.

I think that %opened_file may be not safe because the current
directory may be changed in mc panel. So it is better to use
%opened_file_full_path.

%opened_file_dir
- full path to directory where %opened_file is.

%save
- save opened file before executing the menu commands. May be useful
in some cases. For example I don't want to press F2 every time before
run changed code.

Thanks for the mc.
Best regards, Sergiy Vovk.

[Jan 01, 2019] Re- Setting left and right panel directories at startup

Jan 01, 2019 | mail.gnome.org

Re: Setting left and right panel directories at startup



Sorry, forgot to reply all.
I said that, personally, I would put ~/Documents in the directory hotlist and get there via C-\.

On Sun, Mar 18, 2018 at 5:38 PM, Keith Roberts < keith karsites net > wrote:

On 18/03/18 20:14, wwp wrote:

Hello Keith,

On Sun, 18 Mar 2018 19:14:33 +0000 Keith Roberts < keith karsites net > wrote:

Hi all,

I found this in /home/keith/.config/mc/panels. ini

[Dirs]
current_is_left=true
other_dir=/home/keith/Documents/

I'd like mc to open /home/keith/Documents/ in the left panel as well whenever I start mc up, so both panels are showing the /home/keith/Documents/ directory.

Is there some way to tell mc how to do this please?

I think you could use: `mc <path> <path>`, for instance:
`mc /home/keith/Documents/ /tmp`, but of course this requires you to know
the second path to open in addition to your ~/Documents. Not really
satisfying?

Regards,

Hi wwp,

Thanks for your suggestion and that seems to work OK - I just start mc with the following command:

mc ~/Documents

and both panes are opened at the ~Documents directories now which is fine.

Kind Regards,

Keith Roberts

[Jan 01, 2019] Mc2 by mooffie

Notable quotes:
"... Future Releases ..."
Jan 01, 2019 | midnight-commander.org

#3745 (Integration mc with mc2(Lua)) – Midnight Commander

Ticket #3745 (closed enhancement: invalid)

Opened 2 years ago

Last modified 2 years ago Integration mc with mc2(Lua)

Reported by: q19l405n5a Owned by:
Priority: major Milestone:
Component: mc-core Version: master
Keywords: Cc:
Blocked By: Blocking:
Branch state: no branch Votes for changeset:
Description I think it is necessary that the code bases of mc and mc2 correspond to each other. mooffie, can you check that patches from andrew_b merge easily with mc2, and if some patch conflicts with the mc2 code, hold those changes by writing about it in the corresponding ticket? zaytsev, can you help automate this (continuous integration, Travis and so on)? Sorry, but some words in Russian:

Guys, I am not trying to give orders; you are doing great work. I just wanted to point out that Mooffie is trying to keep his code up to date, but seeing how he runs into problems out of nowhere, I am afraid his enthusiasm may fade.
Change History comment:1 Changed 2 years ago by zaytsev-work

​ https://mail.gnome.org/archives/mc-devel/2016-February/msg00021.html

I have asked what plans does mooffie have for mc 2 sometime ago and never got an answer. Note that I totally don't blame him for that. Everyone here is working at their own pace. Sometimes I disappear for weeks or months, because I can't get a spare 5 minutes not even speaking of several hours due to the non-mc related workload. I hope that one day we'll figure out the way towards merging it, and eventually get it done.

In the mean time, he's working together with us by offering extremely important and well-prepared contributions, which are a pleasure to deal with and we are integrating them as fast as we can, so it's not like we are at war and not talking to each other.

Anyways, creating random noise in the ticket tracking system will not help to advance your cause. The only way to influence the process is to invest serious amount of time in the development.
comment:2 Changed 2 years ago by zaytsev

[Jan 01, 2019] mc - How can I set the default (user defined) listing mode in Midnight Commander- - Unix Linux Stack Exchange

Jan 01, 2019 | unix.stackexchange.com


papaiatis, Jul 14, 2016 at 11:51

I defined my own listing mode and I'd like to make it permanent so that on the next mc start my defined listing mode will be set. I found no configuration file for mc.


You have probably Auto save setup turned off in Options->Configuration menu.

You can save the configuration manually by Options->Save setup .

Panels setup is saved to ~/.config/mc/panels.ini .

[Jan 01, 2019] Lua-l - [ANN] mc^2

Jan 01, 2019 | n2.nabble.com

Selected post Oct 15, 2015; 12:13pm [ANN] mc^2

Mooffie 11 posts mc^2 is a fork of Midnight Commander with Lua support:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/

...but let's skip the verbiage and go directly to the screenshots:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/guide/SCREENSHOTS.md.html

Now, I assume most of you here aren't users of MC.

So I won't bore you with description of how Lua makes MC a better
file-manager. Instead, I'll just list some details that may interest
any developer who works on extending some application.

And, as you'll shortly see, you may find mc^2 useful even if you
aren't a user of MC!

So, some interesting details:

* Programmer Goodies

- You can restart the Lua system from within MC.

- Since MC has a built-in editor, you can edit Lua code right there
and restart Lua. So it's somewhat like a live IDE:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/images/screenshots/game.png

- It comes with programmer utilities: regular expressions; global scope
protected by default; good pretty printer for Lua tables; calculator
where you can type Lua expressions; the editor can "lint" Lua code (and
flag uses of global variables).

- It installs a /usr/bin/mcscript executable letting you use all the
goodies from "outside" MC:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/guide/60-standalone.md.html

* User Interface programming (UI)

- You can program a UI (user interface) very easily. The API is fun
yet powerful. It has some DOM/JavaScript borrowings in it: you can
attach functions to events like on_click, on_change, etc. The API
uses "properties", so your code tends to be short and readable:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/guide/40-user-interface.md.html

- The UI has a "canvas" object letting you draw your own stuff. The
system is so fast you can program arcade games. Pacman, Tetris,
Digger, whatever:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/classes/ui.Canvas.html

Need timers in your game? You've got them:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/modules/timer.html

- This UI API is an ideal replacement for utilities like dialog(1).
You can write complex frontends to command-line tools with ease:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/images/screenshots/frontend-scanimage.png

- Thanks to the aforementioned /usr/bin/mcscript, you can run your
games/frontends from "outside" MC:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/images/screenshots/standalone-game.png

* Misc

- You can compile it against Lua 5.1, 5.2, 5.3, or LuaJIT.

- Extensive documentation.

[Jan 01, 2019] Re change default configuration

Jan 01, 2019 | mail.gnome.org
On Fri, 27 Jul 2018 17:01:17 +0300 Sergey Naumov via mc-devel wrote:
I'm curious whether there is a way to change default configuration that is
generated when user invokes mc for the first time?

For example, I want "use_internal_edit" to be true by default instead of
false for any new user.
In vanilla mc the initial value of use_internal_edit is true. Some distros
(Debian and some others) change this to false.
If there is a way to do it, then is it possible to just use lines that I
want to change, not the whole configuration, say

[Midnight-Commander]
use_internal_edit=true
Before first run, ~/.config/mc/ini doesn't exist.
If ~/.config/mc/ini doesn't exist, /etc/mc/mc.ini is used.
If /etc/mc/mc.ini doesn't exist, /usr/share/mc/mc.ini is used.
You can create one of these files with required default settings set.

Unfortunately, there is no info about /etc/mc/mc.ini in the man page.
I'll fix that at this weekend.
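Based on the fallback order described above, a site-wide default could be seeded like this (a sketch only; the [Midnight-Commander] section is taken from the question itself):

# as root: make the internal editor the default for users without ~/.config/mc/ini
mkdir -p /etc/mc
cat > /etc/mc/mc.ini <<'EOF'
[Midnight-Commander]
use_internal_edit=true
EOF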

[Jan 01, 2019] Re does mc support sftp

Jan 01, 2019 | mail.gnome.org

Yes, it does, if it has been compiled accordingly.

http://www.linux-databook.info/wp-content/uploads/2015/04/MC-02.jpeg

On Thu, 15 Nov 2018, Fourhundred Thecat wrote:

Hello,

I need to connect to server where I don't have shell access (no ssh)

the server only allows sftp. I can connect with winscp, for instance.

does mc support sftp  as well ?

thanks,
_______________________________________________
mc mailing list
https://mail.gnome.org/mailman/listinfo/mc

--
Sincerely yours,
Yury V. Zaytsev

[Jan 01, 2019] Re: Ctrl+J in mc

Jan 01, 2019 | mail.gnome.org

Thomas Zajic

* Ivan Pizhenko via mc-devel, 28.10.18 21:52

Hi, I'm wondering why following happens:
In Ubuntu and FreeBSD, when I am pressing Ctrl+J in MC, it puts name
of file on which file cursor is currently on. But this doesn't work in
CentOS and RHEL.
How to fix that in CentOS and RHEL?
Ivan.
Never heard about Ctrl+j, I always used Alt+Enter for that purpose.
Alt+a does the same thing for the path, BTW (just in case you didn't
know). :-)

HTH,
Thomas
_______________________________________________
mc-devel mailing list
https://mail.gnome.org/mailman/listinfo/mc-devel

[Jan 01, 2019] IBM Systems Magazine - All Hail the Midnight Commander! by Jesse Gorzinski

Notable quotes:
"... Sometimes, though, a tool is just too fun to pass up; such is the case for Midnight Commander! Of course, we also had numerous requests for it, and that helped, too! Today, let's explore this useful utility. ..."
Nov 27, 2018 | ibmsystemsmag.com

Quite often, I'm asked how open source deliveries are prioritized at IBM. The answer isn't simple. Even after we estimate the cost of a project, there are many factors to consider. For instance, does it enable a specific solution to run? Does it expand a programming language's abilities? Is it highly-requested by the community or vendors?

Sometimes, though, a tool is just too fun to pass up; such is the case for Midnight Commander! Of course, we also had numerous requests for it, and that helped, too! Today, let's explore this useful utility.

... ... ...

Getting Started
Installing Midnight Commander is easy. Once you have the yum package manager , use it to install the 'mc' package.

In order for the interface to display properly, you'll want to set the LC_ALL environment variable to a UTF-8 locale. For instance, "EN_US.UTF-8" would work just fine. You can have this done automatically by putting the following lines in your $HOME/.profile file (or $HOME/.bash_profile):

LC_ALL=EN_US.UTF-8
export LC_ALL

If you haven't done so already, you might also want to make sure the PATH environment variable is set up to use the new open source tools.
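
For instance, a small sketch of what that might look like in the same $HOME/.profile (the /QOpenSys/pkgs/bin path is the one used for mc below):

PATH=/QOpenSys/pkgs/bin:$PATH
export PATH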

Once that's done, you can run 'mc -c' from your SSH terminal . (You didn't expect this to work from QSH, did you?) If you didn't set up your environment variables, you can just run 'LC_ALL=EN_US.UTF-8 /QOpenSys/pkgs/bin/mc -c' instead. I recommend the '-c' option because it enables colors.

A Community Effort
As with many things open source, IBM was not the only contributor. In this particular case, a "tip of the hat" goes to Jack Woehr. You may remember Jack as the creator of Ublu , an open source programming language for IBM i. He also hosts his own RPM repository with lynx, a terminal-based web browser (perhaps a future topic?). The initial port of Midnight Commander was collaboratively done with work from both parties. Jack also helped with quality assurance and worked with project owners to upstream all code changes. In fact, the main code stream for Midnight Commander can now be built for IBM i with no modifications.

Now that we've delivered hundreds of open source packages, it seems like there's something for everybody. This seems like one of those tools that is useful for just about anyone. And with a name like "Midnight Commander," how can you go wrong? Try it today!

[Jan 01, 2019] NEWS-4.8.22 Midnight Commander

Looks like they fixed the sftp problems, and it is now usable.
Jan 01, 2019 | midnight-commander.org
Major changes since 4.8.21 cover the core, VFS, editor, viewer, diff viewer, and miscellaneous fixes; see the full list of closed tickets for this release on midnight-commander.org.

[Dec 24, 2018] Phone in sick: its a small act of rebellion against wage slavery

Notable quotes:
"... By far the biggest act of wage slavery rebellion, don't buy shit. The less you buy, the less you need to earn. Holidays by far the minority of your life should not be a desperate escape from the majority of your life. Spend less, work less and actually really enjoy living more. ..."
"... How about don't shop at Walmart (they helped boost the Chinese economy while committing hari kari on the American Dream) and actually engaging in proper labour action? Calling in sick is just plain childish. ..."
"... I'm all for sticking it to "the man," but when you call into work for a stupid reason (and a hangover is a very stupid reason), it is selfish, and does more damage to the cause of worker's rights, not less. I don't know about where you work, but if I call in sick to my job, other people have to pick up my slack. I work for a public library, and we don't have a lot of funds, so we have the bear minimum of employees we can have and still work efficiently. As such, if anybody calls in, everyone else, up to and including the library director, have to take on more work. ..."
Oct 24, 2015 | The Guardian

"Phoning in sick is a revolutionary act." I loved that slogan. It came to me, as so many good things did, from Housmans, the radical bookshop in King's Cross. There you could rummage through all sorts of anarchist pamphlets and there I discovered, in the early 80s, the wondrous little magazine Processed World. It told you basically how to screw up your workplace. It was smart and full of small acts of random subversion. In many ways it was ahead of its time as it was coming out of San Francisco and prefiguring Silicon Valley. It saw the machines coming. Jobs were increasingly boring and innately meaningless. Workers were "data slaves" working for IBM ("Intensely Boring Machines").

What Processed World was doing was trying to disrupt the identification so many office workers were meant to feel with their management, not through old-style union organising, but through small acts of subversion. The modern office, it stressed, has nothing to do with human need. Its rebellion was about working as little as possible, disinformation and sabotage. It was making alienation fun. In 1981, it could not have known that a self-service till cannot ever phone in sick.

I was thinking of this today, as I wanted to do just that. I have made myself ill with a hangover. A hangover, I always feel, is nature's way of telling you to have a day off. One can be macho about it and eat your way back to sentience via the medium of bacon sandwiches and Maltesers. At work, one is dehydrated, irritable and only semi-present. Better, surely, though to let the day fall through you and dream away.

Having worked in America, though, I can say for sure that they brook no excuses whatsoever. When I was late for work and said things like, "My alarm clock did not go off", they would say that this was not a suitable explanation, which flummoxed me. I had to make up others. This was just to work in a shop.

This model of working – long hours, very few holidays, few breaks, two incomes needed to raise kids, crazed loyalty demanded by huge corporations, the American way – is where we're heading. Except now the model is even more punishing. It is China. We are expected to compete with an economy whose workers are often closer to indentured slaves than anything else.

This is what striving is, then: dangerous, demoralising, often dirty work. Buckle down. It's the only way forward, apparently, which is why our glorious leaders are sucking up to China, which is immoral, never mind ridiculously short-term thinking.

So again I must really speak up for the skivers. What we have to understand about austerity is its psychic effects. People must have less. So they must have less leisure, too. The fact is life is about more than work and work is rapidly changing. Skiving in China may get you killed but here it may be a small act of resistance, or it may just be that skivers remind us that there is meaning outside wage-slavery.

Work is too often discussed by middle-class people in ways that are simply unrecognisable to anyone who has done crappy jobs. Much work is not interesting and never has been. Now that we have a political and media elite who go from Oxbridge to working for a newspaper or a politician, a lot of nonsense is spouted. These people have not cleaned urinals on a nightshift. They don't sit lonely in petrol stations manning the till. They don't have to ask permission for a toilet break in a call centre. Instead, their work provides their own special identity. It is very important.

Low-status jobs, like caring, are for others. The bottom-wipers of this world do it for the glory, I suppose. But when we talk of the coming automation that will reduce employment, bottom-wiping will not be mechanised. Nor will it be romanticised, as old male manual labour is. The mad idea of reopening the coal mines was part of the left's strange notion of the nobility of labour. Have these people ever been down a coal mine? Would they want that life for their children?

Instead we need to talk about the dehumanising nature of work. Bertrand Russell and Keynes thought our goal should be less work, that technology would mean fewer hours.

Far from work giving meaning to life, in some surveys 40% of us say that our jobs are meaningless. Nonetheless, the art of skiving is verboten as we cram our children with ever longer hours of school and homework. All this striving is for what exactly? A soul-destroying job?

Just as education is decided by those who loved school, discussions about work are had by those to whom it is about more than income.

The parts of our lives that are not work – the places we dream or play or care, the space we may find creative – all these are deemed outside the economy. All this time is unproductive. But who decides that?

Skiving work is bad only to those who know the price of everything and the value of nothing.

So go on: phone in sick. You know you want to.

friedad 23 Oct 2015 18:27

We now exist in a society in which a Fear Cloud is wrapped around each citizen. Our proud history of unions and labor, fighting for decent wages and living conditions for all citizens and mostly achieving these aims (a history which should be taught to every child in every school in this country), is now gradually but surely being eroded by ruthless speculators in government, and that is what future generations are inheriting. The workforce is in fear of taking a sick day, the young looking for work are in fear of speaking out against diminishing rewards; this 21st century truly is the Century of Fear. And how is this fear denied? With mind-blowing drugs, whether alcohol, prescription drugs or illicit drugs: a society in denial. We do not require a heavenly object to destroy us; a few soulless monsters in our midst are master manipulators, getting closer and closer to accomplishing their aim of having zombies do their bidding. Need a kidney? No worries, a zombie dishwasher is handy for one. Oh wait, that time is already here.

Hemulen6 23 Oct 2015 15:06

Oh join the real world, Suzanne! Many companies now have a limit to how often you can be sick. In the case of the charity I work for it's 9 days a year. I overstepped it, I was genuinely sick, and was hauled up in front of Occupational Health. That will now go on my record and count against me. I work for a cancer care charity. Irony? Surely not.

AlexLeo -> rebel7 23 Oct 2015 13:34

Which is exactly my point. You compete on relevant job skills and quality of your product, not what school you have attended.

Yes, there are thousands, tens of thousands of folks here around San Jose who barely speak English but are smart and hard-working as hell, and it takes them a few years to get to 150-200K per year. Many of them get to 300-400K if they come from strong schools in their countries of origin, compared to the 10K or so where they came from, and probably more than the whining readership here.

This is really difficult to swallow for the Brits back in Britain, isn't it? Those who have moved over have experienced the type of social mobility unthinkable in Britain, but they have had to work hard to get to 300K-700K per year, much better than the 50-100K their parents used to make back in GB. These are averages based on personal interactions with say 50 Brits in the last 15+ years, all employed in the Silicon Valley in very different jobs and roles.

Todd Owens -> Scott W 23 Oct 2015 11:00

I get what you're saying and I agree with a lot of what you said. My only gripe is most employees do not see an operation from a business owner or managerial / financial perspective. They don't understand the costs associated with their performance or lack thereof. I've worked on a lot of projects that were operating at a loss for a future payoff. When someone decides they don't want to do the work they're contracted to perform, that can have a cascading effect on the entire company.

All in all, what's being described is for the most part misguided, because most people are not in the position to evaluate the particulars, or don't even care to. So saying you should do this to accomplish that is bullshit, because it's rarely such a simple equation. If anything this type of tactic will lead to MORE loss and less money for payroll.


weematt -> Barry1858 23 Oct 2015 09:04

Sorry you just can't have a 'nicer' capitalism.

War ( business by other means) and unemployment ( you can't buck the market), are inevitable concomitants of capitalist competition over markets, trade routes and spheres of interests. (Remember the war science of Nagasaki and Hiroshima from the 'good guys' ?)
"..capital comes dripping from head to foot, from every pore, with blood and dirt". (Marx)

You can't have full employment, or even the 'Right to Work'.

There is always ,even in boom times a reserve army of unemployed, to drive down wages. (If necessary they will inject inflation into the economy)
Unemployment is currently 5.5 percent or 1,860,000 people. If their "equilibrium rate" of unemployment is 4% rather than 5% this would still mean 1,352,000 "need be unemployed". The government don't want these people to find jobs as it would strengthen workers' bargaining position over wages, but that doesn't stop them harassing them with useless and petty form-filling, reporting to the so-called "job centre" just for the sake of it, calling them scroungers and now saying they are mentally defective.
Government is 'over' you not 'for' you.

Governments do not exist to ensure 'fair do's' but to manage social expectations with the minimum of dissent, commensurate with the needs of capitalism in the interests of profit.

Worker participation amounts to self managing workers self exploitation for the maximum of profit for the capitalist class.

Exploitation takes place at the point of production.

" Instead of the conservative motto, 'A fair day's wage for a fair day's work!' they ought to inscribe on their banner the revolutionary watchword, 'Abolition of the wages system!'"

Karl Marx [Value, Price and Profit]

John Kellar 23 Oct 2015 07:19

Fortunately, as a retired veteran I don't have to worry about phoning in sick. However, during my Air Force days, if you were sick you had to get yourself to the Base Medical Section and prove to a medical officer that you were sick. If you convinced the medical officer of your sickness, then you may have been lucky to receive one or two days of sick leave. For those who were very sick or incapable of getting themselves to Base Medical, an ambulance would be sent - promptly.


Rchrd Hrrcks -> wumpysmum 23 Oct 2015 04:17

The function of civil disobedience is to cause problems for the government. Let's imagine that we could get 100,000 people to agree to phone in sick on a particular date in protest at austerity etc. Leaving aside the direct problems to the economy that this would cause. It would also demonstrate a willingness to take action. It would demonstrate a capability to organise mass direct action. It would demonstrate an ability to bring people together to fight injustice. In and of itself it might not have much impact, but as a precedent set it could be the beginning of something massive, including further acts of civil disobedience.


wumpysmum Rchrd Hrrcks 23 Oct 2015 03:51

There's already a form of civil disobedience called industrial action, which the govt are currently attacking by attempting to change statute. Random sickies as per my post above are certainly not the answer in the public sector at least, they make no coherent political point just cause problems for colleagues. Sadly too in many sectors and with the advent of zero hours contracts sickies put workers at risk of sanctions and lose them earnings.


Alyeska 22 Oct 2015 22:18

I'm American. I currently have two jobs and work about 70 hours a week, and I get no paid sick days. In fact, the last time I had a job with a paid sick day was 2001. If I could afford a day off, you think I'd be working 70 hours a week?

I barely make rent most months, and yes... I have two college degrees. When I try to organize my coworkers to unionize for decent pay and benefits, they all tell me not to bother.... they are too scared of getting on management's "bad side" and "getting in trouble" (yes, even though the law says management can't retaliate.)

Unions are different in the USA than in the UK. The workforce has to take a vote to unionize the company workers; you can't "just join" a union here. That's why our pay and working conditions have gotten worse, year after year.


rtb1961 22 Oct 2015 21:58

By far the biggest act of wage slavery rebellion, don't buy shit. The less you buy, the less you need to earn. Holidays by far the minority of your life should not be a desperate escape from the majority of your life. Spend less, work less and actually really enjoy living more.

Pay less attention to advertising and more attention to the enjoyable simplicity of life, of real direct human relationships, all of them, the ones in passing where you wish a stranger well, chats with service staff to make their life better as well as your own, exchange thoughts and ideas with others, be a human being and share humanity with other human beings.

Mkjaks 22 Oct 2015 20:35

How about don't shop at Walmart (they helped boost the Chinese economy while committing hara-kiri on the American Dream) and actually engaging in proper labour action? Calling in sick is just plain childish.

toffee1 22 Oct 2015 19:13

It is only considered productive if it feeds the beast, that is, contributes to the accumulation of capital so that the beast can have more power over us. The issue here is wage labor. Some 93 percent of the U.S. working population performs wage labor (see the BLS site), the highest proportion in any society that has ever existed. Under the wage labor (employment) contract, the worker gives up his/her decision-making autonomy. The worker accepts the full command of his/her employer during the labor process. The employer directs and commands the labor process to achieve the goals set by himself. Compare this with, for example, a self-employed person providing a service (for example, a plumber). In this case, the customer describes the problem to the service provider, but the service provider makes all the decisions on how to organize and apply his labor to solve the problem. Or compare it to a democratically organized coop, where workers collectively make all the decisions about where, how and what to produce. Under the present economic system, a great majority of us are condemned to work in large corporations performing wage labor. The system of wage labor, stripping us of autonomy over our own labor, creates all the misery in our present world through alienation. Men and women lose their humanity, alienated from their own labor. Outside the world of wage labor, labor can be a source of self-realization and true freedom. Labor can be real fulfillment and love. Labor, together with our capacity to love, makes us human. The bourgeoisie dehumanizes us by stealing our humanity. The bourgeoisie, having sold its soul to the beast, attempts to turn us into ever-consuming machines for the accumulation of capital.

patimac54 -> Zach Baker 22 Oct 2015 17:39

Well said. Most retail employers have cut staff to the minimum possible to keep the stores open so if anyone is off sick, it's the devil's own job trying to just get customers served. Making your colleagues work even harder than they normally do because you can't be bothered to act responsibly and show up is just plain selfish.
And sorry, Suzanne, skiving work is nothing more than an act of complete disrespect for those you work with. If you don't understand that, try getting a proper job for a few months and learn how to exercise some self control.

TettyBlaBla -> FranzWilde 22 Oct 2015 17:25

It's quite the opposite in government jobs where I am in the US. As the fiscal year comes to a close, managers look at their budgets and go on huge spending sprees, particularly for temp (zero hours in some countries) help and consultants. They fear if they don't spend everything or even a bit more, their spending will be cut in the next budget. This results in people coming in to do work on projects that have no point or usefulness, that will never be completed or even presented up the food chain of management, and ends up costing taxpayers a small fortune.

I did this one year at an Air Quality Agency's IT department while the paid employees sat at their desks watching portable televisions all day. It was truly demeaning.

oommph -> Michael John Jackson 22 Oct 2015 16:59

Thing is though, children - dependents to pay for - are the easiest way to keep yourself chained to work.

The homemaker model works as long as your spouse's employer retains them (and your spouse retains you in an era of 40% divorce).

You are just as dependent on an employer and "work" but far less in control of it now.


Zach Baker 22 Oct 2015 16:41

I'm all for sticking it to "the man," but when you call into work for a stupid reason (and a hangover is a very stupid reason), it is selfish, and does more damage to the cause of workers' rights, not less. I don't know about where you work, but if I call in sick to my job, other people have to pick up my slack. I work for a public library, and we don't have a lot of funds, so we have the bare minimum of employees we can have and still work efficiently. As such, if anybody calls in, everyone else, up to and including the library director, have to take on more work. If I found out one of my co-workers called in because of a hangover, I'd be pissed. You made the choice to get drunk, knowing that you had to work the following morning. Putting it into the same category of someone who is sick and may not have the luxury of taking off because of a bad employer is insulting.


[Dec 14, 2018] 10 of the best pieces of IT advice I ever heard

Dec 14, 2018 | www.techrepublic.com
  1. Learn to say "no"

    If you're new to the career, chances are you'll be saying "yes" to everything. However, as you gain experience and put in your time, the word "no" needs to creep into your vocabulary. Otherwise, you'll be exploited.

    Of course, you have to use this word with caution. Should the CTO approach and set a task before you, the "no" response might not be your best choice. But if you find end users-and friends-taking advantage of the word "yes," you'll wind up frustrated and exhausted at the end of the day.

  2. Be done at the end of the day

    I used to have a ritual at the end of every day. I would take off my watch and, at that point, I was done... no more work. That simple routine saved my sanity more often than not. I highly suggest you develop the means to inform yourself that, at some point, you are done for the day. Do not be that person who is willing to work through the evening and into the night... or you'll always be that person.

  3. Don't beat yourself up over mistakes made

    You are going to make mistakes. Some will be simple and can be quickly repaired. Others may lean toward the catastrophic. But when you finally call your IT career done, you will have made plenty of mistakes. Beating yourself up over them will prevent you from moving forward. Instead of berating yourself, learn from the mistakes so you don't repeat them.

  4. Always have something nice to say

    You work with others on a daily basis. Too many times I've watched IT pros become bitter, jaded people who rarely have anything nice or positive to say. Don't be that person. If you focus on the positive, people will be more inclined to enjoy working with you, companies will want to hire you, and the daily grind will be less "grindy."

  5. Measure twice, cut once

    How many times have you issued a command or clicked OK before you were absolutely sure you should? The old woodworking adage fits perfectly here. Considering this simple sentence-before you click OK-can save you from quite a lot of headache. Rushing into a task is never the answer, even during an emergency. Always ask yourself: Is this the right solution?

  6. At every turn, be honest

    I've witnessed engineers lie to avoid the swift arm of justice. In the end, however, you must remember that log files don't lie. Too many times there is a trail that can lead to the truth. When the CTO or your department boss discovers this truth, one that points to you lying, the arm of justice will be that much more forceful. Even though you may feel like your job is in jeopardy, or the truth will cause you added hours of work, always opt for the truth. Always.

  7. Make sure you're passionate about what you're doing

    Ask yourself this question: Am I passionate about technology? If not, get out now; otherwise, that job will beat you down. A passion for technology, on the other hand, will continue to drive you forward. Just know this: The longer you are in the field, the more likely that passion is to falter. To prevent that from happening, learn something new.

  8. Don't stop learning

    Quick-how many operating systems have you gone through over the last decade? No career evolves faster than technology. The second you believe you have something perfected, it changes. If you decide you've learned enough, it's time to give up the keys to your kingdom. Not only will you find yourself behind the curve, all those servers and desktops you manage could quickly wind up vulnerable to every new attack in the wild. Don't fall behind.

  9. When you feel your back against a wall, take a breath and regroup

    This will happen to you. You'll be tasked to upgrade a server farm and one of the upgrades will go south. The sweat will collect, your breathing will reach panic level, and you'll lock up like Windows Me. When this happens... stop, take a breath, and reformulate your plan. Strangely enough, it's that breath taken in the moment of panic that will help you survive the nightmare. If a single, deep breath doesn't help, step outside and take in some fresh air so that you are in a better place to change course.

  10. Don't let clients see you Google a solution

    This should be a no-brainer... but I've watched it happen far too many times. If you're in the middle of something and aren't sure how to fix an issue, don't sit in front of a client and Google the solution. If you have to, step away, tell the client you need to use the restroom and, once in the safety of a stall, use your phone to Google the answer. Clients don't want to know you're learning on their dime.

See also

  • [Dec 14, 2018] Blatant neoliberal propaganda about "booming US job market" by Danielle Paquette

    That's way too much hype even for WaPo presstitutes... The reality is that you can apply to 50 jobs and not get a single response.
    Dec 12, 2018 | www.latimes.com

    Economists report that workers are starting to act like millennials on Tinder: They're ditching jobs with nary a text. "A number of contacts said that they had been 'ghosted,' a situation in which a worker stops coming to work without notice and then is impossible to contact," the Federal Reserve Bank of Chicago noted in December's Beige Book report, which tracks employment trends.

    National data on economic "ghosting" is lacking. The term, which normally applies to dating, first surfaced on Dictionary.com in 2016. But companies across the country say silent exits are on the rise. Analysts blame America's increasingly tight labor market. Job openings have surpassed the number of seekers for eight straight months, and the unemployment rate has clung to a 49-year low of 3.7% since September. Janitors, baristas, welders, accountants, engineers -- they're all in demand, said Michael Hicks, a labor economist at Ball State University in Indiana. More people may opt to skip tough conversations and slide right into the next thing. "Why hassle with a boss and a bunch of out-processing," he said, "when literally everyone has been hiring?"

    Recruiters at global staffing firm Robert Half have noticed a 10% to 20% increase in ghosting over the last year, D.C. district President Josh Howarth said. Applicants blow off interviews. New hires turn into no-shows. Workers leave one evening and never return. "You feel like someone has a high level of interest, only for them to just disappear," Howarth said. Over the summer, woes he heard from clients emerged in his own life. A job candidate for a recruiter role asked for a day to mull over an offer, saying she wanted to discuss the terms with her spouse. Then she halted communication. "In fairness," Howarth said, "there are some folks who might have so many opportunities they're considering, they honestly forget."

    Keith Station, director of business relations at Heartland Workforce Solutions, which connects job hunters with companies in Omaha, said workers in his area are most likely to skip out on low-paying service positions. "People just fall off the face of the Earth," he said of the area, which has an especially low unemployment rate of 2.8%. Some employers in Nebraska are trying to head off unfilled shifts by offering apprentice programs that guarantee raises and additional training over time. "Then you want to stay and watch your wage grow," Station said.

    Other recruitment businesses point to solutions from China, where ghosting took off during the last decade's explosive growth. "We generally make two offers for every job because somebody doesn't show up," said Rebecca Henderson, chief executive of Randstad Sourceright, a talent acquisition firm. And if both hires stick around, she said, her multinational clients are happy to deepen the bench. Though ghosting in the United States does not yet require that level of backup planning, consultants urge employers to build meaningful relationships at every stage of the hiring process. Someone who feels invested in an enterprise is less likely to bounce, said Melissa and Johnathan Nightingale, who have written about leadership and dysfunctional management. "Employees leave jobs that suck," they said in an email. "Jobs where they're abused. Jobs where they don't care about the work. And the less engaged they are, the less need they feel to give their bosses any warning."

    Some employees are simply young and restless, said James Cooper, former manager of the Old Faithful Inn at Yellowstone National Park, where he said people ghosted regularly. A few of his staffers were college students who lived in park dormitories for the summer. "My favorite," he said, "was a kid who left a note on the floor in his dorm room that said, 'Sorry bros, had to ghost.' " Other ghosters describe an inner voice that just says: Nah. Zach Keel, a 26-year-old server in Austin, Texas, made the call last year to flee a combination bar and cinema after realizing he would have to clean the place until sunrise. More work, he calculated, was always around the corner. "I didn't call," Keel said. "I didn't show up. I figured: No point in feeling guilty about something that wasn't that big of an issue. Turnover is so high, anyway."

    [Dec 14, 2018] You apply for a job. You hear nothing. Here's what to do next

    Dec 14, 2018 | finance.yahoo.com

    But the more common situation is that applicants are ghosted by companies. They apply for a job and never hear anything in response, not even a rejection. In the U.S., companies are generally not legally obligated to deliver bad news to job candidates, so many don't.

    They also don't provide feedback, because it could open the company up to a legal risk if it shows that they decided against a candidate for discriminatory reasons protected by law such as race, gender or disability.

    Hiring can be a lengthy process, and rejecting 99 candidates is much more work than accepting one. But a consistently poor hiring process that leaves applicants hanging can cause companies to lose out on the best talent and even damage perception of their brand.

    Here's what companies can do differently to keep applicants in the loop, and how job seekers can know that it's time to cut their losses.


    What companies can do differently

    There are many ways that technology can make the hiring process easier for both HR professionals and applicants.

    Only about half of all companies get back to the candidates they're not planning to interview, Natalia Baryshnikova, director of product management on the enterprise product team at SmartRecruiters, tells CNBC Make It .

    "Technology has defaults, one change is in the default option," Baryshnikova says. She said that SmartRecruiters changed the default on its technology from "reject without a note" to "reject with a note," so that candidates will know they're no longer involved in the process.

    Companies can also use technology as a reminder to prioritize rejections. For the company, rejections are less urgent than hiring. But for a candidate, they are a top priority. "There are companies out there that get back to 100 percent of candidates, but they are not yet common," Baryshnikova says.

    How one company is trying to help

    WayUp was founded to make the process of applying for a job simpler.

    "The No. 1 complaint from candidates we've heard, from college students and recent grads especially, is that their application goes into a black hole," Liz Wessel, co-founder and CEO of WayUp, a platform that connects college students and recent graduates with employers, tells CNBC Make It .

    WayUp attempts to increase transparency in hiring by helping companies source and screen applicants, and by giving applicants feedback based on soft skills. They also let applicants know if they have advanced to the next round of interviewing within 24 hours.

    Wessel says that in addition to creating a better experience for applicants, WayUp's system helps companies address bias during the resume-screening processes. Resumes are assessed for hard skills up front, then each applicant participates in a phone screening before their application is passed to an employer. This ensures that no qualified candidate is passed over because their resume is different from the typical hire at an organization – something that can happen in a company that uses computers instead of people to scan resumes .

    "The companies we work with see twice as many minorities getting to offer letter," Wessel said.

    When you can safely assume that no news is bad news

    First, if you do feel that you're being ghosted by a company after sending in a job application, don't despair. No news could be good news, so don't assume right off the bat that silence means you didn't get the job.

    Hiring takes time, especially if you're applying for roles where multiple people could be hired, which is common in entry-level positions. It's possible that an HR team is working through hundreds or even thousands of resumes, and they might not have gotten to yours yet. It is not unheard of to hear back about next steps months after submitting an initial application.

    If you don't like waiting, you have a few options. Some companies have application tracking in their HR systems, so you can always check to see if the job you've applied for has that and if there's been an update to the status of your application.

    Otherwise, if you haven't heard anything, Wessel said that the only way to be sure that you aren't still in the running for the job is to determine if the position has started. Some companies will publish their calendar timelines for certain jobs and programs, so check that information to see if your resume could still be in review.

    "If that's the case and the deadline has passed," Wessel says, it's safe to say you didn't get the job.

    And finally, if you're still unclear on the status of your application, she says there's no problem with emailing a recruiter and asking outright.

    [Dec 05, 2018] How can I scroll up to see the past output in PuTTY?

    Dec 05, 2018 | superuser.com


    user1721949 ,Dec 12, 2012 at 8:32

    I have a script which, when I run it from PuTTY, it scrolls the screen. Now, I want to go back to see the errors, but when I scroll up, I can see the past commands, but not the output of the command.

    How can I see the past output?

    Rico ,Dec 13, 2012 at 8:24

    Shift+Pgup/PgDn should work for scrolling without using the scrollbar.


    If shift pageup/pagedown fails, try this command: "reset", which seems to correct the display. – user530079 Jul 12 '17 at 21:45

    RedGrittyBrick ,Dec 12, 2012 at 9:31

    If you don't pipe the output of your commands into something like less, you will be able to use Putty's scroll-bars to view earlier output.

    Putty has settings for how many lines of past output it retains in its buffer.


    before scrolling

    after scrolling back (upwards)

    If you use something like less the output doesn't get into Putty's scroll buffer


    after using less

    David Dai ,Dec 14, 2012 at 3:31

    why does putty behave differently from the native Linux console on this point? – David Dai Dec 14 '12 at 3:31

    konradstrack ,Dec 12, 2012 at 9:52

    I would recommend using screen if you want to have good control over the scroll buffer on a remote shell.

    You can change the scroll buffer size to suit your needs by setting:

    defscrollback 4000
    

    in ~/.screenrc , which will specify the number of lines you want to be buffered (4000 in this case).

    Then you should run your script in a screen session, e.g. by executing screen ./myscript.sh or first executing screen and then ./myscript.sh inside the session.

    It's also possible to enable logging of the console output to a file. You can find more info on the screen's man page .


    From your description, it sounds like the "problem" is that you are using screen, tmux, or another window manager dependent on them (byobu). Normally you should be able to scroll back in putty with no issue. Exceptions include if you are in an application like less or nano that creates its own "window" on the terminal.

    With screen and tmux you can generally scroll back with SHIFT + PGUP (same as you could from the physical terminal of the remote machine). They also both have a "copy" mode that frees the cursor from the prompt and lets you use arrow keys to move it around (for selecting text to copy with just the keyboard). It also lets you scroll up and down with the PGUP and PGDN keys. Copy mode under byobu using screen or tmux backends is accessed by pressing F7 (careful, F6 disconnects the session). To do so directly under screen you press CTRL + a then ESC or [ . You can use ESC to exit copy mode. Under tmux you press CTRL + b then [ to enter copy mode and ] to exit.

    The simplest solution, of course, is not to use either. I've found both to be quite a bit more trouble than they are worth. If you would like to use multiple different terminals on a remote machine simply connect with multiple instances of putty and manage your windows using, er... Windows. Now forgive me but I must flee before I am burned at the stake for my heresy.

    EDIT: almost forgot, some keys may not be received correctly by the remote terminal if putty has not been configured correctly. In your putty config check Terminal -> Keyboard . You probably want the function keys and keypad set to be either Linux or Xterm R6 . If you are seeing strange characters on the terminal when attempting the above this is most likely the problem.

    [Nov 22, 2018] Sorry, Linux. Kubernetes is now the OS that matters InfoWorld

    That's very primitive thinking. If RHEL is royally screwed, as is the case with RHEL7, that affects Kubernetes -- it does not exist outside the OS.
    Nov 22, 2018 | www.infoworld.com
    We now live in a Kubernetes world

    Perhaps Redmonk analyst Stephen O'Grady said it best : "If there was any question in the wake of IBM's $34 billion acquisition of Red Hat and its Kubernetes-based OpenShift offering that it's Kubernetes's world and we're all just living in it, those [questions] should be over." There has been nearly $60 billion in open source M&A in 2018, but most of it revolves around Kubernetes.

    Red Hat, for its part, has long been (rightly) labeled the enterprise Linux standard, but IBM didn't pay for Red Hat Enterprise Linux. Not really.

    [Nov 21, 2018] Linux Shutdown Command 5 Practical Examples Linux Handbook

    Nov 21, 2018 | linuxhandbook.com

    Restart the system with shutdown command

    There is a separate reboot command but you don't need to learn a new command just for rebooting the system. You can use the Linux shutdown command for rebooting as well.

    To reboot a system using the shutdown command, use the -r option.

    sudo shutdown -r
    

    The behavior is the same as the regular shutdown command. It's just that instead of a shutdown, the system will be restarted.

    So, if you used shutdown -r without any time argument, it will schedule a reboot after one minute.

    You can schedule reboots the same way you did with shutdown.

    sudo shutdown -r +30
    

    You can also reboot the system immediately with shutdown command:

    sudo shutdown -r now
    
    4. Broadcast a custom message

    If you are in a multi-user environment and there are several users logged on to the system, you can send them a custom broadcast message with the shutdown command.

    By default, all the logged-in users will receive a notification about the scheduled shutdown and its time. You can customize the broadcast message in the shutdown command itself:

    sudo shutdown 16:00 "systems will be shutdown for hardware upgrade, please save your work"
    

    Fun Stuff: You can use the shutdown command with -k option to initiate a 'fake shutdown'. It won't shutdown the system but the broadcast message will be sent to all logged on users.
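
    For instance, the following sketch only broadcasts the warning; nothing is actually shut down:

    sudo shutdown -k +10 "test only: maintenance warning, no shutdown will actually happen"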

    5. Cancel a scheduled shutdown

    If you scheduled a shutdown, you don't have to live with it. You can always cancel a shutdown with option -c.

    sudo shutdown -c
    

    And if you had broadcast a message about the scheduled shutdown, then as a good sysadmin you might also want to notify the other users that it has been cancelled.

    sudo shutdown -c "planned shutdown has been cancelled"
    

    Halt vs Power off

    Halt (option -H): terminates all processes and shuts down the CPU.
    Power off (option -P): pretty much like halt, but it also turns off the unit itself (lights and everything on the system).

    Historically, the earlier computers used to halt the system and then print a message like "it's ok to power off now" and then the computers were turned off through physical switches.

    These days, halt should automatically power off the system thanks to ACPI.
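
    Both behaviours can be requested explicitly through the shutdown command; a small sketch using the options listed above:

    sudo shutdown -H now    # halt: stop all processes, leave the machine powered on
    sudo shutdown -P now    # power off: halt and then also turn the power off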

    These were the most common and the most useful examples of the Linux shutdown command. I hope you have learned how to shut down a Linux system via command line. You might also like reading about the less command usage or browse through the list of Linux commands we have covered so far.

    If you have any questions or suggestions, feel free to let me know in the comment section.

    [Nov 19, 2018] The rise of Shadow IT - Should CIOs take umbrage

    Notable quotes:
    "... Shadow IT broadly refers to technology introduced into an organisation that has not passed through the IT department. ..."
    "... The result is first; no proactive recommendations from the IT department and second; long approval periods while IT teams evaluate solutions that the business has proposed. Add an over-defensive approach to security, and it is no wonder that some departments look outside the organisation for solutions. ..."
    Nov 19, 2018 | cxounplugged.com

    Shadow IT broadly refers to technology introduced into an organisation that has not passed through the IT department. A familiar example of this is BYOD but, significantly, Shadow IT now includes enterprise grade software and hardware, which is increasingly being sourced and managed outside of the direct control of the organisation's IT department and CIO.

    Examples include enterprise wide CRM solutions and marketing automation systems procured by the marketing department, as well as data warehousing, BI and analysis services sourced by finance officers.

    So why have so many technology solutions slipped through the hands of so many CIOs? I believe a confluence of events is behind the trend; there is the obvious consumerisation of IT, which has resulted in non-technical staff being much more aware of possible solutions to their business needs – they are more tech-savvy. There is also the fact that some CIOs and technology departments have been too slow to react to the business's technology needs.

    The reason for this slow reaction is that very often IT Departments are just too busy running day-to-day infrastructure operations such as network and storage management along with supporting users and software. The result is first; no proactive recommendations from the IT department and second; long approval periods while IT teams evaluate solutions that the business has proposed. Add an over-defensive approach to security, and it is no wonder that some departments look outside the organisation for solutions.

    [Nov 18, 2018] Systemd killing screen and tmux

    Nov 18, 2018 | theregister.co.uk

    fobobob , Thursday 10th May 2018 18:00 GMT

    Might just be a Debian thing as I haven't looked into it, but I have enough suspicion towards systemd that I find it worth mentioning. Until fairly recently (in terms of Debian releases), the default configuration was to murder a user's processes when they log out. This includes things such as screen and tmux, and I seem to recall it also murdering disowned and NOHUPed processes as well.
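
    For reference, the behaviour described above is controlled by systemd-logind. A minimal sketch of restoring the old behaviour on a system where the default was flipped (verify the file location and default against your distribution):

    # /etc/systemd/logind.conf
    [Login]
    KillUserProcesses=no

    # apply the change; it affects newly created sessions
    sudo systemctl restart systemd-logind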
    Tim99 , Thursday 10th May 2018 06:26 GMT
    How can we make money?

    A dilemma for a Really Enterprise Dependant Huge Applications Technology company - The technology they provide is open, so almost anyone could supply and support it. To continue growing, and maintain a healthy profit they could consider locking their existing customer base in; but they need to stop other suppliers moving in, who might offer a better and cheaper alternative, so they would like more control of the whole ecosystem. The scene: An imaginary high-level meeting somewhere - The agenda: Let's turn Linux into Windows - That makes a lot of money:-

    Q: Windows is a monopoly, so how are we going to monopolise something that is free and open, because we will have to supply source code for anything that will do that? A: We make it convoluted and obtuse, then we will be the only people with the resources to offer it commercially; and to make certain, we keep changing it with dependencies to "our" stuff everywhere - Like Microsoft did with the Registry.

    Q: How are we going to sell that idea? A: Well, we could create a problem and solve it - The script kiddies who like this stuff, keep fiddling with things and rebooting all of the time. They don't appear to understand the existing systems - Sell the idea they do not need to know why *NIX actually works.

    Q: *NIX is designed to be dependable, and go for long periods without rebooting, How do we get around that. A: That is not the point, the kids don't know that; we can sell them the idea that a minute or two saved every time that they reboot is worth it, because they reboot lots of times in every session - They are mostly running single user laptops, and not big multi-user systems, so they might think that that is important - If there is somebody who realises that this is trivial, we sell them the idea of creating and destroying containers or stopping and starting VMs.

    Q: OK, you have sold the concept, how are we going to make it happen? A: Well, you know that we contribute quite a lot to "open" stuff. Let's employ someone with a reputation for producing fragile, barely functioning stuff for desktop systems, and tell them that we need a "fast and agile" approach to create "more advanced" desktop style systems - They would lead a team that will spread this everywhere. I think I know someone who can do it - We can have almost all of the enterprise market.

    Q: What about the other large players, surely they can foil our plan? A: No, they won't want to, they are all big companies and can see the benefit of keeping newer, efficient competitors out of the market. Some of them sell equipment and system-wide consulting, so they might just use our stuff with a suitable discount/mark-up structure anyway.

    ds6 , 6 months
    Re: How can we make money?

    This is scarily possible and undeserving of the troll icon.

    Harkens easily to non-critical software developers intentionally putting undocumented, buggy code into production systems, forcing the company to keep the guy on payroll to keep the wreck chugging along.

    DougS , Thursday 10th May 2018 07:30 GMT
    Init did need fixing

    But replacing it with systemd is akin to "fixing" the restrictions of travel by bicycle (limited speed and range, ending up sweaty at your destination, dangerous in heavy traffic) by replacing it with an Apache helicopter gunship that has a whole new set of restrictions (need for expensive fuel, noisy and pisses off the neighbors, need a crew of trained mechanics to keep it running, local army base might see you as a threat and shoot missiles at you)

    Too bad we didn't get the equivalent of a bicycle with an electric motor, or perhaps a moped.

    -tim , Thursday 10th May 2018 07:33 GMT
    Those who do not understand Unix are condemned to reinvent it, poorly.

    "It sounds super basic, but actually it is much more complex than people think," Poettering said. "Because Systemd knows which service a process belongs to, it can shut down that process."

    Poettering and Red Hat,

    Please learn about "Process Groups"

    Init has had the groundwork for most of the missing features since the early 1980s. For example the "id" field in /etc/inittab was intended for a "makefile" like syntax to fix most of these problems but was dropped in the early days of System V because it wasn't needed.
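
    For readers unfamiliar with process groups, a rough sketch of the classic approach the comment alludes to, where a whole service can be signalled via its process group (the service name and PGID below are made up for illustration):

    ps -o pid,pgid,comm -C myserviced     # find the service's process group id (PGID)
    kill -TERM -- -4242                   # a negative PID signals every process in group 4242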

    Herby , Thursday 10th May 2018 07:42 GMT
    Process 1 IS complicated.

    That is the main problem. With different processes you get different results. For all its faults, SysV init and RC scripts was understandable to some extent. My (cursory) understanding of systemd is that it appears more complicated to UNDERSTAND than the init stuff.

    The init scripts are nice text scripts which are executed by a nice, well-documented shell (bash mostly). Systemd has all sorts of blobs that somehow do things and are totally confusing to me. It suffers from "anti-KISS".

    Perhaps a nice book could be written WITH example to show what is going on.

    Now let's see does audio come before or after networking (or at the same time)?

    Chronos , Thursday 10th May 2018 09:12 GMT
    Logging

    If they removed logging from the systemd core and went back to good ol' plaintext syslog[-ng], I'd have very little bad to say about Lennart's monolithic pet project. Indeed, I much prefer writing unit files than buggering about getting rcorder right in the old SysV init.

    Now, if someone wanted to nuke pulseaudio from orbit and do multiplexing in the kernel a la FreeBSD, I'll chip in with a contribution to the warhead fund. Needing a userland daemon just to pipe audio to a device is most certainly a solution in search of a problem.

    Tinslave_the_Barelegged , Thursday 10th May 2018 11:29 GMT
    Re: Logging

    > If they removed logging from the systemd core

    And time syncing

    And name resolution

    And disk mounting

    And logging in

    ...and...

    [Nov 18, 2018] From now on, I will call Systemd-based Linux distros "SNU Linux". Because Systemd's Not Unix-like.

    Nov 18, 2018 | theregister.co.uk

    tekHedd , Thursday 10th May 2018 15:28 GMT

    Not UNIX-like? SNU!

    From now on, I will call Systemd-based Linux distros "SNU Linux". Because Systemd's Not Unix-like.

    It's not clever, but it's the future. From now on, all major distributions will be called SNU Linux. You can still freely choose to use a non-SNU linux distro, but if you want to use any of the "normal" ones, you will have to call it "SNU" whether you like it or not. It's for your own good. You'll thank me later.

    [Nov 18, 2018] So in all reality, systemd is an answer to a problem that nobody who administers servers ever had.

    Nov 18, 2018 | theregister.co.uk

    jake , Thursday 10th May 2018 20:23 GMT

    Re: Bah!

    Nice rant. Kinda.

    However, I don't recall any major agreement that init needed fixing. Between BSD and SysV inits, probably 99.999% of all use cases were covered. In the 1 in 100,000 use case, a little bit of C (stand alone code, or patching init itself) covered the special case. In the case of Slackware's SysV/BSD amalgam, I suspect it was more like one in ten million.

    So in all reality, systemd is an answer to a problem that nobody had. There was no reason for it in the first place. There still isn't a reason for it ... especially not in the 999,999 places out of 1,000,000 where it is being used. Throw in the fact that it's sticking its tentacles[0] into places where nobody in their right mind would expect an init as a dependency (disk partitioning software? WTF??), can you understand why us "old guard" might question the sanity of people singing its praises?

    [0] My spall chucker insists that the word should be "testicles". Tempting ...

    [Nov 18, 2018] Thursday 10th May 2018 19:36 GMT

    Nov 18, 2018 | theregister.co.uk



    sisk , Thursday 10th May 2018 21:17 GMT

    It's a pretty polarizing debate: either you see Systemd as a modern, clean, and coherent management toolkit

    Very, very few Linux users see it that way.

    or an unnecessary burden running roughshod over the engineering maxim: if it ain't broke, don't fix it.

    Seen as such by 90% of Linux users because it demonstrably is.

    Truthfully Systemd is flawed at a deeply fundamental level. While there are a very few things it can do that init couldn't - the killing off processes owned by a service mentioned as an example in this article is handled just fine by a well written init script - the tradeoffs just aren't worth it. For example: fscking BINARY LOGS. Even if all of Systemd's numerous other problems were fixed that one would keep it forever on my list of things to avoid if at all possible, and the fact that the Systemd team thought it a good idea to make the logs binary shows some very troubling flaws in their thinking at a very fundamental level.

    Dazed and Confused , Thursday 10th May 2018 21:43 GMT
    Re: fscking BINARY LOGS.

    And config too

    When it comes to logs and config file if you can't grep it then it doesn't belong on Linux/Unix

    Nate Amsden , Thursday 10th May 2018 23:51 GMT
    Re: fscking BINARY LOGS.

    WRT grep and logs I'm the same way, which is why I hate json so much. My saying has been along the lines of "if it's not friends with grep/sed then it's not friends with me". I have whipped up some wacky sed stuff to generate a tiny bit of json to read into chef for provisioning systems though.

    XML is similar, though I like XML a lot more; at least the closing tags are a lot easier to follow than trying to count the nested braces in json.

    I haven't had the displeasure much of dealing with the systemd binary logs yet myself.

    Tomato42 , Saturday 12th May 2018 08:26 GMT
    Re: fscking BINARY LOGS.

    > I haven't had the displeasure much of dealing with the systemd binary logs yet myself.

    "I have no clue what I'm talking about or what's a robust solution but dear god, that won't stop me!" – why is it that all the people complaining about journald sound like that?

    systemd works just fine with regular syslog-ng, without journald (that's the thing that has binary logs) in sight
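
    A rough sketch of what that kind of setup usually involves, using options from journald.conf (check your distribution's defaults before copying):

    # /etc/systemd/journald.conf
    [Journal]
    Storage=none          # journald keeps no journal files of its own
    ForwardToSyslog=yes   # messages are handed to syslog-ng/rsyslog instead

    sudo systemctl restart systemd-journald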

    HieronymusBloggs , Saturday 12th May 2018 18:17 GMT
    Re: fscking BINARY LOGS.

    "systemd works just fine with regular syslog-ng, without journald (that's the thing that has binary logs) in sight"

    Journald can't be switched off, only redirected to /dev/null. It still generates binary log data (which has caused me at least one system hang due to the absurd amount of data it was generating on a system that was otherwise functioning correctly) and consumes system resources. That isn't my idea of "works just fine".

    ""I have no clue what I'm talking about or what's a robust solution but dear god, that won't stop me!" – why is it that all the people complaining about journald sound like that?"

    Nice straw man. Most of the complaints I've seen have been from experienced people who do know what they're talking about.

    sisk , Tuesday 15th May 2018 20:22 GMT
    Re: fscking BINARY LOGS.

    "I have no clue what I'm talking about or what's a robust solution but dear god, that won't stop me!" – why is it that all the people complaining about journald sound like that?

    I have had the displeasure of dealing with journald and it is every bit as bad as everyone says and worse.

    systemd works just fine with regular syslog-ng, without journald (that's the thing that has binary logs) in sight

    Yeah, I've tried that. It caused problems. It wasn't a viable option.

    Anonymous Coward , Thursday 10th May 2018 22:30 GMT
    Parking U$5bn in redhat for a few months will fix this...

    So it's now been 4 years since they first tried to force that shoddy desktop init system into our servers? And yet they still feel compelled to tell everyone, look, it really isn't that terrible. That should tell you something. Unless you are tone deaf like Red Hat. Surprised people didn't start walking out when Poettering outlined his plans for the next round of systemD power grabs...

    Anyway, the only way this farce will end is with shareholder activism. Some hedge fund needs to buy 10-15 percent of Red Hat (about the amount you need to make life difficult for management) and force them to sack that "stable genius" Poettering. The market cap is 30bn today. Anyone with 5bn spare to park for a few months wanna step forward and do some good?

    cjcox , Thursday 10th May 2018 22:33 GMT
    He's a pain

    Early on I warned that he was trying to solve a very large problem space. He insisted he could do it with his 10 or so "correct" ways of doing things, which quickly became 20, then 30, then 50, then 90, etc. etc. I asked for some of the features we had in init; he said "no valid use case". Then, much later (years?), he implemented them (no use case provided, btw).

    Interesting fellow. Very bitter. And not a good listener. But you don't need to listen when you're always right.

    Daggerchild , Friday 11th May 2018 08:27 GMT
    Spherical wheel is superior.

    @T42

    Now, you see, you just summed up the whole problem. Like systemd's author, you think you know better than the admin how to run his machine, without knowing, or caring to ask, what he's trying to achieve. Nobody ever runs a computer just to achieve running systemd, do they?

    Tomato42 , Saturday 12th May 2018 09:05 GMT
    Re: Spherical wheel is superior.

    I don't claim I know better, but I do know that I never saw a non-distribution provided init script that correctly handled the basic corner cases – service already running, run file left over but process dead, service restart – let alone the more obscure ones, like an application double forking when it shouldn't (even when that was the failure mode of the application the script shipped with). So maybe, just maybe, you haven't experienced everything there is to experience, so your opinion is subjective?

    Yes, the sides of the discussion should talk more, but this applies to both sides. "La, la, la, sysv is working fine on my machine, thankyouverymuch" is not what you can call "participating in the discussion". Neither is quoting well known and long discussed (and disproven) points (and then downvoting people into oblivion for daring to point these things out).

    Now, in the real world, the people that have to deal with init systems on a daily basis, the distribution maintainers, have by and large chosen to switch their distributions to systemd, so I can sum up the whole situation one way:

    "the dogs may bark, but the caravan moves on"

    Kabukiwookie , Monday 14th May 2018 00:14 GMT
    Re: Spherical wheel is superior.

    I do know that I never saw a non-distribution provided init script that handled correctly the basic of corner cases – service already running

    This only shows that you don't have much real life experience managing lots of hosts.

    like application double forking when it shouldn't

    If this is a problem in the init script, this should be fixed in the init script. If this is a problem in the application itself, it should be fixed in the application, not worked around by the init mechanism. If you're suggesting the latter, you should not be touching any production box.

    "La, la, la, sysv is working fine on my machine, thankyouverymuch" is not what you can call "participating in discussion".

    Shoving systemd down people's throats as a solution to a non-existent problem is not a discussion either; it is the very definition of 'my way or the highway' thinking.

    now in the real world, people that have to deal with init systems on daily basis

    Indeed, and having a bunch of sub-par developers focused on the 'year of the Linux desktop' decide what the best way is for admins to manage their enterprise environment is not helping.

    "the dogs may bark, but the caravan moves on"

    Indeed. It's your way or the highway. I thought you were just complaining that the people complaining about systemd don't want to have a discussion, while all the while it's the systemd proponents ignoring and dismissing very valid complaints.

    Daggerchild , Monday 14th May 2018 14:10 GMT
    Re: Spherical wheel is superior.

    "I never saw ... run file left-over but process dead, service restart ..."

    Seriously? I wrote one last week! You use an OS atomic lock on the pidfile and exec the service if the lock succeeded. The lock dies with the process. It's a very small shell script.
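    A minimal sketch of that approach, using flock(1) from util-linux (the service path and lock file name here are hypothetical):

    #!/bin/sh
    # Take an exclusive, non-blocking lock on the lock file and exec the service
    # while holding it. The kernel releases the lock when the process exits, so a
    # stale lock file left by a crash never blocks a restart, and a second start
    # attempt fails immediately while the service is still running.
    exec flock -n /var/run/mydaemon.lock /usr/local/sbin/mydaemon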

    I shot a systemd controlled service. Systemd put it into error state and wouldn't restart it unless I used the right runes. That is functionally identical to the thing you just complained about.

    "application double forking when it shouldn't"

    I'm going to have to guess what that means, and then point you at DJB's daemontools. You leave an FD open in the child. They can fork all they like. You'll still track when the last one dies, as the FD will cause an event on final close.
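    For the curious, that trick can be illustrated in plain shell with a FIFO standing in for the inherited pipe descriptor (the daemon path is hypothetical, and the daemon must simply leave FD 3 alone):

    #!/bin/sh
    # The watcher holds the read end of a pipe; the service and everything it forks
    # inherit the write end (FD 3). The read only returns EOF when the last copy of
    # the write end is closed, i.e. when the last descendant has exited.
    FIFO=$(mktemp -u /tmp/track.XXXXXX)
    mkfifo "$FIFO"

    # Start the service with the FIFO held open for writing on FD 3.
    ( exec 3>"$FIFO"; exec /usr/local/sbin/mydaemon ) &

    # Blocks until every inherited copy of FD 3 has been closed.
    cat "$FIFO" >/dev/null
    echo "mydaemon and all of its forked children have exited"
    rm -f "$FIFO"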

    "So maybe, just maybe, you haven't experienced everything there is to experience"

    You realise that's the conspiracy theorist argument "You don't know everything, therefore I am right". Doubt is never proof of anything.

    "La, la, la, sysv is working fine" is not what you can call "participating in discussion".

    Well, no.. it's called evidence. Evidence that things are already working fine, thanks. Evidence that the need for discussion has not been displayed. Would you like a discussion about the Earth being flat? Why not? Are you refusing to engage in a constructive discussion? How obstructive!

    "now in the real world..."

    In the *real* world people run Windows and Android, so you may want to rethink the "we outnumber you, so we must be right" angle.

    You're claiming an awful lot of highground you don't seem to actually know your way around, while trying to wield arguments you don't want to face yourself...

    "(and then downvoting people into oblivion for daring to point this things out)"

    It's not some denialist conspiracy to suppress your "daring" Truth - you genuinely deserve those downvotes.

    Anonymous Coward , Friday 11th May 2018 17:27 GMT
    I have no idea how or why systemd ended up on servers. Laptops I can see the appeal for "this is the year of the linux desktop" - for when you want your rebooted machine to just be there as fast as possible (or fail mysteriously as fast as possible). Servers, on the other hand, which take on the order of 10+ minutes to get through POST, initialising whatever LOM, disk controllers, and whatever exotic hardware you may also have connected - I don't see a benefit in Linux starting (or failing to start) a wee bit more quickly. You're only going to reboot those beasts when absolutely necessary. And it should boot the same as it booted last time. PID1 should be as simple as possible.

    I only use CentOS these days for FreeIPA but now I'm questioning my life decisions even here. That Debian adopted systemd too is a real shame. It's actually put me off the whole game. Time spent learning systemd is time that could have been spent doing something useful that won't end up randomly breaking with a "will not fix" response.

    Systemd should be taken out back and put out of our misery.

    Miss Config , Saturday 12th May 2018 11:48 GMT
    SystemD ? Was THAT What Buggered My Mint AND Laptop ?

    The technical details of SystemD are over my head but I do use Mint as the main OS on this laptop which makes me Mr. Innocent Bystander in this argument. I had heard of SystemD and even a rumour that Mint was going to use it. That Mint ALREADY is using SystemD is news to me

    ( provided by this article ).

    My problem is that a month ago a boot of Mint failed, and after reading this thread I must wonder whether SystemD is at least one of the usual suspects as the cause of the problem.

    Here's what happened :

    As I do every couple of weeks, I installed the latest available updates from Mint, but the next time I booted up it did not get beyond the Mint logo. All I got were terminal-level messages about sudo commands and the ability to enter them. Or rather NOT enter them. Further use of the terminal showed that one system file no longer existed. This was in /etc/ and related to the granting of sudo permissions. The fact that it did not exist created a vicious circle and sudo was completely out of action. I took the laptop to a shop where they managed to save my Backups folder that had been on the desktop and install a fresh version of Mint.

    So what are the chances that this was a SystemD problem ?

    GrumpenKraut , Sunday 13th May 2018 10:51 GMT
    Re: SystemD ? Was THAT What Buggered My Mint AND Laptop ?

    From what you say the file /etc/sudoers got deleted (or corrupted). It may have been some (badly effed up) update.

    Btw. you could have booted from a rescue image (CD or USB stick) and fixed it yourself. Easy when you have a proper backup, not-quite-so-easy when you have to 'manually' recreate that file.

    jake , Monday 14th May 2018 18:28 GMT
    Re: SystemD ? Was THAT What Buggered My Mint AND Laptop ?

    Probably not systemd. If you were the only one it happened to, and it only happened once, write it off as the proverbial "stray cosmic ray" flipping a bit at an inopportune time during the install. If you can repeat it, this is the wrong forum to address the issue. Try instead https://forums.linuxmint.com/

    That said, if anybody reading this in the future has a similar problem, you can get a working system back by logging in as root[0], using your favorite text editor[1] to create the file /etc/sudoers with the single line root ALL=(ALL) ALL , saving the file and then running chmod 0440 /etc/sudoers ... log out of root and back into your user account and get on with it. May I suggest starting with backing up all your personal work (pictures, tunes, correspondence, whathaveyou)?

    [0] Yeah, yeah, yeah, I know, don't suggest newbies use root. But if su doesn't work, what would you suggest as an alternative?

    [1] visudo won't work for obvious reasons ... even if it did, would you suggest vi to a newbie? Besides, on a single-user system it's hardly necessary for this kind of brute-force bodge.
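    For reference, the recovery jake describes boils down to something like the following, run as root from a console or rescue shell (a minimal sketch; the strict 0440 mode is what lets sudo accept the file again):

    cat > /etc/sudoers <<'EOF'
    root    ALL=(ALL) ALL
    EOF
    chown root:root /etc/sudoers
    chmod 0440 /etc/sudoers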

    Miss Config , Monday 14th May 2018 18:38 GMT
    Re: SystemD ? Was THAT What Buggered My Mint AND Laptop ?

    So even those who are paranoid ( rightly or wrongly ) about SystemD did not pile in to blame it here. I'll take that as a 'no'.

    Backup you say? Tell me about it. I must admit that when it comes to backups I very much talk the talk, full stop. I have since bought a 1TB detachable hard drive which at least makes full backups fast via USB3.

    ( All I need now is software for DIFFERENTIAL backups ).
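    For what it's worth, rsync alone can provide the differential part via hard links; a minimal sketch, assuming the drive is mounted under /media/backup (all paths hypothetical):

    #!/bin/sh
    # Each run creates a dated snapshot; files unchanged since the previous snapshot
    # are hard-linked rather than copied, so only changed files use extra space.
    TODAY=$(date +%F)
    rsync -a --delete \
          --link-dest=/media/backup/latest \
          "$HOME/" "/media/backup/$TODAY/"
    ln -sfn "/media/backup/$TODAY" /media/backup/latest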

    jake , Monday 14th May 2018 19:24 GMT
    Re: SystemD ? Was THAT What Buggered My Mint AND Laptop ?

    Living long enough to have a ton of experience is not paranoia (although it can help!). Instead, try the other "P" word ... pragmatism.

    Backups are a vital part of properly running any computerized system. However, I can make a case for simply having multiple copies (off site is good!) of all your important personal files being all that's needed for the average single-user, at home system. The OS can be reinstalled, your pictures and personal correspondence (etc.) cannot.

    [Nov 18, 2018] Just let chef start the services when it runs after the system boots(which means they start maybe 1 or 2 mins after bootup).

    Notable quotes:
    "... Another thing bit us with systemd recently as well again going back to bind. Someone on the team upgraded our DNS systems to systemd and the startup parameters for bind were not preserved because systemd ignores the /etc/default/bind file. As a result we had tons of DNS failures when bind was trying to reach out to IPv6 name servers(ugh), when there is no IPv6 connectivity in the network (the solution is to start bind with a -4 option). ..."
    "... I'm sure I've only scratched the surface of systemd pain. I'm sure it provides good value to some people, I hear it's good with containers (I have been running LXC containers for years now, I see nothing with systemd that changes that experience so far). ..."
    "... If systemd is a solution to any set of problems, I'd love to have those problems back! ..."
    Nov 18, 2018 | theregister.co.uk

    Nate Amsden , Thursday 10th May 2018 16:34 GMT

    as a linux user for 22 years

    (20 of which on Debian, before that was Slackware)

    I am new to systemd, maybe 3 or 4 months now tops on Ubuntu, and a tiny bit on Debian before that.

    I was confident I was going to hate systemd before I used it just based on the comments I had read over the years, I postponed using it as long as I could. Took just a few minutes of using it to confirm my thoughts. Now to be clear, if I didn't have to mess with the systemd to do stuff then I really wouldn't care since I don't interact with it (which is the case on my laptop at least though laptop doesn't have systemd anyway). I manage about 1,000 systems running Ubuntu for work, so I have to mess with systemd, and init etc there. If systemd would just do ONE thing I think it would remove all of the pain that it has inflicted on me over the past several months and I could learn to accept it.

    That one thing is, if there is an init script, RUN IT. Not run it like systemd does now. But turn off ALL intelligence systemd has when it finds that script and run it. Don't put it on any special timers, don't try to detect if it is running already, or stopped already or whatever, fire the script up in blocking mode and wait till it exits.

    My first experience with systemd was on one of my home servers, I re-installed Debian on it last year, rebuilt the hardware etc and with it came systemd. I believe there is a way to turn systemd off but I haven't tried that yet. The first experience was with bind. I have a slightly custom init script (from previous debian) that I have been using for many years. I copied it to the new system and tried to start bind. Nothing. I looked in the logs and it seems that it was trying to interface with rndc(internal bind thing) for some reason, and because rndc was not working(I never used it so I never bothered to configure it) systemd wouldn't launch bind. So I fixed rndc and systemd would now launch bind, only to stop it within 1 second of launching. My first workaround was just to launch bind by hand at the CLI (no init script), left it running for a few months. Had a discussion with a co-worker who likes systemd and he explained that making a custom unit file and using the type=forking option may fix it.. That did fix the issue.
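    For anyone hitting the same wall, the kind of unit file the co-worker suggested looks roughly like this - a sketch only, with the unit name, script path and PID file as assumptions to adjust for your own setup:

    # /etc/systemd/system/bind-custom.service
    [Unit]
    Description=BIND started via the legacy init script
    After=network.target

    [Service]
    Type=forking
    ExecStart=/etc/init.d/bind9 start
    ExecStop=/etc/init.d/bind9 stop
    # PIDFile=/run/named/named.pid   # optional, only if the script writes one

    [Install]
    WantedBy=multi-user.target

    After a systemctl daemon-reload, Type=forking tells systemd that the process that matters is the daemon the script forks off, not the script itself, so the script's exit is no longer treated as the service stopping.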

    Next issue came up when dealing with MySQL clusters. I had to initialize the cluster with the "service mysql bootstrap-pxc" command (using the start command on the first cluster member is a bad thing). Run that with systemd, and systemd runs it fine. But go to STOP the service, and systemd thinks the service is not running so doesn't even TRY to stop the service(the service is running). My workaround for my automation for mysql clusters at this point is to just use mysqladmin to shut the mysql instances down. Maybe newer mysql versions have better systemd support though a co-worker who is our DBA and has used mysql for many years says even the new Maria DB builds don't work well with systemd. I am working with Mysql 5.6 which is of course much much older.

    Next issue came up with running init scripts that have the same words in them, in the case of most recently I upgraded systems to systemd that run OSSEC. OSSEC has two init scripts for us on the server side (ossec and ossec-auth). Systemd refuses to run ossec-auth because it thinks there is a conflict with the ossec service. I had the same problem with multiple varnish instances running on the same system (varnish instances were named varnish-XXX and varnish-YYY). In the varnish case using custom unit files I got systemd to the point where it would start the service but it still refuses to "enable" the service because of the name conflict (I even changed the name but then systemd was looking at the name of the binary being called in the unit file and said there is a conflict there).

    fucking a. Systemd shut up, just run the damn script. It's not hard.

    Later a co-worker explained the "systemd way" for handling something like multiple varnish instances on the system but I'm not doing that, in the meantime I just let chef start the services when it runs after the system boots(which means they start maybe 1 or 2 mins after bootup).
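    For completeness, the "systemd way" the co-worker probably meant is a template unit instantiated once per varnish instance - a hedged sketch, with every path and option here being an assumption:

    # /etc/systemd/system/varnish@.service
    [Unit]
    Description=Varnish instance %i
    After=network.target

    [Service]
    Type=simple
    ExecStart=/usr/sbin/varnishd -F -n %i -f /etc/varnish/%i.vcl -a :6081

    [Install]
    WantedBy=multi-user.target

    Each instance then gets its own name, e.g. systemctl enable --now varnish@XXX varnish@YYY, which sidesteps the name-conflict logic entirely.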

    Another thing bit us with systemd recently as well again going back to bind. Someone on the team upgraded our DNS systems to systemd and the startup parameters for bind were not preserved because systemd ignores the /etc/default/bind file. As a result we had tons of DNS failures when bind was trying to reach out to IPv6 name servers(ugh), when there is no IPv6 connectivity in the network (the solution is to start bind with a -4 option).
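    One way to put the /etc/default settings back in play is a drop-in that points systemd at the file - shown only as a sketch, since the unit name, variable name and daemon path all vary with the distro and bind packaging:

    # /etc/systemd/system/bind9.service.d/defaults.conf
    [Service]
    EnvironmentFile=-/etc/default/bind9
    ExecStart=
    ExecStart=/usr/sbin/named -f -u bind $OPTIONS

    With OPTIONS="-4" set in /etc/default/bind9 and a systemctl daemon-reload, the old startup parameters are honoured again.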

    I believe I have also caught systemd trying to mess with file systems(iscsi mount points). I have lots of automation around moving data volumes on the SAN between servers and attaching them via software iSCSI directly to the VMs themselves(before vsphere 4.0 I attached them via fibre channel to the hypervisor but a feature in 4.0 broke that for me). I noticed on at least one occasion when I removed the file systems from a system that SOMETHING (I assume systemd) mounted them again, and it was very confusing to see file systems mounted again for block devices that DID NOT EXIST on the server at the time. I worked around THAT one I believe with the "noauto" option in fstab again. I had to put a lot of extra logic in my automation scripts to work around systemd stuff.
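    The fstab workaround mentioned above amounts to an entry along these lines (device path hypothetical): noauto keeps systemd from mounting the volume at boot, and nofail keeps a missing device from blocking the boot.

    # SAN volume handled by our own automation; systemd is told to leave it alone
    /dev/mapper/san-vol01  /data/vol01  ext4  noauto,nofail,_netdev  0 0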

    I'm sure I've only scratched the surface of systemd pain. I'm sure it provides good value to some people, I hear it's good with containers (I have been running LXC containers for years now, I see nothing with systemd that changes that experience so far).

    But if systemd would just do this one thing and go into dumb mode with init scripts I would be quite happy.

    GrumpenKraut , Thursday 10th May 2018 17:52 GMT
    Re: as a linux user for 22 years

    Now more seriously: it really strikes me that complaints about systemd come from people managing non-trivial setups like the one you describe. While it might have been a PITA to get this done with the old init mechanism, you could make it work reliably.

    If systemd is a solution to any set of problems, I'd love to have those problems back!

    [Nov 18, 2018] SystemD is just a symptom of this regression of Red Hat into a money-making machine

    Nov 18, 2018 | theregister.co.uk

    Will Godfrey , Thursday 10th May 2018 16:30 GMT

    Business Model

    Red Hat have definitely taken a lurch to the dark side in recent years. It seems to be the way businesses go.

    They start off providing a service to customers.

    As they grow the customers become users.

    Once they reach a certain point the users become consumers, and at this point it is the 'consumers' that provide a service for the business.

    SystemD is just a symptom of this regression.

    [Nov 18, 2018] Fudging the start-up and restoring eth0

    Truth be told, the biosdevname abomination is from Dell
    Nov 18, 2018 | theregister.co.uk

    The Electron , Thursday 10th May 2018 12:05 GMT

    Fudging the start-up and restoring eth0

    I knew systemd was coming thanks to playing with Fedora. The quicker start-up times were welcomed. That was about it! I have had to kickstart many of my CentOS 7 builds to disable IPv6 (NFS complains bitterly), kill the incredibly annoying 'biosdevname' that turns sensible eth0/eth1 into some daftly named nonsense, replace Gnome 3 (shudder) with MATE, and fudge start-up processes. In a previous job, I maintained 2 sets of CentOS 7 'infrastructure' servers that provided DNS, DHCP, NTP, and LDAP to a large number of historical vlans. Despite enabling the systemd-network wait online option, which is supposed to start all networks *before* listening services, systemd would run off flicking all the "on" switches having only set-up a couple of vlans. Result: NTP would only be listening on one or two vlan interfaces. The only way I found to get around that was to enable rc.local and call systemd to restart the NTP daemon after 20 seconds. I never had the time to raise a bug with Red Hat, and I assume the issue still persists as no-one designed systemd to handle 15-odd vlans!?
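    The rc.local workaround described above is roughly the following (ntpd is an assumption for the NTP daemon in use; note that on CentOS 7 rc.local only runs if it is executable, i.e. chmod +x /etc/rc.d/rc.local):

    #!/bin/sh
    # /etc/rc.d/rc.local - give systemd time to finish bringing up the remaining
    # vlan interfaces, then bounce NTP so it listens on all of them
    ( sleep 20 && systemctl restart ntpd ) &
    exit 0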

    Jay 2 , Thursday 10th May 2018 15:02 GMT
    Re: Predictable names

    I can't remember if it's HPE or Dell (or both) where you can set the kernel option biosdevname=0 during build/boot to turn all that renaming stuff off and revert to ethX.

    However on (RHEL?)/CentOS 7 I've found that if you build a server like that, and then try to rename/swap the interfaces, it will refuse point blank to allow you to swap the interfaces round so that something else can be eth0. In the end we just gave up and renamed everything lanX instead, which it was quite happy with.
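    On RHEL/CentOS 7 the usual recipe is to add both options to the kernel command line and regenerate the grub config - a sketch assuming the stock EL7 paths (net.ifnames=0 disables the systemd/udev "predictable" names, biosdevname=0 disables the Dell naming helper):

    # append the options to GRUB_CMDLINE_LINUX in /etc/default/grub, then rebuild:
    sed -i 's/^\(GRUB_CMDLINE_LINUX=.*\)"/\1 net.ifnames=0 biosdevname=0"/' /etc/default/grub
    grub2-mkconfig -o /boot/grub2/grub.cfg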

    HieronymusBloggs , Thursday 10th May 2018 16:23 GMT
    Re: Predictable names

    "I can't remember if it's HPE or Dell (or both) where you can use set the kernel option biosdevname=0 during build/boot to turn all that renaming stuff off and revert to ethX."

    I'm using this on my Debian 9 systems. IIRC the option to do so will be removed in Debian 10.

    Dazed and Confused , Thursday 10th May 2018 19:21 GMT
    Re: Predictable names

    I can't remember if it's HPE or Dell (or both)

    It's Dell. I got the impression that much of this work had been done, at least, in conjunction with Dell.

    [Nov 18, 2018] The beatings will continue until morale improves.

    Nov 18, 2018 | theregister.co.uk

    Doctor Syntax , Thursday 10th May 2018 10:26 GMT

    "The more people learn about it, the more they like it."

    Translation: We define those who don't like it as not having learned enough about it.

    ROC , Friday 11th May 2018 17:32 GMT
    Alternate translation:

    The beatings will continue until morale improves.

    [Nov 18, 2018] I am barely tolerating SystemD on some servers because RHEL/CentOS 7 is the dominant business distro with a decent support life

    Nov 18, 2018 | theregister.co.uk

    AJ MacLeod , Thursday 10th May 2018 13:51 GMT

    @Sheepykins

    I'm not really bothered about whether init was perfect from the beginning - for as long as I've been using Linux (20 years) until now, I have never known the init system to be the cause of major issues. Since in my experience it's not been seriously broken for two decades, why throw it out now for something that is orders of magnitude more complex and ridiculously overreaching?

    Like many here I bet, I am barely tolerating SystemD on some servers because RHEL/CentOS 7 is the dominant business distro with a decent support life - but this is also the first time I can recall ever having serious unpredictable issues with startup and shutdown on Linux servers.


    stiine, Thursday 10th May 2018 15:38 GMT

    sysV init

    I've been using Linux (RedHat, CentOS, Ubuntu), BSD (Solaris, SunOS, FreeBSD) and Unix (AIX, SysV all the way back to AT&T 3B2 servers) in farms of up to 400 servers since 1988, and I never, ever had issues with eth1 becoming eth0 after a reboot. I also never needed to run ifconfig before configuring an interface just to determine what the interface was going to be named on a server at this time. Then they hired Poettering... now, if you replace a failed NIC, 9 times out of 10 the interface is going to have a randomly different name.

    /rant

    [Nov 18, 2018] systemd helps with mounting NFS4 filesystems

    Nov 18, 2018 | theregister.co.uk

    Chronos , Thursday 10th May 2018 13:32 GMT

    Re: Logging

    And disk mounting

    Well, I am compelled to agree with most everything you wrote, except one niche area where systemd does better. Remember putzing about with amd, the automount daemon? One line in fstab:

    nasbox:/srv/set0 /nas nfs4 _netdev,noauto,nolock,x-systemd.automount,x-systemd.idle-timeout=1min 0 0
    

    The bloody thing actually works, and nobody's system comes grinding to a halt every time some essential maintenance is done on the NAS.

    Candour compels me to admit surprise that it worked as advertised, though.

    DCFusor , Thursday 10th May 2018 13:58 GMT

    Re: Logging

    No worries; as has happened with every workaround to make systemd simply mount CIFS or NFS at boot, yours will fail as soon as the next change happens, yet it will remain on the 'net to be tried over and over, as have all the other "fixes" for Poettering's arrogant breakages.

    The last one I heard from him on this was "don't mount shares at boot, it's not reliable WONTFIX".

    Which is why we're all bitching.

    Break my stuff.

    Web shows workaround.

    Break workaround without fixing the original issue, really.

    Never ensure one place for current dox on what works now.

    Repeat above endlessly.

    Fine if all you do is spin up endless identical instances in some cloud (EG a big chunk of RH customers - but not Debian for example). If like me you have 20+ machines customized to purpose...for which one workaround works on some but not others, and every new release of systemD seems to break something new that has to be tracked down and fixed, it's not acceptable - it's actually making proprietary solutions look more cost effective and less blood pressure raising.

    The old init scripts worked once you got them right, and stayed working. A new distro release didn't break them, nor did a systemD update (because there wasn't one). This feels more like sabotage.

    [Nov 18, 2018] Today I've kickstarted RHEL7 on a rack of 40 identical servers using the same script. On about 25 out of 40, the postinstall script added to rc.local failed to run with some obscure error

    Nov 18, 2018 | theregister.co.uk

    Dabbb , Thursday 10th May 2018 10:16 GMT

    Quite understandable that people who don't know anything else would accept systemd. For everyone else it has nothing to do with old school but everything to do with unpredictability of systemd.

    Today I've kickstarted RHEL7 on a rack of 40 identical servers using the same script. On about 25 out of 40, the postinstall script added to rc.local failed to run with some obscure error about the script being terminated because something unintelligible did not like it. It never ever happened on RHEL6; it happens all the time on RHEL7. And that's exactly the reason I absolutely hate both RHEL7 and systemd.

    [Nov 18, 2018] You love Systemd you just don't know it yet, wink Red Hat bods

    Nov 18, 2018 | theregister.co.uk

    Anonymous Coward , Thursday 10th May 2018 02:58 GMT

    Poettering still doesn't get it... Pid 1 is for people wearing big boy pants.

    "And perhaps, in the process, you may warm up a bit more to the tool"

    Like from LNG to Dry Ice? and by tool does he mean Poettering or systemd?

    I love the fact that they aren't trying to address the huge and legitimate issues with Systemd, while still plowing ahead adding more things we don't want Systemd to touch into its ever expanding sprawl.

    The root of the issue with Systemd is the problems it causes, not the lack of "enhancements" init offered. Replacing init didn't require the breaking changes and incompatibility induced by Poettering's misguided handiwork. A clean init replacement would have made Big Linux more compatible with both its roots and the other parts of the broader Linux/BSD/Unix world. As a result of his belligerent incompetence, other people's projects have had to be re-engineered, resulting in incompatibility, extra porting work, and security problems. In short, we're stuck cleaning up his mess, and the consequences of his security blunders.

    A worthy Init replacement should have moved to compiled code and given us asynchronous startup, threading, etc, without senselessly re-writing basic command syntax or compatibility. Considering the importance of PID 1, it should have used a formal development process like the BSD world.

    Fedora needs to stop enabling his prima donna antics and stop letting him touch things until he admits his mistakes and attempts to fix them. The flame wars aren't going away till he does.

    asdf , Thursday 10th May 2018 23:38 GMT
    Re: Poettering still doesn't get it... Pid 1 is for people wearing big boy pants.

    SystemD is corporate money (Red Hat support dollars) triumphing over the long hairs, sadly. Enough money can buy a shitload of code, and you can overwhelm the hippies with hairball dependencies (the key moment was udev being dependent on systemd) and soon get as much FOSS as possible dependent on the Linux kernel. This has always been the end game, as Red Hat makes its bones on Linux specifically, not on FOSS in general (that, say, runs on Solaris or HP-UX). The tighter they can glue the FOSS ecosystem and the Linux kernel together, Windows-lite style, the better for their bottom line. Poettering is just being a good employee, asshat extraordinaire that he is.

    whitepines , Thursday 10th May 2018 03:47 GMT
    Raise your hand if you've been completely locked out of a server or laptop (as in, break out the recovery media and settle down, it'll be a while) because systemd:

    1.) Couldn't raise a network interface

    2.) Farted and forgot the UUID for a disk, then refused to give a recovery shell

    3.) Decided an unimportant service (e.g. CUPS or avahi) was too critical to start before giving a login over SSH or locally, then that service stalls forever

    4.) Decided that no, you will not be network booting your server today. No way to recover and no debug information, just an interminable hang as it raises wrong network interfaces and waits for DHCP addresses that will never come.

    And lest the fun be restricted to startup, on shutdown systemd can quite happily hang forever doing things like stopping nonessential services, *with no timeout and no way to interrupt*. Then you have to Magic Sysreq the machine, except that sometimes secure servers don't have that ability, at least not remotely. Cue data loss and general excitement.

    And that's not even going into the fact that you need to *reboot the machine* to patch the *network enabled* and highly privileged systemd, or that it seems to have the attack surface of Jupiter.

    Upstart was better than this. SysV was better than this. Mac is better than this. Windows is better than this.

    Uggh.

    Daggerchild , Thursday 10th May 2018 11:39 GMT
    Re: Ahhh SystemD

    I honestly would love someone to lay out the problems it solves. Solaris has a similar parallelised startup system, with some similar problems, but it didn't need pid 1.

    Tridac , Thursday 10th May 2018 11:53 GMT
    Re: Ahhh SystemD

    Agreed, Solaris svcadm and svcs etc. are an example of how it should be done: a layered approach maintaining what was already there, while adding functionality for management purposes. It keeps all the old text based log files and uses XML manifests (human readable and editable) for higher level functions. Afaics, systemd is a power grab by Red Hat and an ego trip for its primary developer. Dumped bloatware Linux in favour of FreeBSD and others after Suse 11.4, though that was bad enough with Gnome 3...

    [Nov 17, 2018] hh command man page

    It was later renamed to hstr
    Notable quotes:
    "... Favorite and frequently used commands can be bookmarked ..."
    Nov 17, 2018 | www.mankier.com

    hh -- easily view, navigate, sort and use your command history with shell history suggest box.

    Synopsis

    hh [option] [arg1] [arg2]...
    hstr [option] [arg1] [arg2]...

    Description

    hh uses shell history to provide suggest-box-like functionality for commands used in the past. By default it parses the .bash_history file, which is filtered as you type a command substring. Commands are not just filtered, but also ordered by a ranking algorithm that considers the number of occurrences, length and timestamp. Favorite and frequently used commands can be bookmarked . In addition, hh allows removal of commands from history - for instance commands with a typo or with sensitive content.

    Options
    -h --help
    Show help
    -n --non-interactive
    Print filtered history on standard output and exit
    -f --favorites
    Show favorites view immediately
    -s --show-configuration
    Show configuration that can be added to ~/.bashrc
    -b --show-blacklist
    Show blacklist of commands to be filtered out before history processing
    -V --version
    Show version information
    Keys
    pattern
    Type to filter shell history.
    Ctrl-e
    Toggle regular expression and substring search.
    Ctrl-t
    Toggle case sensitive search.
    Ctrl-/ , Ctrl-7
    Rotate view of history as provided by Bash, ranked history ordered by the number of occurrences/length/timestamp and favorites.
    Ctrl-f
    Add currently selected command to favorites.
    Ctrl-l
    Make search pattern lowercase or uppercase.
    Ctrl-r , UP arrow, DOWN arrow, Ctrl-n , Ctrl-p
    Navigate in the history list.
    TAB , RIGHT arrow
    Choose currently selected item for completion and let user to edit it on the command prompt.
    LEFT arrow
    Choose currently selected item for completion and let user to edit it in editor (fix command).
    ENTER
    Choose currently selected item for completion and execute it.
    DEL
    Remove currently selected item from the shell history.
    BACKSPACE , Ctrl-h
    Delete last pattern character.
    Ctrl-u , Ctrl-w
    Delete pattern and search again.
    Ctrl-x
    Write changes to shell history and exit.
    Ctrl-g
    Exit with empty prompt.
    Environment Variables

    hh defines the following environment variables:

    HH_CONFIG
    Configuration options:

    hicolor
    Get more colors with this option (default is monochromatic).

    monochromatic
    Ensure black and white view.

    prompt-bottom
    Show prompt at the bottom of the screen (default is prompt at the top).

    regexp
    Filter command history using regular expressions (substring match is default)

    substring
    Filter command history using substring.

    keywords
    Filter command history using keywords - an item matches if it contains all keywords of the pattern in any order.

    casesensitive
    Make history filtering case sensitive (it's case insensitive by default).

    rawhistory
    Show normal history as a default view (metric-based view is shown otherwise).

    favorites
    Show favorites as a default view (metric-based view is shown otherwise).

    duplicates
    Show duplicates in rawhistory (duplicates are discarded by default).

    blacklist
    Load list of commands to skip when processing history from ~/.hh_blacklist (built-in blacklist used otherwise).

    big-keys-skip
    Skip big history entries i.e. very long lines (default).

    big-keys-floor
    Use different sorting slot for big keys when building metrics-based view (big keys are skipped by default).

    big-keys-exit
    Exit (fail) on presence of a big key in history (big keys are skipped by default).

    warning
    Show warning.

    debug
    Show debug information.

    Example:
    export HH_CONFIG=hicolor,regexp,rawhistory

    HH_PROMPT
    Change prompt string which is user@host$ by default.

    Example:
    export HH_PROMPT="$ "

    Files
    ~/.hh_favorites
    Bookmarked favorite commands.
    ~/.hh_blacklist
    Command blacklist.
    Bash Configuration

    Optionally add the following lines to ~/.bashrc:

    export HH_CONFIG=hicolor         # get more colors
    shopt -s histappend              # append new history items to .bash_history
    export HISTCONTROL=ignorespace   # leading space hides commands from history
    export HISTFILESIZE=10000        # increase history file size (default is 500)
    export HISTSIZE=${HISTFILESIZE}  # increase history size (default is 500)
    export PROMPT_COMMAND="history -a; history -n; ${PROMPT_COMMAND}"
    # if this is interactive shell, then bind hh to Ctrl-r (for Vi mode check doc)
    if [[ $- =~ .*i.* ]]; then bind '"\C-r": "\C-a hh -- \C-j"'; fi
    

    The prompt command ensures synchronization of the history between BASH memory and history file.

    ZSH Configuration

    Optionally add the following lines to ~/.zshrc:

    export HISTFILE=~/.zsh_history   # ensure history file visibility
    export HH_CONFIG=hicolor         # get more colors
    bindkey -s "\C-r" "\eqhh\n"  # bind hh to Ctrl-r (for Vi mode check doc, experiment with --)
    
    Examples
    hh git
    Start `hh` and show only history items containing 'git'.
    hh --non-interactive git
    Print history items containing 'git' to standard output and exit.
    hh --show-configuration >> ~/.bashrc
    Append default hh configuration to your Bash profile.
    hh --show-blacklist
    Show blacklist configured for history processing.
    Author

    Written by Martin Dvorak <martin.dvorak@mindforger.com>

    Bugs

    Report bugs to https://github.com/dvorka/hstr/issues

    See Also

    history(1), bash(1), zsh(1)

    Referenced By

    The man page hstr(1) is an alias of hh(1).

    [Nov 15, 2018] Is Glark a Better Grep? (Linux.com)

    Notable quotes:
    "... stringfilenames ..."
    Nov 15, 2018 | www.linux.com

    GNU grep is one of my go-to tools on any Linux box. But grep isn't the only tool in town. If you want to try something a bit different, check out glark, a grep alternative that might be better in some situations.

    What is glark? Basically, it's a utility that's similar to grep, but it has a few features that grep does not. This includes complex expressions, Perl-compatible regular expressions, and excluding binary files. It also makes showing contextual lines a bit easier. Let's take a look.

    I installed glark (yes, annoyingly it's yet another *nix utility that has no initial cap) on Linux Mint 11. Just grab it with apt-get install glark and you should be good to go.

    Simple searches work the same way as with grep : glark string filenames . So it's pretty much a drop-in replacement for those.

    But you're interested in what makes glark special. So let's start with a complex expression, where you're looking for this or that term:

    glark -r -o thing1 thing2 *

    This will search the current directory and subdirectories for "thing1" or "thing2." When the results are returned, glark will colorize the results and each search term will be highlighted in a different color. So if you search for, say "Mozilla" and "Firefox," you'll see the terms in different colors.

    You can also use this to see if something matches within a few lines of another term. Here's an example:

    glark --and=3 -o Mozilla Firefox -o ID LXDE *

    This was a search I was using in my directory of Linux.com stories that I've edited. I used three terms I knew were in one story, and one term I knew wouldn't be. You can also just use the --and option to spot two terms within X number of lines of each other, like so:

    glark --and=3 term1 term2

    That way, both terms must be present.

    You'll note the --and option is a bit simpler than grep's context line options. However, glark tries to stay compatible with grep, so it also supports the -A , -B and -C options from grep.

    Miss the grep output format? You can tell glark to use grep format with the --grep option.

    Most, if not all, GNU grep options should work with glark .

    Before and After

    If you need to search through the beginning or end of a file, glark has the --before and --after options (short versions, -b and -a ). You can use these as percentages or as absolute number of lines. For instance:

    glark -a 20 expression *

    That will find instances of expression after line 20 in a file.

    The glark Configuration File

    Note that you can have a ~/.glarkrc that will set common options for each use of glark (unless overridden at the command line). The man page for glark does include some examples, like so:

    after-context:     1
    before-context:    6
    context:           5
    file-color:        blue on yellow
    highlight:         off
    ignore-case:       false
    quiet:             yes
    text-color:        bold reverse
    line-number-color: bold
    verbose:           false
    grep:              true
    

    Just put that in your ~/.glarkrc and customize it to your heart's content. Note that I've set mine to grep: false and added the binary-files: without-match option. You'll definitely want the quiet option to suppress all the notes about directories, etc. See the man page for more options. It's probably a good idea to spend about 10 minutes on setting up a configuration file.

    Final Thoughts

    One thing that I have noticed is that glark doesn't seem as fast as grep . When I do a recursive search through a bunch of directories containing (mostly) HTML files, I seem to get results a lot faster with grep . This is not terribly important for most of the stuff I do with either utility. However, if you're doing something where performance is a major factor, then you may want to see if grep fits the bill better.

    Is glark "better" than grep? It depends entirely on what you're doing. It has a few features that give it an edge over grep, and I think it's very much worth trying out if you've never given it a shot.

    [Nov 13, 2018] GridFTP: User's Guide

    Notable quotes:
    "... file:///path/to/my/file ..."
    "... gsiftp://hostname/path/to/remote/file ..."
    "... third party transfer ..."
    toolkit.globus.org

    Table of Contents

    1. Introduction
    2. Usage scenarios
    2.1. Basic procedure for using GridFTP (globus-url-copy)
    2.2. Accessing data in...
    3. Command line tools
    4. Graphical user interfaces
    4.1. Globus GridFTP GUI
    4.2. UberFTP
    5. Security Considerations
    5.1. Two ways to configure your server
    5.2. New authentication options
    5.3. Firewall requirements
    6. Troubleshooting
    6.1. Establish control channel connection
    6.2. Try running globus-url-copy
    6.3. If your server starts...
    7. Usage statistics collection by the Globus Alliance
    1. Introduction

    The GridFTP User's Guide provides general end user-oriented information.

    2. Usage scenarios

    2.1. Basic procedure for using GridFTP (globus-url-copy)

    If you just want the "rules of thumb" on getting started (without all the details), the following options using globus-url-copy will normally give acceptable performance:
    globus-url-copy -vb -tcp-bs 2097152 -p 4 source_url destination_url
    
    The source/destination URLs will normally be one of the following:

    file:///path/to/my/file if the file is on a file system accessible to the host on which you are running your client.

    gsiftp://hostname/path/to/remote/file if the file is on a GridFTP server.

    2.1.1. Putting files

    One of the most basic tasks in GridFTP is to "put" files, i.e., moving a file from your file system to the server. So for example, if you want to move the file /tmp/foo from a file system accessible to the host on which you are running your client to a file name /tmp/bar on a host named remote.machine.my.edu running a GridFTP server, you would use this command:
    globus-url-copy -vb -tcp-bs 2097152 -p 4 file:///tmp/foo gsiftp://remote.machine.my.edu/tmp/bar
    
    Note: In theory, remote.machine.my.edu could be the same host as the one on which you are running your client, but that is normally only done in testing situations.
    2.1.2. Getting files

    A get, i.e., moving a file from a server to your file system, would just reverse the source and destination URLs:
    Tip: Remember file: always refers to your file system.
    globus-url-copy -vb -tcp-bs 2097152 -p 4 gsiftp://remote.machine.my.edu/tmp/bar file:///tmp/foo
    
    2.1.3. Third party transfers

    Finally, if you want to move a file between two GridFTP servers (a third party transfer), both URLs would use gsiftp: as the protocol:
    globus-url-copy -vb -tcp-bs 2097152 -p 4 gsiftp://other.machine.my.edu/tmp/foo gsiftp://remote.machine.my.edu/tmp/bar
    
    2.1.4. For more information

    If you want more information and details on URLs and the command line options, the Key Concepts Guide gives basic definitions and an overview of the GridFTP protocol as well as our implementation of it.

    2.2. Accessing data in...

    2.2.1. Accessing data in a non-POSIX file data source that has a POSIX interface

    If you want to access data in a non-POSIX file data source that has a POSIX interface, the standard server will do just fine. Just make sure it is really POSIX-like (out of order writes, contiguous byte writes, etc).

    2.2.2. Accessing data in HPSS

    The following information is helpful if you want to use GridFTP to access data in HPSS. Architecturally, the Globus GridFTP server can be divided into 3 modules: the GridFTP protocol module, the data transform module, and the Data Storage Interface (DSI). In the GT4.0.x implementation, the data transform module and the DSI have been merged, although we plan to have separate, chainable, data transform modules in the future.
    Note: This architecture does NOT apply to the WU-FTPD implementation (GT3.2.1 and lower).
    2.2.2.1. GridFTP Protocol Module
    The GridFTP protocol module is the module that reads and writes to the network and implements the GridFTP protocol. This module should not need to be modified since to do so would make the server non-protocol compliant, and unable to communicate with other servers.
    2.2.2.2. Data Transform Functionality
    The data transform functionality is invoked by using the ERET (extended retrieve) and ESTO (extended store) commands. It is seldom used and bears careful consideration before it is implemented, but in the right circumstances can be very useful. In theory, any computation could be invoked this way, but it was primarily intended for cases where some simple pre-processing (such as a partial get or sub-sampling) can greatly reduce the network load. The disadvantage to this is that you remove any real option for planning, brokering, etc., and any significant computation could adversely affect the data transfer performance. Note that the client must also support the ESTO/ERET functionality as well.
    2.2.2.3. Data Storage Interface (DSI) / Data Transform module
    The Data Storage Interface (DSI) / Data Transform module knows how to read and write to the "local" storage system and can optionally transform the data. We put local in quotes because in a complicated storage system, the storage may not be directly attached, but for performance reasons, it should be relatively close (for instance on the same LAN). The interface consists of functions to be implemented such as send (get), receive (put), command (simple commands that simply succeed or fail like mkdir), etc.. Once these functions have been implemented for a specific storage system, a client should not need to know or care what is actually providing the data. The server can either be configured specifically with a specific DSI, i.e., it knows how to interact with a single class of storage system, or one particularly useful function for the ESTO/ERET functionality mentioned above is to load and configure a DSI on the fly.
    2.2.2.4. HPSS info
    Last Update: August 2005

    Working with Los Alamos National Laboratory and the High Performance Storage System (HPSS) collaboration ( http://www.hpss-collaboration.org ), we have written a Data Storage Interface (DSI) for read/write access to HPSS. This DSI would allow an existing application that uses a GridFTP compliant client to utilize an HPSS data resource. This DSI is currently in testing. Due to changes in the HPSS security mechanisms, it requires HPSS 6.2 or later, which is due to be released in Q4 2005. Distribution for the DSI has not been worked out yet, but it will *probably* be available from both Globus and the HPSS collaboration. While this code will be open source, it requires underlying HPSS libraries which are NOT open source (proprietary).
    Note: This is a purely server side change; the client does not know what DSI is running, so only a site that is already running HPSS and wants to allow GridFTP access needs to worry about access to these proprietary libraries.
    2.2.3. Accessing data in SRB

    The following information is helpful if you want to use GridFTP to access data in SRB. Architecturally, the Globus GridFTP server is again divided into the same 3 modules described in section 2.2.2 above; in the GT4.0.x implementation, the data transform module and the DSI have been merged, although we plan to have separate, chainable, data transform modules in the future.
    2.2.3.4. SRB info
    Last Update: August 2005

    Working with the SRB team at the San Diego Supercomputing Center, we have written a Data Storage Interface (DSI) for read/write access to data in the Storage Resource Broker (SRB) (http://www.npaci.edu/DICE/SRB). This DSI will enable GridFTP compliant clients to read and write data to an SRB server, similar in functionality to the sput/sget commands. This DSI is currently in testing and is not yet publicly available, but will be available from both the SRB web site (here) and the Globus web site (here). It will also be included in the next stable release of the toolkit. We are working on performance tests, but early results indicate that for wide area network (WAN) transfers, the performance is comparable.

    2.2.4. Accessing data in some other non-POSIX data source

    The following information is helpful if you want to use GridFTP to access data in a non-POSIX data source. Architecturally, the Globus GridFTP server is again divided into the same 3 modules described in section 2.2.2 above; in the GT4.0.x implementation, the data transform module and the DSI have been merged, although we plan to have separate, chainable, data transform modules in the future.
    3. Command line tools

    Please see the GridFTP Command Reference .

    [Nov 13, 2018] Resuming rsync partial (-P/--partial) on an interrupted transfer

    Notable quotes:
    "... should ..."
    May 15, 2013 | stackoverflow.com

    Glitches , May 15, 2013 at 18:06

    I am trying to back up my file server to a remote file server using rsync. Rsync is not successfully resuming when a transfer is interrupted. I used the partial option, but rsync doesn't find the file it already started because it renames it to a temporary file, and when resumed it creates a new file and starts from the beginning.

    Here is my command:

    rsync -avztP -e "ssh -p 2222" /volume1/ myaccont@backup-server-1:/home/myaccount/backup/ --exclude "@spool" --exclude "@tmp"

    When this command is run, a backup file named OldDisk.dmg from my local machine gets created on the remote machine as something like .OldDisk.dmg.SjDndj23 .

    Now when the internet connection gets interrupted and I have to resume the transfer, I have to find where rsync left off by finding the temp file like .OldDisk.dmg.SjDndj23 and rename it to OldDisk.dmg so that it sees there already exists a file that it can resume.

    How do I fix this so I don't have to manually intervene each time?

    Richard Michael , Nov 6, 2013 at 4:26

    TL;DR : Use --timeout=X (X in seconds) to change the default rsync server timeout, not --inplace .

    The issue is the rsync server processes (of which there are two, see rsync --server ... in ps output on the receiver) continue running, to wait for the rsync client to send data.

    If the rsync server processes do not receive data for a sufficient time, they will indeed time out, self-terminate and clean up by moving the temporary file to its "proper" name (e.g., no temporary suffix). You'll then be able to resume.

    If you don't want to wait for the long default timeout to cause the rsync server to self-terminate, then when your internet connection returns, log into the server and clean up the rsync server processes manually. However, you must politely terminate rsync -- otherwise, it will not move the partial file into place, but rather delete it (and thus there is no file to resume). To politely ask rsync to terminate, do not SIGKILL (e.g., -9 ), but SIGTERM (e.g., pkill -TERM -x rsync -- only an example; take care to match only the rsync processes concerned with your client).

    Fortunately there is an easier way: use the --timeout=X (X in seconds) option; it is passed to the rsync server processes as well.

    For example, if you specify rsync ... --timeout=15 ... , both the client and server rsync processes will cleanly exit if they do not send/receive data in 15 seconds. On the server, this means moving the temporary file into position, ready for resuming.
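    Applied to the command from the question, this amounts to nothing more than adding the option; everything else stays as the asker wrote it:

    rsync -avztP --timeout=15 -e "ssh -p 2222" /volume1/ myaccont@backup-server-1:/home/myaccount/backup/ --exclude "@spool" --exclude "@tmp"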

    I'm not sure how long the various rsync processes will try to send/receive data before they die (it might vary with the operating system). In my testing, the server rsync processes remain running longer than the local client. On a "dead" network connection, the client terminates with a broken pipe (e.g., no network socket) after about 30 seconds; you could experiment or review the source code. Meaning, you could try to "ride out" the bad internet connection for 15-20 seconds.

    If you do not clean up the server rsync processes (or wait for them to die), but instead immediately launch another rsync client process, two additional server processes will launch (for the other end of your new client process). Specifically, the new rsync client will not re-use/reconnect to the existing rsync server processes. Thus, you'll have two temporary files (and four rsync server processes) -- though, only the newer, second temporary file has new data being written (received from your new rsync client process).

    Interestingly, if you then clean up all rsync server processes (for example, stop your client, which will stop the new rsync servers, then SIGTERM the older rsync servers), rsync appears to merge (assemble) all the partial files into the new, properly named file. So, imagine a long-running partial copy which dies (and you think you've "lost" all the copied data), and a short-running re-launched rsync (oops!): you can stop the second client, SIGTERM the first servers, it will merge the data, and you can resume.

    Finally, a few short remarks:

    JamesTheAwesomeDude , Dec 29, 2013 at 16:50

    Just curious: wouldn't SIGINT (aka ^C ) be 'politer' than SIGTERM ? – JamesTheAwesomeDude Dec 29 '13 at 16:50

    Richard Michael , Dec 29, 2013 at 22:34

    I didn't test how the server-side rsync handles SIGINT, so I'm not sure it will keep the partial file - you could check. Note that this doesn't have much to do with Ctrl-c ; it happens that your terminal sends SIGINT to the foreground process when you press Ctrl-c , but the server-side rsync has no controlling terminal. You must log in to the server and use kill . The client-side rsync will not send a message to the server (for example, after the client receives SIGINT via your terminal Ctrl-c ) - might be interesting though. As for anthropomorphizing, not sure what's "politer". :-) – Richard Michael Dec 29 '13 at 22:34

    d-b , Feb 3, 2015 at 8:48

    I just tried this timeout argument rsync -av --delete --progress --stats --human-readable --checksum --timeout=60 --partial-dir /tmp/rsync/ rsync://$remote:/ /src/ but then it timed out during the "receiving file list" phase (which in this case takes around 30 minutes). Setting the timeout to half an hour kind of defeats the purpose. Any workaround for this? – d-b Feb 3 '15 at 8:48

    Cees Timmerman , Sep 15, 2015 at 17:10

    @user23122 --checksum reads all data when preparing the file list, which is great for many small files that change often, but should be done on-demand for large files. – Cees Timmerman Sep 15 '15 at 17:10

    [Nov 12, 2018] Linux Find Out Which Process Is Listening Upon a Port

    Jun 25, 2012 | www.cyberciti.biz

    How do I find out which running processes are associated with each open port? How do I find out what process has open tcp port 111 or udp port 7000 under Linux?

    You can use the following programs to find out about port numbers and their associated processes:

    1. netstat – a command-line tool that displays network connections, routing tables, and a number of network interface statistics.
    2. fuser – a command line tool to identify processes using files or sockets.
    3. lsof – a command line tool to list open files under Linux / UNIX to report a list of all open files and the processes that opened them.
    4. /proc/$pid/ file system – Under Linux /proc includes a directory for each running process (including kernel processes) at /proc/PID, containing information about that process, notably including the name of the process that opened the port.

    You must run the above command(s) as the root user.
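    On newer distributions where netstat is deprecated or not installed, the ss tool from the iproute2 package reports the same information; as a rough equivalent of the netstat commands used below:

    # ss -tulpn
    # ss -tlnp | grep ':80'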

    netstat example

    Type the following command:
    # netstat -tulpn
    Sample outputs:

    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
    tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      1138/mysqld     
    tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      850/portmap     
    tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1607/apache2    
    tcp        0      0 0.0.0.0:55091           0.0.0.0:*               LISTEN      910/rpc.statd   
    tcp        0      0 192.168.122.1:53        0.0.0.0:*               LISTEN      1467/dnsmasq    
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      992/sshd        
    tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      1565/cupsd      
    tcp        0      0 0.0.0.0:7000            0.0.0.0:*               LISTEN      3813/transmission
    tcp6       0      0 :::22                   :::*                    LISTEN      992/sshd        
    tcp6       0      0 ::1:631                 :::*                    LISTEN      1565/cupsd      
    tcp6       0      0 :::7000                 :::*                    LISTEN      3813/transmission
    udp        0      0 0.0.0.0:111             0.0.0.0:*                           850/portmap     
    udp        0      0 0.0.0.0:662             0.0.0.0:*                           910/rpc.statd   
    udp        0      0 192.168.122.1:53        0.0.0.0:*                           1467/dnsmasq    
    udp        0      0 0.0.0.0:67              0.0.0.0:*                           1467/dnsmasq    
    udp        0      0 0.0.0.0:68              0.0.0.0:*                           3697/dhclient   
    udp        0      0 0.0.0.0:7000            0.0.0.0:*                           3813/transmission
    udp        0      0 0.0.0.0:54746           0.0.0.0:*                           910/rpc.statd
    

    TCP port 3306 was opened by mysqld process having PID # 1138. You can verify this using /proc, enter:
    # ls -l /proc/1138/exe
    Sample outputs:

    lrwxrwxrwx 1 root root 0 2010-10-29 10:20 /proc/1138/exe -> /usr/sbin/mysqld
    

    You can use the grep command to filter the output:
    # netstat -tulpn | grep :80
    Sample outputs:

    tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1607/apache2
    
    Video demo

    https://www.youtube.com/embed/h3fJlmuGyos

    fuser command

    Find out the PID of the process that opened tcp port 7000, enter:
    # fuser 7000/tcp
    Sample outputs:

    7000/tcp:             3813
    

    Finally, find out process name associated with PID # 3813, enter:
    # ls -l /proc/3813/exe
    Sample outputs:

    lrwxrwxrwx 1 vivek vivek 0 2010-10-29 11:00 /proc/3813/exe -> /usr/bin/transmission
    

    /usr/bin/transmission is a bittorrent client, enter:
    # man transmission
    OR
    # whatis transmission
    Sample outputs:

    transmission (1)     - a bittorrent client
    
    Task: Find Out Current Working Directory Of a Process

    To find out the current working directory of a process such as the bittorrent client with pid 3813, enter:
    # ls -l /proc/3813/cwd
    Sample outputs:

    lrwxrwxrwx 1 vivek vivek 0 2010-10-29 12:04 /proc/3813/cwd -> /home/vivek
    

    OR use pwdx command, enter:
    # pwdx 3813
    Sample outputs:

    3813: /home/vivek
    
    Task: Find Out Owner Of a Process

    Use the following command to find out the owner of the process with PID 3813:
    # ps aux | grep 3813
    OR
    # ps aux | grep '[3]813'
    Sample outputs:

    vivek     3813  1.9  0.3 188372 26628 ?        Sl   10:58   2:27 transmission
    

    OR try the following ps command:
    # ps -eo pid,user,group,args,etime,lstart | grep '[3]813'
    Sample outputs:

    3813 vivek    vivek    transmission                   02:44:05 Fri Oct 29 10:58:40 2010
    

    Another option is /proc/$PID/environ, enter:
    # cat /proc/3813/environ
    OR
    # grep --color -w -a USER /proc/3813/environ
    Sample outputs (note the --color option):

    Fig.01: grep output

    lsof Command Example

    Type the command as follows:

    lsof -i :portNumber 
    lsof -i tcp:portNumber 
    lsof -i udp:portNumber 
    lsof -i :80
    lsof -i :80 | grep LISTEN
    


    Sample outputs:

    apache2   1607     root    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
    apache2   1616 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
    apache2   1617 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
    apache2   1618 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
    apache2   1619 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
    apache2   1620 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
    

    Now, get more information about pid # 1607 or 1616 and so on:
    # ps aux | grep '[1]616'
    Sample outputs:
    www-data 1616 0.0 0.0 35816 3880 ? S 10:20 0:00 /usr/sbin/apache2 -k start
    I recommend the following command to grab info about pid # 1616:
    # ps -eo pid,user,group,args,etime,lstart | grep '[1]616'
    Sample outputs:

    1616 www-data www-data /usr/sbin/apache2 -k start     03:16:22 Fri Oct 29 10:20:17 2010
    

    Where,

    pid – process ID
    user, group – user and group that own the process
    args – command with all its arguments
    etime – elapsed time since the process was started
    lstart – date and time when the process was started

    Help: I Discover an Open Port Which I Don't Recognize At All

    The file /etc/services is used to map port numbers and protocols to service names. Try matching port numbers:
    $ grep port /etc/services
    $ grep 443 /etc/services

    Sample outputs:

    https		443/tcp				# http protocol over TLS/SSL
    https		443/udp
    
    Check For rootkit

    I strongly recommend that you find out which processes are really running, especially on servers connected to high-speed Internet access. You can look for a rootkit, which is a program designed to take fundamental control (in Linux / UNIX terms "root" access, in Windows terms "Administrator" access) of a computer system, without authorization by the system's owners and legitimate managers. See how to detect / check for rootkits under Linux .

    Keep an Eye On Your Bandwidth Graphs

    Usually, rooted servers are used to send large amounts of spam or malware, or to launch DoS-style attacks on other computers.

    See also:

    See the following man pages for more information:
    $ man ps
    $ man grep
    $ man lsof
    $ man netstat
    $ man fuser

    Posted by: Vivek Gite

    The author is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting.

    [Nov 12, 2018] Shell Games Linux Magazine

    Nov 12, 2018 | www.linux-magazine.com

    First pdsh Commands

    To begin, I'll try to get the kernel version of a node by using its IP address:

    $ pdsh -w 192.168.1.250 uname -r
    192.168.1.250: 2.6.32-431.11.2.el6.x86_64
    

    The -w option means I am specifying the node(s) that will run the command. In this case, I specified the IP address of the node (192.168.1.250). After the list of nodes, I add the command I want to run, which is uname -r in this case. Notice that pdsh starts the output line by identifying the node name.

    If you need to mix rcmd modules in a single command, you can specify which module to use in the command line,

    $ pdsh -w ssh:laytonjb@192.168.1.250 uname -r
    192.168.1.250: 2.6.32-431.11.2.el6.x86_64
    

    by putting the rcmd module before the node name. In this case, I used ssh and typical ssh syntax.

    A very common way of using pdsh is to set the environment variable WCOLL to point to the file that contains the list of hosts you want to use in the pdsh command. For example, I created a subdirectory PDSH where I created a file hosts that lists the hosts I want to use:

    [laytonjb@home4 ~]$ mkdir PDSH
    [laytonjb@home4 ~]$ cd PDSH
    [laytonjb@home4 PDSH]$ vi hosts
    [laytonjb@home4 PDSH]$ more hosts
    192.168.1.4
    192.168.1.250
    

    I'm only using two nodes: 192.168.1.4 and 192.168.1.250. The first is my test system (like a cluster head node), and the second is my test compute node. You can put hosts in the file as you would on the command line separated by commas. Be sure not to put a blank line at the end of the file because pdsh will try to connect to it. You can put the environment variable WCOLL in your .bashrc file:

    export WCOLL=/home/laytonjb/PDSH/hosts
    

    As before, you can source your .bashrc file, or you can log out and log back in.

    Specifying Hosts

    I won't list all the several other ways to specify a list of nodes, because the pdsh website [9] discusses virtually all of them; however, some of the methods are pretty handy. The simplest way to specify the nodes is on the command line with the -w option:

    $ pdsh -w 192.168.1.4,192.168.1.250 uname -r
    192.168.1.4: 2.6.32-431.17.1.el6.x86_64
    192.168.1.250: 2.6.32-431.11.2.el6.x86_64
    

    In this case, I specified the node names separated by commas. You can also use a range of hosts as follows:

    pdsh -w host[1-11]
    pdsh -w host[1-4,8-11]
    

    In the first case, pdsh expands the host range to host1, host2, host3, ..., host11. In the second case, it expands the hosts similarly (host1, host2, host3, host4, host8, host9, host10, host11). You can go to the pdsh website for more information on hostlist expressions [10] .

    Another option is to have pdsh read the hosts from a file other than the one to which WCOLL points. The command shown in Listing 2 tells pdsh to take the hostnames from the file /tmp/hosts , which is listed after -w ^ (with no space between the "^" and the filename). You can also use several host files,

    Listing 2 Read Hosts from File
    $ more /tmp/hosts
    192.168.1.4
    $ more /tmp/hosts2
    192.168.1.250
    $ pdsh -w ^/tmp/hosts,^/tmp/hosts2 uname -r
    192.168.1.4: 2.6.32-431.17.1.el6.x86_64
    192.168.1.250: 2.6.32-431.11.2.el6.x86_64
    

    or you can exclude hosts from a list:

    $ pdsh -w -192.168.1.250 uname -r
    192.168.1.4: 2.6.32-431.17.1.el6.x86_64
    

    The option -w -192.168.1.250 excluded node 192.168.1.250 from the list and only output the information for 192.168.1.4. You can also exclude nodes using a node file:

    $ pdsh -w -^/tmp/hosts2 uname -r
    192.168.1.4: 2.6.32-431.17.1.el6.x86_64
    

    In this case, /tmp/hosts2 contains 192.168.1.250, which isn't included in the output. Using the -x option with a hostname,

    $ pdsh -x 192.168.1.4 uname -r
    192.168.1.250: 2.6.32-431.11.2.el6.x86_64
    $ pdsh -x ^/tmp/hosts uname -r
    192.168.1.250: 2.6.32-431.11.2.el6.x86_64
    $ more /tmp/hosts
    192.168.1.4
    

    or a list of hostnames to be excluded from the command to run also works.

    More Useful pdsh Commands

    Now I can shift into second gear and try some fancier pdsh tricks. First, I want to run a more complicated command on all of the nodes ( Listing 3 ). Notice that I put the entire command in quotes. This means the entire command is run on each node, including the first ( cat /proc/cpuinfo ) and second ( grep bogomips ) parts.

    Listing 3 Quotation Marks 1
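    Based on the description above, the quoted command takes roughly this form (a sketch using the hostnames from the earlier examples, not the magazine's exact listing):

    $ pdsh -w 192.168.1.4,192.168.1.250 "cat /proc/cpuinfo | grep bogomips"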

    In the output, the node precedes the command results, so you can tell what output is associated with which node. Notice that the BogoMips values are different on the two nodes, which is perfectly understandable because the systems are different. The first node has eight cores (four cores and four Hyper-Thread cores), and the second node has four cores.

    You can use this command across a homogeneous cluster to make sure all the nodes are reporting back the same BogoMips value. If the cluster is truly homogeneous, this value should be the same. If it's not, then I would take the offending node out of production and check it.

    A slightly different command shown in Listing 4 runs the first part contained in quotes, cat /proc/cpuinfo , on each node and the second part of the command, grep bogomips , on the node on which you issue the pdsh command.

    Listing 4 Quotation Marks 2
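    Based on the description above, the variant with only the first part in quotes would look roughly like this (again a sketch with the hostnames used earlier):

    $ pdsh -w 192.168.1.4,192.168.1.250 "cat /proc/cpuinfo" | grep bogomips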

    The point here is that you need to be careful on the command line. In this example, the differences are trivial, but other commands could have differences that might be difficult to notice.

    One very important thing to note is that pdsh does not guarantee a return of output in any particular order. If you have a list of 20 nodes, the output does not necessarily start with node 1 and increase incrementally to node 20. For example, in Listing 5 , I run vmstat on each node and get three lines of output from each node.
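    Such a command would be along the lines of the following sketch (hostnames as in the earlier examples); vmstat with no arguments prints two header lines plus one line of statistics per node:

    $ pdsh -w 192.168.1.4,192.168.1.250 vmstat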

    [Nov 12, 2018] Edge Computing vs. Cloud Computing: What's the Difference? by Andy Patrizio

    "... Download the authoritative guide: Cloud Computing 2018: Using the Cloud to Transform Your Business ..."
    Notable quotes:
    "... Download the authoritative guide: Cloud Computing 2018: Using the Cloud to Transform Your Business ..."
    "... Edge computing is a term you are going to hear more of in the coming years because it precedes another term you will be hearing a lot, the Internet of Things (IoT). You see, the formally adopted definition of edge computing is a form of technology that is necessary to make the IoT work. ..."
    "... Tech research firm IDC defines edge computing is a "mesh network of micro data centers that process or store critical data locally and push all received data to a central data center or cloud storage repository, in a footprint of less than 100 square feet." ..."
    Jan 23, 2018 | www.datamation.com

    The term cloud computing is now as firmly lodged in our technical lexicon as email and Internet, and the concept has taken firm hold in business as well. By 2020, Gartner estimates that a "no cloud" policy will be as prevalent in business as a "no Internet" policy. Which is to say no one who wants to stay in business will be without one.

    You are likely hearing a new term now, edge computing . One of the problems with technology is terms tend to come before the definition. Technologists (and the press, let's be honest) tend to throw a word around before it is well-defined, and in that vacuum come a variety of guessed definitions, of varying accuracy.

    Edge computing is a term you are going to hear more of in the coming years because it precedes another term you will be hearing a lot, the Internet of Things (IoT). You see, the formally adopted definition of edge computing is a form of technology that is necessary to make the IoT work.

    Tech research firm IDC defines edge computing as a "mesh network of micro data centers that process or store critical data locally and push all received data to a central data center or cloud storage repository, in a footprint of less than 100 square feet."

    It is typically used in IoT use cases, where edge devices collect data from IoT devices and do the processing there, or send it back to a data center or the cloud for processing. Edge computing takes some of the load off the central data center, reducing or even eliminating the processing work at the central location.

    IoT Explosion in the Cloud Era

    To understand the need for edge computing you must understand the explosive growth in IoT in the coming years, and it is coming on big. There have been a number of estimates of the growth in devices, and while they all vary, they are all in the billions of devices.

    This is taking place in a number of areas, most notably cars and industrial equipment. Cars are becoming increasingly more computerized and more intelligent. Gone are the days when the "Check engine" warning light came on and you had to guess what was wrong. Now it tells you which component is failing.

    The industrial sector is a broad one and includes sensors, RFID, industrial robotics, 3D printing, condition monitoring, smart meters, guidance, and more. This sector is sometimes called the Industrial Internet of Things (IIoT) and the overall market is expected to grow from $93.9 billion in 2014 to $151.01 billion by 2020.

    All of these sensors are taking in data but they are not processing it. Your car does some of the processing of sensor data but much of it has to be sent in to a data center for computation, monitoring and logging.

    The problem is that this would overload networks and data centers. Imagine the millions of cars on the road sending data to data centers around the country. The 4G network would be overwhelmed, as would the data centers. And if you are in California and the car maker's data center is in Texas, that's a long round trip.

    [Nov 09, 2018] Cloud-hosted data must be accessed by users over existing WANs, which creates performance issues due to bandwidth and latency constraints

    Notable quotes:
    "... Congestion problems lead to miserable performance. We have one WAN pipe, typically 1.5 Mbps to 10 MBps ..."
    Nov 09, 2018 | www.eiseverywhere.com

    However, cloud-hosted information assets must still be accessed by users over existing WAN infrastructures, where there are performance issues due to bandwidth and latency constraints.

    THE EXTREMELY UNFUNNY PART - UP TO 20x SLOWER

    Public/private cloud: thousands of companies, millions of users, varied bandwidth.

    Per-unit provisioning costs do not decrease much with size after, say, 100 units.

    Cloud data centers are potentially "far away":

    - Cloud infrastructure supports many enterprises
    - Large scale drives lower per-unit cost for data center services

    All employees will be "remote" from their data:

    - Even single-location companies will be remote from their data
    - HQ employees were previously local to servers, but not with the cloud model

    Lots of data needs to be sent over limited WAN bandwidth:

    - Congestion problems lead to miserable performance. We have one WAN pipe, typically 1.5 Mbps to 10 MBps

    Disk-based deduplication technology:

    - Identify redundant data at the byte level, not the application (e.g., file) level
    - Use disks to store vast dictionaries of byte sequences for long periods of time
    - Use symbols to transfer repetitive sequences of byte-level raw data
    - Only deduplicated data is stored on disk

    [Nov 09, 2018] Troubleshoot WAN Performance Issues SD Wan Experts by Steve Garson

    Feb 08, 2013 | www.sd-wan-experts.com

    Troubleshooting MPLS Networks

    How should you troubleshoot WAN performance issues? Your MPLS or VPLS network and your clients in field offices are complaining about slow WAN performance. Your network should be performing better, and you can't figure out what the problem is. You can contact SD-WAN-Experts to have their engineers solve your problem, but you want to try to solve the problems yourself.

    1. The first thing to check seems trivial, but you need to confirm that your router ports and switch ports are configured for the same speed and duplex. Log into your switches and check the logs for mismatches of speed or duplex. Auto-negotiation sometimes does not work properly, so a 10M port connected to a 100M port is mismatched. Or you might have a half-duplex port connected to a full-duplex port. Don't assume that a 10/100/1000 port is auto-negotiating correctly!
    2. Is your WAN performance problem consistent? Does it occur at roughly the same time of day? Or is it completely random? If you don't have the monitoring tools to measure this, you are at a big disadvantage in resolving the issues on your own.
    3. Do you have Class of Service configured on your WAN? Do you have DSCP configured on your LAN? What is the mapping of your DSCP values to CoS?
    4. What kind of applications are traversing your WAN? Are there specific apps that work better than others?
    5. Have you reviewed bandwidth utilization on your carrier's web portal to determine if you are saturating the MPLS port at any location? Even brief peaks will be enough to generate complaints. Large files, such as CAD drawings, can completely saturate a WAN link.
    6. Are you backing up or synchronizing data over the WAN? Have you confirmed 100% that this work is completed before the work day begins?
    7. Might your routing be taking multiple paths and not the most direct path? Look at your routing tables.
    8. Next, you want to see long term trend statistics. This means monitoring the SNMP streams from all your routers, using tools such as MRTG, NTOP or Cacti. A two week sampling should provide a very good picture of what is happening on your network to help troubleshoot your WAN.

    NTOP allows you to see, in a web interface, which hosts and protocols are consuming bandwidth on your network.

    MRTG (Multi-Router Traffic Grapher) provides easy-to-understand graphs of your network bandwidth utilization.


    Cacti requires a MySQL database. It is a complete network graphing solution designed to harness the power of RRDTool 's data storage and graphing functionality. Cacti provides a fast poller, advanced graph templating, multiple data acquisition methods, and user management features out of the box. All of this is wrapped in an intuitive, easy to use interface that makes sense for LAN-sized installations up to complex networks with hundreds of devices.

    Both NTOP and MRTG are free applications that run on free versions of Linux and help troubleshoot your WAN. As a result, they can be installed on almost any desktop computer that has outlived its value as a Windows desktop machine. If you are skilled with Linux and networking, and you have the time, you can install this monitoring system on your own. You will need to get your carrier to provide read-only access to your routers' SNMP traffic.
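    Before pointing MRTG or Cacti at a router, it is worth verifying that read-only SNMP access actually works; a quick sketch using snmpwalk (the community string "public" and the hostname router1.example.com are placeholders for your own values):

    $ snmpwalk -v2c -c public router1.example.com system
    $ snmpwalk -v2c -c public router1.example.com IF-MIB::ifDescr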

    But you might find it more cost effective to have the engineers at SD-WAN-Experts do the work for you. All you need to do is provide an available machine with a Linux install (Ubuntu, CentOS, RedHat, etc.) with remote access via a VPN. Our engineers will then download all the software remotely, and install and configure the machine. When we are done with the monitoring, besides understanding how to solve your problem (and solving it!), you will have your own network monitoring system installed for daily use. We'll teach you how to use it, which is quite simple with the web-based tools, so you can view it from any machine on your network.

    If you need assistance in troubleshooting your wide area network, contact SD-WAN-Experts today !

    You might also find these troubleshooting tips of interest:

    Troubleshooting MPLS Network Performance Issues

    Packet Loss and How It Affects Performance

    Troubleshooting VPLS and Ethernet Tunnels over MPLS

    [Nov 09, 2018] Storage in private clouds

    Nov 09, 2018 | www.redhat.com

    Storage in private clouds

    Storage is one of the most popular uses of cloud computing, particularly for consumers. The user-friendly design of service-based companies has helped make "cloud" a pretty normal term -- even reaching meme status in 2016.

    However, cloud storage means something very different to businesses. Big data and the Internet of Things (IoT) have made it difficult to appraise the value of data until long after it's originally stored -- when finding that piece of data becomes the key to revealing valuable business insights or unlocking an application's new feature. Even after enterprises decide where to store their data in the cloud (on-premise, off-premise, public, or private), they still have to decide how they're going to store it. What good is data that can't be found?

    It's common to store data in the cloud using software-defined storage . Software-defined storage decouples storage software from hardware so you can abstract and consolidate storage capacity in a cloud. It allows you to scale beyond whatever individual hardware components your cloud is built on.

    Two of the more common software-defined storage solutions include Ceph for structured data and Gluster for unstructured data. Ceph is a massively scalable, programmable storage system that works well with clouds -- particularly those deployed using OpenStack ® -- because of its ability to unify object, block, and file storage into 1 pool of resources. Gluster is designed to handle the requirements of traditional file storage and is particularly adept at provisioning and managing elastic storage for container-based applications.

    [Nov 09, 2018] Cloud Computing vs Edge Computing Which Will Prevail

    Notable quotes:
    "... The recent widespread of edge computing in some 5G showcases, like the major sports events, has generated the ongoing discussion about the possibility of edge computing to replace cloud computing. ..."
    "... For instance, Satya Nadella, the CEO of Microsoft, announced in Microsoft Build 2017 that the company will focus its strategy on edge computing. Indeed, edge computing will be the key for the success of smart home and driverless vehicles ..."
    "... the edge will be the first to process and store the data generated by user devices. This will reduce the latency for the data to travel to the cloud. In other words, the edge optimizes the efficiency for the cloud. ..."
    Nov 09, 2018 | www.lannerinc.com

    The recent widespread of edge computing in some 5G showcases, like the major sports events, has generated the ongoing discussion about the possibility of edge computing to replace cloud computing.

    In fact, there have been announcements from global tech leaders like Nokia and Huawei demonstrating increased efforts and resources in developing edge computing.

    For instance, Satya Nadella, the CEO of Microsoft, announced in Microsoft Build 2017 that the company will focus its strategy on edge computing. Indeed, edge computing will be the key for the success of smart home and driverless vehicles.

    ... ... ...

    Cloud or edge, which will lead the future?

    The answer to that question is "Cloud – Edge Mixing". The cloud and the edge will complement each other to offer the real IoT experience. For instance, while the cloud coordinates all the technology and offers SaaS to users, the edge will be the first to process and store the data generated by user devices. This will reduce the latency for the data to travel to the cloud. In other words, the edge optimizes the efficiency for the cloud.

    It is strongly suggested to implement open architecture white-box servers for both cloud and edge, to minimize the latency for cloud-edge synchronization and optimize the compatibility between the two. For example, Lanner Electronics offers a wide range of Intel x86 white box appliances for data centers and edge uCPE/vCPE.

    http://www.lannerinc.com/telecom-datacenter-appliances/vcpe/ucpe-platforms/

    [Nov 09, 2018] OpenStack is overkill for Docker

    Notable quotes:
    "... OpenStack's core value is to gather a pool of hypervisor-enabled computers and enable the delivery of virtual machines (VMs) on demand to users. ..."
    Nov 09, 2018 | www.techrepublic.com


    Both OpenStack and Docker were conceived to make IT more agile. OpenStack has strived to do this by turning hitherto static IT resources into elastic infrastructure, whereas Docker has reached for this goal by harmonizing development, test, and production resources, as Red Hat's Neil Levine suggests .

    But while Docker adoption has soared, OpenStack is still largely stuck in neutral. OpenStack is kept relevant by so many wanting to believe its promise, but never hitting its stride due to a host of factors , including complexity.

    And yet Docker could be just the thing to turn OpenStack's popularity into productivity. Whether a Docker-plus-OpenStack pairing is right for your enterprise largely depends on the kind of capacity your enterprise hopes to deliver. If simply Docker, OpenStack is probably overkill.

    An open source approach to delivering virtual machines

    OpenStack is an operational model for delivering virtualized compute capacity.

    Sure, some give it a more grandiose definition ("OpenStack is a set of software tools for building and managing cloud computing platforms for public and private clouds"), but if we ignore secondary services like Cinder, Heat, and Magnum, for example, OpenStack's core value is to gather a pool of hypervisor-enabled computers and enable the delivery of virtual machines (VMs) on demand to users.

    That's it.

    Not that this is a small thing. After all, without OpenStack, the hypervisor sits idle, lonesome on a single computer, with no way to expose that capacity programmatically (or otherwise) to users.

    Before cloudy systems like OpenStack or Amazon's EC2, users would typically file a help ticket with IT. An IT admin, in turn, would use a GUI or command line to create a VM, and then share the credentials with the user.

    Systems like OpenStack significantly streamline this process, enabling IT to programmatically deliver capacity to users. That's a big deal.

    Docker peanut butter, meet OpenStack jelly

    Docker, the darling of the containers world, is similar to the VM in the IaaS picture painted above.

    A Docker host is really the unit of compute capacity that users need, and not the container itself. Docker addresses what you do with a host once you've got it, but it doesn't really help you get the host in the first place.

    Docker Machine provides a client-side tool that lets you request Docker hosts from an IaaS provider (like EC2 or OpenStack or vSphere), but it's far from a complete solution. In part, this stems from the fact that Docker doesn't have a tenancy model.
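    As a rough sketch of that workflow (the driver name is illustrative, and provider credentials are assumed to be supplied via environment variables or flags), requesting a host and pointing the local Docker client at it looks something like this:

    $ docker-machine create --driver amazonec2 sandbox-host
    $ eval "$(docker-machine env sandbox-host)"
    $ docker run hello-world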

    With a hypervisor, each VM is a tenant. But in Docker, the Docker host is a tenant. You typically don't want multiple users sharing a Docker host because then they see each others' containers. So typically an enterprise will layer a cloud system underneath Docker to add tenancy. This yields a stack that looks like: hardware > hypervisor > Docker host > container.

    A common approach today would be to take OpenStack and use it as the enterprise platform to deliver capacity on demand to users. In other words, users rely on OpenStack to request a Docker host, and then they use Docker to run containers in their Docker host.

    So far, so good.

    If all you need is Docker...

    Things get more complicated when we start parsing what capacity needs delivering.

    When an enterprise wants to use Docker, they need to get Docker hosts from a data center. OpenStack can do that, and it can do it alongside delivering all sorts of other capacity to the various teams within the enterprise.

    But if all an enterprise IT team needs is Docker containers delivered, then OpenStack -- or a similar orchestration tool -- may be overkill, as VMware executive Jared Rosoff told me.

    For this sort of use case, we really need a new platform. This platform could take the form of a piece of software that an enterprise installs on all of its computers in the data center. It would expose an interface to developers that lets them programmatically create Docker hosts when they need them, and then use Docker to create containers in those hosts.

    Google has a vision for something like this with its Google Container Engine . Amazon has something similar in its EC2 Container Service . These are both APIs that developers can use to provision some Docker-compatible capacity from their data center.

    As for Docker, the company behind Docker the technology, it seems to have punted on this problem, focusing instead on what happens on the host itself.

    While we probably don't need to build up a big OpenStack cloud simply to manage Docker instances, it's worth asking what OpenStack should look like if what we wanted to deliver was only Docker hosts, and not VMs.

    Again, we see Google and Amazon tackling the problem, but when will OpenStack, or one of its supporters, do the same? The obvious candidate would be VMware, given its longstanding dominance of tooling around virtualization. But the company that solves this problem first, and in a way that comforts traditional IT with familiar interfaces yet pulls them into a cloudy future, will win, and win big.

    [Nov 09, 2018] What is Hybrid Cloud Computing

    Nov 09, 2018 | www.dummies.com

    The hybrid cloud

    A hybrid cloud is a private cloud combined with the use of public cloud services, where one or several touch points exist between the environments. The goal is to combine services and data from a variety of cloud models to create a unified, automated, and well-managed computing environment.

    Combining public services with private clouds and the data center as a hybrid is the new definition of corporate computing. Not all companies that use some public and some private cloud services have a hybrid cloud. Rather, a hybrid cloud is an environment where the private and public services are used together to create value.

    A cloud is hybrid

    A cloud is not hybrid

    [Nov 09, 2018] Why Micro Data Centers Deliver Good Things in Small Packages by Calvin Hennick

    Notable quotes:
    "... "There's a big transformation happening," says Thomas Humphrey, segment director for edge computing at APC . "Technologies like IoT have started to require that some local computing and storage happen out in that distributed IT architecture." ..."
    "... In retail, for example, edge computing will become more important as stores find success with IoT technologies such as mobile beacons, interactive mirrors and real-time tools for customer experience, behavior monitoring and marketing . ..."
    Nov 09, 2018 | solutions.cdw.com

    Enterprises are deploying self-contained micro data centers to power computing at the network edge.

    The location for data processing has changed significantly throughout the history of computing. During the mainframe era, data was processed centrally, but client/server architectures later decentralized computing. In recent years, cloud computing centralized many processing workloads, but digital transformation and the Internet of Things are poised to move computing to new places, such as the network edge .

    "There's a big transformation happening," says Thomas Humphrey, segment director for edge computing at APC . "Technologies like IoT have started to require that some local computing and storage happen out in that distributed IT architecture."

    For example, some IoT systems require processing of data at remote locations rather than a centralized data center , such as at a retail store instead of a corporate headquarters.

    To meet regulatory requirements and business needs, IoT solutions often need low latency, high bandwidth, robust security and superior reliability . To meet these demands, many organizations are deploying micro data centers: self-contained solutions that provide not only essential infrastructure, but also physical security, power and cooling and remote management capabilities.

    "Digital transformation happens at the network edge, and edge computing will happen inside micro data centers ," says Bruce A. Taylor, executive vice president at Datacenter Dynamics . "This will probably be one of the fastest growing segments -- if not the fastest growing segment -- in data centers for the foreseeable future."

    What Is a Micro Data Center?

    Delivering the IT capabilities needed for edge computing represents a significant challenge for many organizations, which need manageable and secure solutions that can be deployed easily, consistently and close to the source of computing . Vendors such as APC have begun to create comprehensive solutions that provide these necessary capabilities in a single, standardized package.

    "From our perspective at APC, the micro data center was a response to what was happening in the market," says Humphrey. "We were seeing that enterprises needed more robust solutions at the edge."

    Most micro data center solutions rely on hyperconverged infrastructure to integrate computing, networking and storage technologies within a compact footprint . A typical micro data center also incorporates physical infrastructure (including racks), fire suppression, power, cooling and remote management capabilities. In effect, the micro data center represents a sweet spot between traditional IT closets and larger modular data centers -- giving organizations the ability to deploy professional, powerful IT resources practically anywhere .

    Standardized Deployments Across the Country

    Having robust IT resources at the network edge helps to improve reliability and reduce latency, both of which are becoming more and more important as analytics programs require that data from IoT deployments be processed in real time .

    "There's always been edge computing," says Taylor. "What's new is the need to process hundreds of thousands of data points for analytics at once."

    Standardization, redundant deployment and remote management are also attractive features, especially for large organizations that may need to deploy tens, hundreds or even thousands of micro data centers. "We spoke to customers who said, 'I've got to roll out and install 3,500 of these around the country,'" says Humphrey. "And many of these companies don't have IT staff at all of these sites." To address this scenario, APC designed standardized, plug-and-play micro data centers that can be rolled out seamlessly. Additionally, remote management capabilities allow central IT departments to monitor and troubleshoot the edge infrastructure without costly and time-intensive site visits.

    In part because micro data centers operate in far-flung environments, security is of paramount concern. The self-contained nature of micro data centers ensures that only authorized personnel will have access to infrastructure equipment , and security tools such as video surveillance provide organizations with forensic evidence in the event that someone attempts to infiltrate the infrastructure.

    How Micro Data Centers Can Help in Retail, Healthcare

    Micro data centers make business sense for any organization that needs secure IT infrastructure at the network edge. But the solution is particularly appealing to organizations in fields such as retail, healthcare and finance , where IT environments are widely distributed and processing speeds are often a priority.

    In retail, for example, edge computing will become more important as stores find success with IoT technologies such as mobile beacons, interactive mirrors and real-time tools for customer experience, behavior monitoring and marketing .

    "It will be leading-edge companies driving micro data center adoption, but that doesn't necessarily mean they'll be technology companies," says Taylor. "A micro data center can power real-time analytics for inventory control and dynamic pricing in a supermarket."

    In healthcare, digital transformation is beginning to touch processes and systems ranging from medication carts to patient records, and data often needs to be available locally; for example, in case of a data center outage during surgery. In finance, the real-time transmission of data can have immediate and significant financial consequences. And in both of these fields, regulations governing data privacy make the monitoring and security features of micro data centers even more important.

    Micro data centers also have enormous potential to power smart city initiatives and to give energy companies a cost-effective way of deploying resources in remote locations , among other use cases.

    "The proliferation of edge computing will be greater than anything we've seen in the past," Taylor says. "I almost can't think of a field where this won't matter."

    Learn more about how solutions and services from CDW and APC can help your organization overcome its data center challenges.

    Micro Data Centers Versus IT Closets

    Think the micro data center is just a glorified update on the traditional IT closet? Think again.

    "There are demonstrable differences," says Bruce A. Taylor, executive vice president at Datacenter Dynamics. "With micro data centers, there's a tremendous amount of computing capacity in a very small, contained space, and we just didn't have that capability previously ."

    APC identifies three key differences between IT closets and micro data centers:

    1. Difference #1: Uptime Expectations. APC notes that, of the nearly 3 million IT closets in the U.S., over 70 percent report outages directly related to human error. In an unprotected IT closet, problems can result from something as preventable as cleaning staff unwittingly disconnecting a cable. Micro data centers, by contrast, utilize remote monitoring, video surveillance and sensors to reduce downtime related to human error.
    2. Difference #2: Cooling Configurations. The cooling of IT wiring closets is often approached both reactively and haphazardly, resulting in premature equipment failure. Micro data centers are specifically designed to assure cooling compatibility with anticipated loads.
    3. Difference #3: Power Infrastructure. Unlike many IT closets, micro data centers incorporate uninterruptible power supplies, ensuring that infrastructure equipment has the power it needs to help avoid downtime.

    Calvin Hennick is a freelance journalist who specializes in business and technology writing. He is a contributor to the CDW family of technology magazines.

    [Nov 09, 2018] Solving Office 365 and SaaS Performance Issues with SD-WAN

    Notable quotes:
    "... most of the Office365 deployments face network related problems - typically manifesting as screen freezes. Limited WAN optimization capability further complicates the problems for most SaaS applications. ..."
    "... Why enterprises overlook the importance of strategically placing cloud gateways ..."
    Nov 09, 2018 | www.brighttalk.com

    About this webinar

    Major research highlights that most Office 365 deployments face network-related problems - typically manifesting as screen freezes. Limited WAN optimization capability further complicates the problems for most SaaS applications. To compound the issue, different SaaS applications issue different guidelines for solving performance issues. We will investigate the major reasons for these problems.

    SD-WAN provides an essential set of features that solves these networking issues related to Office 365 and SaaS applications. This session will cover the following major topics:

    [Nov 09, 2018] Make sense of edge computing vs. cloud computing

    Notable quotes:
    "... We already know that computing at the edge pushes most of the data processing out to the edge of the network, close to the source of the data. Then it's a matter of dividing the processing between the edge and the centralized system, meaning a public cloud such as Amazon Web Services, Google Cloud, or Microsoft Azure. ..."
    "... The goal is to process near the device the data that it needs quickly, such as to act on. There are hundreds of use cases where reaction time is the key value of the IoT system, and consistently sending the data back to a centralized cloud prevents that value from happening. ..."
    Nov 09, 2018 | www.infoworld.com

    The internet of things is real, and it's a real part of the cloud. A key challenge is how you can get data processed from so many devices. Cisco Systems predicts that cloud traffic is likely to rise nearly fourfold by 2020, increasing from 3.9 zettabytes (ZB) per year in 2015 (the latest full year for which data is available) to 14.1 ZB per year by 2020.

    As a result, we could have the cloud computing perfect storm from the growth of IoT. After all, IoT is about processing device-generated data that is meaningful, and cloud computing is about using data from centralized computing and storage. Growth rates of both can easily become unmanageable.

    So what do we do? The answer is something called "edge computing." We already know that computing at the edge pushes most of the data processing out to the edge of the network, close to the source of the data. Then it's a matter of dividing the processing between the edge and the centralized system, meaning a public cloud such as Amazon Web Services, Google Cloud, or Microsoft Azure.

    That may sound like a client/server architecture, which also involved figuring out what to do at the client versus at the server. For IoT and any highly distributed applications, you've essentially got a client/network edge/server architecture going on, or, if your devices can't do any processing themselves, a network edge/server architecture.

    The goal is to process, near the device, the data that it needs quickly, such as data it must act on. There are hundreds of use cases where reaction time is the key value of the IoT system, and consistently sending the data back to a centralized cloud prevents that value from happening.

    You would still use the cloud for processing that is either not as time-sensitive or is not needed by the device, such as for big data analytics on data from all your devices.

    There's another dimension to this: edge computing and cloud computing are two very different things. One does not replace the other. But too many articles confuse IT pros by suggesting that edge computing will displace cloud computing. It's no more true than saying PCs would displace the datacenter.

    It makes perfect sense to create purpose-built edge computing-based applications, such as an app that places data processing in a sensor to quickly process reactions to alarms. But you're not going to place your inventory-control data and applications at the edge -- moving all compute to the edge would result in a distributed, unsecured, and unmanageable mess.

    All the public cloud providers have IoT strategies and technology stacks that include, or will include, edge computing. Edge and cloud computing can and do work well together, but edge computing is for purpose-built systems with special needs. Cloud computing is a more general-purpose platform that also can work with purpose-built systems in that old client/server model.


    David S. Linthicum is a chief cloud strategy officer at Deloitte Consulting, and an internationally recognized industry expert and thought leader. His views are his own.

    [Nov 08, 2018] GT 6.0 GridFTP

    Notable quotes:
    "... GridFTP is a high-performance, secure, reliable data transfer protocol optimized for high-bandwidth wide-area networks ..."
    Nov 08, 2018 | toolkit.globus.org

    The open source Globus® Toolkit is a fundamental enabling technology for the "Grid," letting people share computing power, databases, and other tools securely online across corporate, institutional, and geographic boundaries without sacrificing local autonomy. The toolkit includes software services and libraries for resource monitoring, discovery, and management, plus security and file management. In addition to being a central part of science and engineering projects that total nearly a half-billion dollars internationally, the Globus Toolkit is a substrate on which leading IT companies are building significant commercial Grid products.

    The toolkit includes software for security, information infrastructure, resource management, data management, communication, fault detection, and portability. It is packaged as a set of components that can be used either independently or together to develop applications. Every organization has unique modes of operation, and collaboration between multiple organizations is hindered by incompatibility of resources such as data archives, computers, and networks. The Globus Toolkit was conceived to remove obstacles that prevent seamless collaboration. Its core services, interfaces and protocols allow users to access remote resources as if they were located within their own machine room while simultaneously preserving local control over who can use resources and when.

    The Globus Toolkit has grown through an open-source strategy similar to the Linux operating system's, and distinct from proprietary attempts at resource-sharing software. This encourages broader, more rapid adoption and leads to greater technical innovation, as the open-source community provides continual enhancements to the product.

    Essential background is contained in the papers " Anatomy of the Grid " by Foster, Kesselman and Tuecke and " Physiology of the Grid " by Foster, Kesselman, Nick and Tuecke.

    Acclaim for the Globus Toolkit

    From version 1.0 in 1998 to the 2.0 release in 2002 and now the latest 4.0 version based on new open-standard Grid services, the Globus Toolkit has evolved rapidly into what The New York Times called "the de facto standard" for Grid computing. In 2002 the project earned a prestigious R&D 100 award, given by R&D Magazine in a ceremony where the Globus Toolkit was named "Most Promising New Technology" among the year's top 100 innovations. Other honors include project leaders Ian Foster of Argonne National Laboratory and the University of Chicago, Carl Kesselman of the University of Southern California's Information Sciences Institute (ISI), and Steve Tuecke of Argonne being named among 2003's top ten innovators by InfoWorld magazine, and a similar honor from MIT Technology Review, which named Globus Toolkit-based Grid computing one of "Ten Technologies That Will Change the World."

    GridFTP is a high-performance, secure, reliable data transfer protocol optimized for high-bandwidth wide-area networks. The GridFTP protocol is based on FTP, the highly popular Internet file transfer protocol. We have selected a set of protocol features and extensions already defined in IETF RFCs and added a few additional features to meet requirements from current data grid projects.

    The following guides are available for this component:

    Data Management Key Concepts For important general concepts [ pdf ].
    Admin Guide For system administrators and those installing, building and deploying GT. You should already have read the Installation Guide and Quickstart [ pdf ]
    User's Guide Describes how end-users typically interact with this component. [ pdf ].
    Developer's Guide Reference and usage scenarios for developers. [ pdf ].
    Other information available for this component are:
    Release Notes What's new with the 6.0 release for this component. [ pdf ]
    Public Interface Guide Information for all public interfaces (including APIs, commands, etc). Please note this is a subset of information in the Developer's Guide [ pdf ].
    Quality Profile Information about test coverage reports, etc. [ pdf ].
    Migrating Guide Information for migrating to this version if you were using a previous version of GT. [ pdf ]
    All GridFTP Guides (PDF only) Includes all GridFTP guides except Public Interfaces (which is a subset of the Developer's Guide)

    [Nov 08, 2018] globus-gridftp-server-control-6.2-1.el7.x86_64.rpm

    Nov 08, 2018 | centos.pkgs.org
    globus-gridftp-server-control 6.2 (x86_64) is available from the EPEL Testing repository.
    Requires
    Name Value
    /sbin/ldconfig -
    globus-xio-gsi-driver(x86-64) >= 2
    globus-xio-pipe-driver(x86-64) >= 2
    libc.so.6(GLIBC_2.14)(64bit) -
    libglobus_common.so.0()(64bit) -
    libglobus_common.so.0(GLOBUS_COMMON_14)(64bit) -
    libglobus_gss_assist.so.3()(64bit) -
    libglobus_gssapi_error.so.2()(64bit) -
    libglobus_gssapi_gsi.so.4()(64bit) -
    libglobus_gssapi_gsi.so.4(globus_gssapi_gsi)(64bit) -
    libglobus_openssl_error.so.0()(64bit) -
    libglobus_xio.so.0()(64bit) -
    rtld(GNU_HASH) -
    See Also
    Package Description
    globus-gridftp-server-control-devel-6.1-1.el7.x86_64.rpm Globus Toolkit - Globus GridFTP Server Library Development Files
    globus-gridftp-server-devel-12.5-1.el7.x86_64.rpm Globus Toolkit - Globus GridFTP Server Development Files
    globus-gridftp-server-progs-12.5-1.el7.x86_64.rpm Globus Toolkit - Globus GridFTP Server Programs
    globus-gridmap-callout-error-2.5-1.el7.x86_64.rpm Globus Toolkit - Globus Gridmap Callout Errors
    globus-gridmap-callout-error-devel-2.5-1.el7.x86_64.rpm Globus Toolkit - Globus Gridmap Callout Errors Development Files
    globus-gridmap-callout-error-doc-2.5-1.el7.noarch.rpm Globus Toolkit - Globus Gridmap Callout Errors Documentation Files
    globus-gridmap-eppn-callout-1.13-1.el7.x86_64.rpm Globus Toolkit - Globus gridmap ePPN callout
    globus-gridmap-verify-myproxy-callout-2.9-1.el7.x86_64.rpm Globus Toolkit - Globus gridmap myproxy callout
    globus-gsi-callback-5.13-1.el7.x86_64.rpm Globus Toolkit - Globus GSI Callback Library
    globus-gsi-callback-devel-5.13-1.el7.x86_64.rpm Globus Toolkit - Globus GSI Callback Library Development Files
    globus-gsi-callback-doc-5.13-1.el7.noarch.rpm Globus Toolkit - Globus GSI Callback Library Documentation Files
    globus-gsi-cert-utils-9.16-1.el7.x86_64.rpm Globus Toolkit - Globus GSI Cert Utils Library
    globus-gsi-cert-utils-devel-9.16-1.el7.x86_64.rpm Globus Toolkit - Globus GSI Cert Utils Library Development Files
    globus-gsi-cert-utils-doc-9.16-1.el7.noarch.rpm Globus Toolkit - Globus GSI Cert Utils Library Documentation Files
    globus-gsi-cert-utils-progs-9.16-1.el7.noarch.rpm Globus Toolkit - Globus GSI Cert Utils Library Programs
    Provides
    Name Value
    globus-gridftp-server-control = 6.1-1.el7
    globus-gridftp-server-control(x86-64) = 6.1-1.el7
    libglobus_gridftp_server_control.so.0()(64bit) -
    Required By
    Download
    Type URL
    Binary Package globus-gridftp-server-control-6.1-1.el7.x86_64.rpm
    Source Package globus-gridftp-server-control-6.1-1.el7.src.rpm
    Install Howto
    1. Download the latest epel-release rpm from
      http://dl.fedoraproject.org/pub/epel/7/x86_64/
      
    2. Install epel-release rpm:
      # rpm -Uvh epel-release*rpm
      
    3. Install globus-gridftp-server-control rpm package:
      # yum install globus-gridftp-server-control
      
    Files
    Path
    /usr/lib64/libglobus_gridftp_server_control.so.0
    /usr/lib64/libglobus_gridftp_server_control.so.0.6.1
    /usr/share/doc/globus-gridftp-server-control-6.1/README
    /usr/share/licenses/globus-gridftp-server-control-6.1/GLOBUS_LICENSE
    Changelog
    2018-04-07 - Mattias Ellert <mattias.ellert@physics.uu.se> - 6.1-1
    - GT6 update: Don't error if acquire_cred fails when vhost env is set
    

    [Nov 08, 2018] 9 Aspera Sync Alternatives Top Best Alternatives

    Nov 08, 2018 | www.topbestalternatives.com

    Aspera Sync is a high-performance, scalable, multi-directional asynchronous file replication and synchronization tool, purpose-built by Aspera to overcome the performance and scalability shortcomings of conventional synchronization tools like rsync. It can scale up and out for maximum-speed replication and synchronization over WANs, handling today's largest big-data file stores -- from millions of individual files to the very largest file sizes. Notable capabilities include the FASP transport advantage, high performance, a smarter replacement for rsync, support for complex synchronization topologies, and advanced file handling. Robust backup and recovery strategies protect business-critical data and systems so enterprises can quickly recover critical files, directories or an entire site in the event of a disaster. These strategies can be undermined, however, by slow transfer speeds between primary and backup sites, resulting in incomplete backups and extended recovery times. With FASP-powered transfers, replication fits within the small operational window so you can meet your recovery point objective (RPO) and recovery time objective (RTO).

    1. Syncthing: Syncthing replaces proprietary sync and cloud services with something open, trustworthy and decentralized. Your data is your data alone, and you should be able to choose where it is stored, whether it is shared with a third party, and how it is transmitted over the Internet. Syncthing is a file sharing application that lets you share documents between multiple devices in a convenient way through its web-based Graphical User Interface (GUI).

    [Nov 08, 2018] Can rsync resume after being interrupted?

    Sep 15, 2012 | unix.stackexchange.com

    Tim , Sep 15, 2012 at 23:36

    I used rsync to copy a large number of files, but my OS (Ubuntu) restarted unexpectedly.

    After reboot, I ran rsync again, but from the output on the terminal, I found that rsync still copied those already copied before. But I heard that rsync is able to find differences between source and destination, and therefore to just copy the differences. So I wonder in my case if rsync can resume what was left last time?

    Gilles , Sep 16, 2012 at 1:56

    Yes, rsync won't copy again files that it's already copied. There are a few edge cases where its detection can fail. Did it copy all the already-copied files? What options did you use? What were the source and target filesystems? If you run rsync again after it's copied everything, does it copy again? – Gilles Sep 16 '12 at 1:56

    Tim , Sep 16, 2012 at 2:30

    @Gilles: Thanks! (1) I think I saw rsync copied the same files again from its output on the terminal. (2) Options are same as in my other post, i.e. sudo rsync -azvv /home/path/folder1/ /home/path/folder2 . (3) Source and target are both NTFS, but source is an external HDD, and target is an internal HDD. (4) It is now running and hasn't finished yet. – Tim Sep 16 '12 at 2:30

    jwbensley , Sep 16, 2012 at 16:15

    There is also the --partial flag to resume partially transferred files (useful for large files) – jwbensley Sep 16 '12 at 16:15

    Tim , Sep 19, 2012 at 5:20

    @Gilles: What are some "edge cases where its detection can fail"? – Tim Sep 19 '12 at 5:20

    Gilles , Sep 19, 2012 at 9:25

    @Tim Off the top of my head, there's at least clock skew, and differences in time resolution (a common issue with FAT filesystems which store times in 2-second increments, the --modify-window option helps with that). – Gilles Sep 19 '12 at 9:25

    DanielSmedegaardBuus , Nov 1, 2014 at 12:32

    First of all, regarding the "resume" part of your question, --partial just tells the receiving end to keep partially transferred files if the sending end disappears as though they were completely transferred.

    While transferring files, they are temporarily saved as hidden files in their target folders (e.g. .TheFileYouAreSending.lRWzDC ), or a specifically chosen folder if you set the --partial-dir switch. When a transfer fails and --partial is not set, this hidden file will remain in the target folder under this cryptic name, but if --partial is set, the file will be renamed to the actual target file name (in this case, TheFileYouAreSending ), even though the file isn't complete. The point is that you can later complete the transfer by running rsync again with either --append or --append-verify .

    So, --partial doesn't itself resume a failed or cancelled transfer. To resume it, you'll have to use one of the aforementioned flags on the next run. So, if you need to make sure that the target won't ever contain files that appear to be fine but are actually incomplete, you shouldn't use --partial . Conversely, if you want to make sure you never leave behind stray failed files that are hidden in the target directory, and you know you'll be able to complete the transfer later, --partial is there to help you.

    With regards to the --append switch mentioned above, this is the actual "resume" switch, and you can use it whether or not you're also using --partial . Actually, when you're using --append , no temporary files are ever created. Files are written directly to their targets. In this respect, --append gives the same result as --partial on a failed transfer, but without creating those hidden temporary files.

    So, to sum up, if you're moving large files and you want the option to resume a cancelled or failed rsync operation from the exact point that rsync stopped, you need to use the --append or --append-verify switch on the next attempt.
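
    A minimal sketch of such a resumable copy, using the asker's paths and assuming rsync 3.0.0 or newer:

    # keep partial files around and append to them (with verification) on the next run
    rsync -av --partial --append-verify /home/path/folder1/ /home/path/folder2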

    As @Alex points out below, since version 3.0.0 rsync now has a new option, --append-verify , which behaves like --append did before that switch existed. You probably always want the behaviour of --append-verify , so check your version with rsync --version . If you're on a Mac and not using rsync from homebrew , you'll (at least up to and including El Capitan) have an older version and need to use --append rather than --append-verify . Why they didn't keep the behaviour on --append and instead named the newcomer --append-no-verify is a bit puzzling. Either way, --append on rsync before version 3 is the same as --append-verify on the newer versions.

    --append-verify isn't dangerous: It will always read and compare the data on both ends and not just assume they're equal. It does this using checksums, so it's easy on the network, but it does require reading the shared amount of data on both ends of the wire before it can actually resume the transfer by appending to the target.

    Second of all, you said that you "heard that rsync is able to find differences between source and destination, and therefore to just copy the differences."

    That's correct, and it's called delta transfer, but it's a different thing. To enable this, you add the -c , or --checksum switch. Once this switch is used, rsync will examine files that exist on both ends of the wire. It does this in chunks, compares the checksums on both ends, and if they differ, it transfers just the differing parts of the file. But, as @Jonathan points out below, the comparison is only done when files are of the same size on both ends -- different sizes will cause rsync to upload the entire file, overwriting the target with the same name.

    This requires a bit of computation on both ends initially, but can be extremely efficient at reducing network load if for example you're frequently backing up very large files fixed-size files that often contain minor changes. Examples that come to mind are virtual hard drive image files used in virtual machines or iSCSI targets.

    It is notable that if you use --checksum to transfer a batch of files that are completely new to the target system, rsync will still calculate their checksums on the source system before transferring them. Why I do not know :)

    So, in short:

    If you're often using rsync to just "move stuff from A to B" and want the option to cancel that operation and later resume it, don't use --checksum , but do use --append-verify .

    If you're using rsync to back up stuff often, using --append-verify probably won't do much for you, unless you're in the habit of sending large files that continuously grow in size but are rarely modified once written. As a bonus tip, if you're backing up to storage that supports snapshotting such as btrfs or zfs , adding the --inplace switch will help you reduce snapshot sizes since changed files aren't recreated but rather the changed blocks are written directly over the old ones. This switch is also useful if you want to avoid rsync creating copies of files on the target when only minor changes have occurred.
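
    A hedged example along these lines, for a large, mostly-static VM disk image backed up to a snapshotting filesystem (the paths and host name are placeholders):

    # compare content in chunks and rewrite only the changed blocks in place
    rsync -av --checksum --inplace /var/lib/libvirt/images/vm01.img backuphost:/backups/images/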

    When using --append-verify , rsync will behave just like it always does on all files that are the same size. If they differ in modification or other timestamps, it will overwrite the target with the source without scrutinizing those files further. --checksum will compare the contents (checksums) of every file pair of identical name and size.

    UPDATED 2015-09-01 Changed to reflect points made by @Alex (thanks!)

    UPDATED 2017-07-14 Changed to reflect points made by @Jonathan (thanks!)

    Alex , Aug 28, 2015 at 3:49

    According to the documentation --append does not check the data, but --append-verify does. Also, as @gaoithe points out in a comment below, the documentation claims --partial does resume from previous files. – Alex Aug 28 '15 at 3:49

    DanielSmedegaardBuus , Sep 1, 2015 at 13:29

    Thank you @Alex for the updates. Indeed, since 3.0.0, --append no longer compares the source to the target file before appending. Quite important, really! --partial does not itself resume a failed file transfer, but rather leaves it there for a subsequent --append(-verify) to append to it. My answer was clearly misrepresenting this fact; I'll update it to include these points! Thanks a lot :) – DanielSmedegaardBuus Sep 1 '15 at 13:29

    Cees Timmerman , Sep 15, 2015 at 17:21

    This says --partial is enough. – Cees Timmerman Sep 15 '15 at 17:21

    DanielSmedegaardBuus , May 10, 2016 at 19:31

    @CMCDragonkai Actually, check out Alexander's answer below about --partial-dir -- looks like it's the perfect bullet for this. I may have missed something entirely ;) – DanielSmedegaardBuus May 10 '16 at 19:31

    Jonathan Y. , Jun 14, 2017 at 5:48

    What's your level of confidence in the described behavior of --checksum ? According to the man it has more to do with deciding which files to flag for transfer than with delta-transfer (which, presumably, is rsync 's default behavior). – Jonathan Y. Jun 14 '17 at 5:48

    [Nov 08, 2018] How to remove all installed dependent packages while removing a package in centos 7?

    Aug 16, 2016 | unix.stackexchange.com

    ukll , Aug 16, 2016 at 15:26

    I am kinda new to Linux so this may be a dumb question. I searched both in stackoverflow and google but could not find any answer.

    I am using CentOS 7. I installed okular, which is a PDF viewer, with the command:

    sudo yum install okular
    

    It installed 37 dependent packages in order to install okular.

    But I wasn't satisfied with the features of the application and I decided to remove it. The problem is that if I remove it with the command:

    sudo yum autoremove okular
    

    It only removes four dependent packages.

    And if I remove it with the command:

    sudo yum remove okular
    

    It removes only one package which is okular.x86_64.

    Now, my question is: is there a way to remove all 37 installed packages with a single command, or do I have to remove them one by one?

    Thank you in advance.

    Jason Powell , Aug 16, 2016 at 17:25

    Personally, I don't like yum plugins because they don't work a lot of the time, in my experience.

    You can use the yum history command to view your yum history.

    [root@testbox ~]# yum history
    Loaded plugins: product-id, rhnplugin, search-disabled-repos, subscription-manager, verify, versionlock
    ID     | Login user               | Date and time    | Action(s)      | Altered
    ----------------------------------------------------------------------------------
    19 | Jason <jason>  | 2016-06-28 09:16 | Install        |   10
    

    You can find info about the transaction by doing yum history info <transaction id> . So:

    yum history info 19 would tell you all the packages that were installed with transaction 19 and the command line that was used to install the packages. If you want to undo transaction 19, you would run yum history undo 19 .

    Alternatively, if you just wanted to undo the last transaction you did (you installed a software package and didn't like it), you could just do yum history undo last
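
    For example, a hedged walk-through against the okular install above (transaction id 19 is just the one from the sample output; yours will differ):

    yum history list okular    # find the transaction that installed okular
    yum history info 19        # inspect everything that transaction pulled in
    yum history undo 19        # roll it back (or simply: yum history undo last)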

    Hope this helps!

    ukll , Aug 16, 2016 at 18:34

    Firstly, thank you for your excellent answer. And secondly, when I did sudo yum history , it showed only actions with id 30 through 49. Is there a way to view all actions history (including with id 1-29)? – ukll Aug 16 '16 at 18:34

    Jason Powell , Aug 16, 2016 at 19:00

    You're welcome! Yes, there is a way to show all of your history. Just do yum history list all . – Jason Powell Aug 16 '16 at 19:00

    ,

    yum remove package_name will remove only that package (together with anything that depends on it), not its dependencies.

    yum autoremove will remove the unused dependencies

    To remove a package with its dependencies, you need to install a yum plugin called remove-with-leaves.

    To install it type:

    yum install yum-plugin-remove-with-leaves
    

    To remove package_name type:

    yum remove package_name --remove-leaves
    

    [Nov 08, 2018] collectl

    Nov 08, 2018 | collectl.sourceforge.net
    Collectl
    Latest Version: 4.2.0 June 12, 2017
    To use it download the tarball, unpack it and run ./INSTALL
    Collectl now supports OpenStack Clouds
    Colmux now part of collectl package
    Looking for colplot ? It's now here!

    Remember, to get lustre support contact Peter Piela to get his custom plugin.


    There are a number of times in which you find yourself needing performance data. These can include benchmarking, monitoring a system's general health or trying to determine what your system was doing at some time in the past. Sometimes you just want to know what the system is doing right now. Depending on what you're doing, you often end up using different tools, each designed for that specific situation.

    Unlike most monitoring tools, which focus on a small set of statistics, format their output in only one way, or run either interactively or as a daemon but not both, collectl tries to do it all. You can choose to monitor any of a broad set of subsystems which currently include buddyinfo, cpu, disk, inodes, infiniband, lustre, memory, network, nfs, processes, quadrics, slabs, sockets and tcp.

    The following is an example taken while writing a large file and running the collectl command with no arguments. By default it shows cpu, network and disk stats in brief format . The key point of this format is all output appears on a single line making it much easier to spot spikes or other anomalies in the output:

    collectl
    
    #<--------CPU--------><-----------Disks-----------><-----------Network---------->
    #cpu sys inter  ctxsw KBRead  Reads  KBWrit Writes netKBi pkt-in  netKBo pkt-out
      37  37   382    188      0      0   27144    254     45     68       3      21
      25  25   366    180     20      4   31280    296      0      1       0       0
      25  25   368    183      0      0   31720    275      2     20       0       1
    
    In this example, taken while writing to an NFS mounted filesystem, collectl displays interrupts, memory usage and nfs activity with timestamps. Keep in mind that you can mix and match any data and in the case of brief format you simply need to have a window wide enough to accommodate your output.
    collectl -sjmf -oT
    
    #         <-------Int--------><-----------Memory-----------><------NFS Totals------>
    #Time     Cpu0 Cpu1 Cpu2 Cpu3 Free Buff Cach Inac Slab  Map  Reads Writes Meta Comm
    08:36:52  1001   66    0    0   2G 201M 609M 363M 219M 106M      0      0    5    0
    08:36:53   999 1657    0    0   2G 201M   1G 918M 252M 106M      0  12622    0    2
    08:36:54  1001 7488    0    0   1G 201M   1G   1G 286M 106M      0  20147    0    2
    
    You can also display the same information in verbose format , in which case you get a single line for each type of data at the expense of more screen real estate, as can be seen in this example of network data during NFS writes. Note how you can actually see the network traffic stall while waiting for the server to physically write the data.
    collectl -sn --verbose -oT
    
    # NETWORK SUMMARY (/sec)
    #          KBIn  PktIn SizeIn  MultI   CmpI  ErrIn  KBOut PktOut  SizeO   CmpO ErrOut
    08:46:35   3255  41000     81      0      0      0 112015  78837   1454      0      0
    08:46:36      0      9     70      0      0      0     29     25   1174      0      0
    08:46:37      0      2     70      0      0      0      0      2    134      0      0
    
    In this last example we see what detail format looks like, where we see multiple lines of output for a particular type of data, which in this case is interrupts. We've also elected to show the time in msecs.
    collectl -sJ -oTm
    
    #              Int    Cpu0   Cpu1   Cpu2   Cpu3   Type            Device(s)
    08:52:32.002   225       0      4      0      0   IO-APIC-level   ioc0
    08:52:32.002   000    1000      0      0      0   IO-APIC-edge    timer
    08:52:32.002   014       0      0     18      0   IO-APIC-edge    ide0
    08:52:32.002   090       0      0      0  15461   IO-APIC-level   eth1
    
    Collectl output can also be saved in a rolling set of logs for later playback or displayed interactively in a variety of formats. If all that isn't enough there are plugins that allow you to report data in alternate formats or even send them over a socket to remote tools such as ganglia or graphite. You can even create files in space-separated format for plotting with external packages like gnuplot; colplot, part of the collectl-utils project, provides a web-based interface to gnuplot for exactly this.
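
    As a hedged illustration of the record-and-playback workflow mentioned above (the directory and recorded file name are placeholders):

    # record the default subsystems to rolling raw files under /var/log/collectl
    collectl -f /var/log/collectl
    # later, play a recording back, showing cpu, disk and network data
    collectl -p /var/log/collectl/myhost-20181108-000000.raw.gz -scdn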

    Are you a big user of the top command? Have you ever wanted to look across a cluster to see what the top processes are? Better yet, how about using iostat across a cluster? Or maybe vmstat or even looking at top network interfaces across a cluster? Look no more because if collectl reports it for one node, colmux can do it across a cluster AND you can sort by any column of your choice by simply using the right/left arrow keys.

    Collectl and Colmux run on all Linux distros and are available in Red Hat and Debian repositories, so getting it may be as simple as running yum or apt-get. Note that since colmux has just been merged into the collectl V4.0.0 package it may not yet be available in the repository of your choice, and you should install collectl-utils V4.8.2 or earlier to get it for the time being.

    Collectl requires perl, which is usually installed by default on all major Linux distros, and optionally uses Time::HiRes, which is also usually installed and allows collectl to use fractional intervals and display timestamps in msec. The Compress::Zlib module is usually installed as well, and if present the recorded data will be compressed and therefore use on average 90% less storage when recording to a file.

    If you're still not sure if collectl is right for you, take a couple of minutes to look at the Collectl Tutorial to get a better feel for what collectl can do. Also be sure to check back and see what's new on the website, sign up for a Mailing List or watch the Forums .

    "I absolutely love it and have been using it extensively for months."
    Kevin Closson: Performance Architect, EMC
    "Collectl is indispensable to any system admin."
    Matt Heaton: President, Bluehost.com

    [Nov 08, 2018] How to find which process is regularly writing to disk?

    Notable quotes:
    "... tick...tick...tick...trrrrrr ..."
    "... /var/log/syslog ..."
    Nov 08, 2018 | unix.stackexchange.com

    Cedric Martin , Jul 27, 2012 at 4:31

    How can I find which process is constantly writing to disk?

    I like my workstation to be close to silent and I just build a new system (P8B75-M + Core i5 3450s -- the 's' because it has a lower max TDP) with quiet fans etc. and installed Debian Wheezy 64-bit on it.

    And something is getting on my nerves: I can hear some kind of pattern as if the hard disk was writing or seeking something ( tick...tick...tick...trrrrrr rinse and repeat every second or so).

    I had a similar issue in the past (many, many years ago) and it turned out it was some CUPS log or something, and I simply redirected that one (not important) logging to a (real) RAM disk.

    But here I'm not sure.

    I tried the following:

    ls -lR /var/log > /tmp/a.tmp && sleep 5 && ls -lR /var/log > /tmp/b.tmp && diff /tmp/?.tmp
    

    but nothing is changing there.

    Now the strange thing is that I also hear the pattern when the prompt asking me to enter my LVM decryption passphrase is showing.

    Could it be something in the kernel/system I just installed or do I have a faulty harddisk?

    hdparm -tT /dev/sda reports a correct HD speed (130 MB/s non-cached, SATA 6 Gb/s) and I've already installed and compiled from big sources (Emacs) without issue so I don't think the system is bad.

    (HD is a Seagate Barracuda 500GB)

    Mat , Jul 27, 2012 at 6:03

    Are you sure it's a hard drive making that noise, and not something else? (Check the fans, including PSU fan. Had very strange clicking noises once when a very thin cable was too close to a fan and would sometimes very slightly touch the blades and bounce for a few "clicks"...) – Mat Jul 27 '12 at 6:03

    Cedric Martin , Jul 27, 2012 at 7:02

    @Mat: I'll take the hard drive outside of the case (the connectors should be long enough) to be sure and I'll report back ; ) – Cedric Martin Jul 27 '12 at 7:02

    camh , Jul 27, 2012 at 9:48

    Make sure your disk filesystems are mounted relatime or noatime. File reads can be causing writes to inodes to record the access time. – camh Jul 27 '12 at 9:48

    mnmnc , Jul 27, 2012 at 8:27

    Did you try to examine what programs like iotop are showing? It will tell you exactly which process is currently writing to the disk.

    example output:

    Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
      TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
        1 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % init
        2 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kthreadd]
        3 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/0]
        6 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/0]
        7 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [watchdog/0]
        8 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/1]
     1033 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [flush-8:0]
       10 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/1]
    

    Cedric Martin , Aug 2, 2012 at 15:56

    thanks for that tip. I didn't know about iotop . On Debian I did an apt-cache search iotop to find out that I had to apt-get iotop . Very cool command! – Cedric Martin Aug 2 '12 at 15:56

    ndemou , Jun 20, 2016 at 15:32

    I use iotop -o -b -d 10 which every 10secs prints a list of processes that read/wrote to disk and the amount of IO bandwidth used. – ndemou Jun 20 '16 at 15:32

    scai , Jul 27, 2012 at 10:48

    You can enable IO debugging via echo 1 > /proc/sys/vm/block_dump and then watch the debugging messages in /var/log/syslog . This has the advantage of obtaining some type of log file with past activities whereas iotop only shows the current activity.
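
    A hedged sketch of a short tracing session (run as root; reading the messages with dmesg avoids the syslog feedback loop discussed in the comments below):

    echo 1 > /proc/sys/vm/block_dump    # start logging block I/O to the kernel log
    sleep 30                            # let some activity accumulate
    dmesg | tail -n 50                  # see which processes touched the disk
    echo 0 > /proc/sys/vm/block_dump    # switch the debugging off again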

    dan3 , Jul 15, 2013 at 8:32

    It is absolutely crazy to leave syslogging enabled when block_dump is active. Logging causes disk activity, which causes logging, which causes disk activity etc. Better stop syslog before enabling this (and use dmesg to read the messages) – dan3 Jul 15 '13 at 8:32

    scai , Jul 16, 2013 at 6:32

    You are absolutely right, although the effect isn't as dramatic as you describe it. If you just want to have a short peek at the disk activity there is no need to stop the syslog daemon. – scai Jul 16 '13 at 6:32

    dan3 , Jul 16, 2013 at 7:22

    I've tried it about 2 years ago and it brought my machine to a halt. One of these days when I have nothing important running I'll try it again :) – dan3 Jul 16 '13 at 7:22

    scai , Jul 16, 2013 at 10:50

    I tried it, nothing really happened. Especially because of file system buffering. A write to syslog doesn't immediately trigger a write to disk. – scai Jul 16 '13 at 10:50

    Volker Siegel , Apr 16, 2014 at 22:57

    I would assume there is general rate limiting in place for the log messages, which handles this case too(?) – Volker Siegel Apr 16 '14 at 22:57

    Gilles , Jul 28, 2012 at 1:34

    Assuming that the disk noises are due to a process causing a write and not to some disk spindown problem, you can use the audit subsystem (install the auditd package). Put a watch on the sync call and its friends:
    auditctl -S sync -S fsync -S fdatasync -a exit,always
    

    Watch the logs in /var/log/audit/audit.log . Be careful not to do this if the audit logs themselves are flushed! Check in /etc/auditd.conf that the flush option is set to none .

    If files are being flushed often, a likely culprit is the system logs. For example, if you log failed incoming connection attempts and someone is probing your machine, that will generate a lot of entries; this can cause a disk to emit machine gun-style noises. With the basic log daemon sysklogd, check /etc/syslog.conf : if a log file name is not preceded by - , then that log is flushed to disk after each write.
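
    For instance, a hedged sample /etc/syslog.conf entry (the facility and path are illustrative); the leading - tells sysklogd not to sync the file after every message:

    mail.*          -/var/log/mail.log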

    Gilles , Mar 23 at 18:24

    @StephenKitt Huh. No. The asker mentioned Debian so I've changed it to a link to the Debian package. – Gilles Mar 23 at 18:24

    cas , Jul 27, 2012 at 9:40

    It might be your drives automatically spinning down, lots of consumer-grade drives do that these days. Unfortunately on even a lightly loaded system, this results in the drives constantly spinning down and then spinning up again, especially if you're running hddtemp or similar to monitor the drive temperature (most drives stupidly don't let you query the SMART temperature value without spinning up the drive - cretinous!).

    This is not only annoying, it can wear out the drives faster as many drives have only a limited number of park cycles. e.g. see https://bugs.launchpad.net/ubuntu/+source/hdparm/+bug/952556 for a description of the problem.

    I disable idle-spindown on all my drives with the following bit of shell code. You could put it in an /etc/rc.boot script, or in /etc/rc.local or similar.

    for disk in /dev/sd? ; do
      # $disk already holds the full device path (e.g. /dev/sda)
      /sbin/hdparm -q -S 0 "$disk"
    done
    

    Cedric Martin , Aug 2, 2012 at 16:03

    that you can't query SMART readings without spinning up the drive leaves me speechless :-/ Now obviously the "spinning down" issue can become quite complicated. Regarding disabling the spinning down: wouldn't that in itself cause the HD to wear out faster? I mean: it's never ever "resting" as long as the system is on then? – Cedric Martin Aug 2 '12 at 16:03

    cas , Aug 2, 2012 at 21:42

    IIRC you can query some SMART values without causing the drive to spin up, but temperature isn't one of them on any of the drives i've tested (incl models from WD, Seagate, Samsung, Hitachi). Which is, of course, crazy because concern over temperature is one of the reasons for idling a drive. re: wear: AIUI 1. constant velocity is less wearing than changing speed. 2. the drives have to park the heads in a safe area and a drive is only rated to do that so many times (IIRC up to a few hundred thousand - easily exceeded if the drive is idling and spinning up every few seconds) – cas Aug 2 '12 at 21:42

    Micheal Johnson , Mar 12, 2016 at 20:48

    It's a long debate regarding whether it's better to leave drives running or to spin them down. Personally I believe it's best to leave them running - I turn my computer off at night and when I go out but other than that I never spin my drives down. Some people prefer to spin them down, say, at night if they're leaving the computer on or if the computer's idle for a long time, and in such cases the advantage of spinning them down for a few hours versus leaving them running is debatable. What's never good though is when the hard drive repeatedly spins down and up again in a short period of time. – Micheal Johnson Mar 12 '16 at 20:48

    Micheal Johnson , Mar 12, 2016 at 20:51

    Note also that spinning the drive down after it's been idle for a few hours is a bit silly, because if it's been idle for a few hours then it's likely to be used again within an hour. In that case, it would seem better to spin the drive down promptly if it's idle (like, within 10 minutes), but it's also possible for the drive to be idle for a few minutes when someone is using the computer and is likely to need the drive again soon. – Micheal Johnson Mar 12 '16 at 20:51

    ,

    I just found that s.m.a.r.t was causing an external USB disk to spin up again and again on my raspberry pi. Although SMART is generally a good thing, I decided to disable it again and since then it seems that unwanted disk activity has stopped

    [Nov 08, 2018] Determining what process is bound to a port

    Mar 14, 2011 | unix.stackexchange.com
    I know that using the command:
    lsof -i TCP

    (or some variant of parameters with lsof) I can determine which process is bound to a particular port. This is useful say if I'm trying to start something that wants to bind to 8080 and something else is already using that port, but I don't know what.

    Is there an easy way to do this without using lsof? I spend time working on many systems and lsof is often not installed.

    Cakemox , Mar 14, 2011 at 20:48

    netstat -lnp will list the pid and process name next to each listening port. This will work under Linux, but not all others (like AIX.) Add -t if you want TCP only.
    # netstat -lntp
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
    tcp        0      0 0.0.0.0:24800           0.0.0.0:*               LISTEN      27899/synergys
    tcp        0      0 0.0.0.0:8000            0.0.0.0:*               LISTEN      3361/python
    tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      2264/mysqld
    tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      22964/apache2
    tcp        0      0 192.168.99.1:53         0.0.0.0:*               LISTEN      3389/named
    tcp        0      0 192.168.88.1:53         0.0.0.0:*               LISTEN      3389/named
    

    etc.

    xxx , Mar 14, 2011 at 21:01

    Cool, thanks. Looks like that that works under RHEL, but not under Solaris (as you indicated). Anybody know if there's something similar for Solaris? – user5721 Mar 14 '11 at 21:01

    Rich Homolka , Mar 15, 2011 at 19:56

    netstat -p above is my vote. also look at lsof . – Rich Homolka Mar 15 '11 at 19:56

    Jonathan , Aug 26, 2014 at 18:50

    As an aside, for windows it's similar: netstat -aon | more – Jonathan Aug 26 '14 at 18:50

    sudo , May 25, 2017 at 2:24

    What about for SCTP? – sudo May 25 '17 at 2:24

    frielp , Mar 15, 2011 at 13:33

    On AIX, netstat & rmsock can be used to determine process binding:
    [root@aix] netstat -Ana|grep LISTEN|grep 80
    f100070000280bb0 tcp4       0      0  *.37               *.*        LISTEN
    f1000700025de3b0 tcp        0      0  *.80               *.*        LISTEN
    f1000700002803b0 tcp4       0      0  *.111              *.*        LISTEN
    f1000700021b33b0 tcp4       0      0  127.0.0.1.32780    *.*        LISTEN
    
    # Port 80 maps to f1000700025de3b0 above, so we type:
    [root@aix] rmsock f1000700025de3b0 tcpcb
    The socket 0x25de008 is being held by process 499790 (java).
    

    Olivier Dulac , Sep 18, 2013 at 4:05

    Thanks for this! Is there a way, however, to just display what process listen on the socket (instead of using rmsock which attempt to remove it) ? – Olivier Dulac Sep 18 '13 at 4:05

    Vitor Py , Sep 26, 2013 at 14:18

    @OlivierDulac: "Unlike what its name implies, rmsock does not remove the socket, if it is being used by a process. It just reports the process holding the socket." ( ibm.com/developerworks/community/blogs/cgaix/entry/ ) – Vitor Py Sep 26 '13 at 14:18

    Olivier Dulac , Sep 26, 2013 at 16:00

    @vitor-braga: Ah thx! I thought it was trying but just said which process holds in when it couldn't remove it. Apparently it doesn't even try to remove it when a process holds it. That's cool! Thx! – Olivier Dulac Sep 26 '13 at 16:00

    frielp , Mar 15, 2011 at 13:27

    Another tool available on Linux is ss . From the ss man page on Fedora:
    NAME
           ss - another utility to investigate sockets
    SYNOPSIS
           ss [options] [ FILTER ]
    DESCRIPTION
           ss is used to dump socket statistics. It allows showing information 
           similar to netstat. It can display more TCP and state informations  
           than other tools.
    

    Example output below - the final column shows the process binding:

    [root@box] ss -ap
    State      Recv-Q Send-Q      Local Address:Port          Peer Address:Port
    LISTEN     0      128                    :::http                    :::*        users:(("httpd",20891,4),("httpd",20894,4),("httpd",20895,4),("httpd",20896,4)
    LISTEN     0      128             127.0.0.1:munin                    *:*        users:(("munin-node",1278,5))
    LISTEN     0      128                    :::ssh                     :::*        users:(("sshd",1175,4))
    LISTEN     0      128                     *:ssh                      *:*        users:(("sshd",1175,3))
    LISTEN     0      10              127.0.0.1:smtp                     *:*        users:(("sendmail",1199,4))
    LISTEN     0      128             127.0.0.1:x11-ssh-offset                  *:*        users:(("sshd",25734,8))
    LISTEN     0      128                   ::1:x11-ssh-offset                 :::*        users:(("sshd",25734,7))
    

    Eugen Constantin Dinca , Mar 14, 2011 at 23:47

    For Solaris you can use pfiles and then grep by sockname: or port: .

    A sample (from here ):

    pfiles `ptree | awk '{print $1}'` | egrep '^[0-9]|port:'
    

    rickumali , May 8, 2011 at 14:40

    I was once faced with trying to determine what process was behind a particular port (this time it was 8000). I tried a variety of lsof and netstat, but then took a chance and tried hitting the port via a browser (i.e. http://hostname:8000/ ). Lo and behold, a splash screen greeted me, and it became obvious what the process was (for the record, it was Splunk ).

    One more thought: "ps -e -o pid,args" (YMMV) may sometimes show the port number in the arguments list. Grep is your friend!
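
    A hedged one-liner along those lines, looking for a hypothetical port 8000 in the argument lists:

    ps -eo pid,args | grep 8000 | grep -v grep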

    Gilles , Oct 8, 2015 at 21:04

    In the same vein, you could telnet hostname 8000 and see if the server prints a banner. However, that's mostly useful when the server is running on a machine where you don't have shell access, and then finding the process ID isn't relevant. – Gilles May 8 '11 at 14:45

    [Nov 08, 2018] How to use parallel ssh (PSSH) for executing ssh in parallel on a number of Linux-Unix-BSD servers

    Looks like the -h option is slightly more convenient than the -w option.
    Notable quotes:
    "... Each line in the host file are of the form [user@]host[:port] and can include blank lines and comments lines beginning with "#". ..."
    Nov 08, 2018 | www.cyberciti.biz

    First you need to create a text file, the hosts file, from which pssh reads host names. The syntax is pretty simple.

    Each line in the host file is of the form [user@]host[:port]; the file can include blank lines and comment lines beginning with "#".

    Here is my sample file named ~/.pssh_hosts_files:
    $ cat ~/.pssh_hosts_files
    vivek@dellm6700
    root@192.168.2.30
    root@192.168.2.45
    root@192.168.2.46

    Run the date command all hosts:
    $ pssh -i -h ~/.pssh_hosts_files date
    Sample outputs:

    [1] 18:10:10 [SUCCESS] root@192.168.2.46
    Sun Feb 26 18:10:10 IST 2017
    [2] 18:10:10 [SUCCESS] vivek@dellm6700
    Sun Feb 26 18:10:10 IST 2017
    [3] 18:10:10 [SUCCESS] root@192.168.2.45
    Sun Feb 26 18:10:10 IST 2017
    [4] 18:10:10 [SUCCESS] root@192.168.2.30
    Sun Feb 26 18:10:10 IST 2017
    

    Run the uptime command on each host:
    $ pssh -i -h ~/.pssh_hosts_files uptime
    Sample outputs:

    [1] 18:11:15 [SUCCESS] root@192.168.2.45
     18:11:15 up  2:29,  0 users,  load average: 0.00, 0.00, 0.00
    [2] 18:11:15 [SUCCESS] vivek@dellm6700
     18:11:15 up 19:06,  0 users,  load average: 0.13, 0.25, 0.27
    [3] 18:11:15 [SUCCESS] root@192.168.2.46
     18:11:15 up  1:55,  0 users,  load average: 0.00, 0.00, 0.00
    [4] 18:11:15 [SUCCESS] root@192.168.2.30
     6:11PM  up 1 day, 21:38, 0 users, load averages: 0.12, 0.14, 0.09
    

    You can now automate common sysadmin tasks such as patching all servers:
    $ pssh -h ~/.pssh_hosts_files -- sudo yum -y update
    OR
    $ pssh -h ~/.pssh_hosts_files -- sudo apt-get -y update
    $ pssh -h ~/.pssh_hosts_files -- sudo apt-get -y upgrade

    How do I use pssh to copy file to all servers?

    The syntax is:
    pscp -h ~/.pssh_hosts_files src dest
    To copy $HOME/demo.txt to /tmp/ on all servers, enter:
    $ pscp -h ~/.pssh_hosts_files $HOME/demo.txt /tmp/
    Sample outputs:

    [1] 18:17:35 [SUCCESS] vivek@dellm6700
    [2] 18:17:35 [SUCCESS] root@192.168.2.45
    [3] 18:17:35 [SUCCESS] root@192.168.2.46
    [4] 18:17:35 [SUCCESS] root@192.168.2.30
    

    Or use the prsync command for efficient copying of files:
    $ prsync -h ~/.pssh_hosts_files /etc/passwd /tmp/
    $ prsync -h ~/.pssh_hosts_files *.html /var/www/html/

    How do I kill processes in parallel on a number of hosts?

    Use the pnuke command for killing processes in parallel on a number of hosts. The syntax is:
    $ pnuke -h .pssh_hosts_files process_name
    ### kill nginx and firefox on hosts:
    $ pnuke -h ~/.pssh_hosts_files firefox
    $ pnuke -h ~/.pssh_hosts_files nginx

    See pssh/pscp command man pages for more information.

    [Nov 08, 2018] Parallel command execution with PDSH

    Notable quotes:
    "... (did I mention that Rittman Mead laptops are Macs, so I can do all of this straight from my work machine... :-) ) ..."
    "... open an editor and paste the following lines into it and save the file as /foo/bar ..."
    Nov 08, 2018 | www.rittmanmead.com

    In this series of blog posts I'm taking a look at a few very useful tools that can make your life as the sysadmin of a cluster of Linux machines easier. This may be a Hadoop cluster, or just a plain simple set of 'normal' machines on which you want to run the same commands and monitoring.

    Previously we looked at using SSH keys for intra-machine authorisation , which is a prerequisite for what we'll look at here -- executing the same command across multiple machines using PDSH. In the next post of the series we'll see how we can monitor OS metrics across a cluster with colmux.

    PDSH is a very smart little tool that enables you to issue the same command on multiple hosts at once, and see the output. You need to have set up ssh key authentication from the client to host on all of them, so if you followed the steps in the first section of this article you'll be good to go.

    The syntax for using it is nice and simple:
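
    In general form (a sketch based on the examples that follow; -w names the list of target hosts):

    pdsh -w [user@]host[,host,...] <command>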

    For example run against a small cluster of four machines that I have:

    robin@RNMMBP $ pdsh -w root@rnmcluster02-node0[1-4] date
    
    rnmcluster02-node01: Fri Nov 28 17:26:17 GMT 2014  
    rnmcluster02-node02: Fri Nov 28 17:26:18 GMT 2014  
    rnmcluster02-node03: Fri Nov 28 17:26:18 GMT 2014  
    rnmcluster02-node04: Fri Nov 28 17:26:18 GMT 2014
    

    PDSH can be installed on the Mac under Homebrew (did I mention that Rittman Mead laptops are Macs, so I can do all of this straight from my work machine... :-) )

    brew install pdsh
    

    And if you want to run it on Linux from the EPEL yum repository (RHEL-compatible, but packages for other distros are available):

    yum install pdsh
    

    You can run it from a cluster node, or from your client machine (assuming your client machine is mac/linux).

    Example - install and start collectl on all nodes

    I started looking into pdsh when it came to setting up a cluster of machines from scratch. One of the must-have tools I like to have on any machine that I work with is the excellent collectl . This is an OS resource monitoring tool that I initially learnt of through Kevin Closson and Greg Rahn , and provides the kind of information you'd get from top etc – and then some! It can run interactively, log to disk, run as a service – and it also happens to integrate very nicely with graphite , making it a no-brainer choice for any server.

    So, instead of logging into each box individually I could instead run this:

    pdsh -w root@rnmcluster02-node0[1-4] yum install -y collectl  
    pdsh -w root@rnmcluster02-node0[1-4] service collectl start  
    pdsh -w root@rnmcluster02-node0[1-4] chkconfig collectl on
    

    Yes, I know there are tools out there like puppet and chef that are designed for doing this kind of templated build of multiple servers, but the point I want to illustrate here is that pdsh enables you to do ad-hoc changes to a set of servers at once. Sure, once I have my cluster built and want to create an image/template for future builds, then it would be daft if I were building the whole lot through pdsh-distributed yum commands.

    Example - setting up the date/timezone/NTPD

    Often the accuracy of the clock on each server in a cluster is crucial, and we can easily do this with pdsh:

    Install packages

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] yum install -y ntp ntpdate
    

    Set the timezone:

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] ln -sf /usr/share/zoneinfo/Europe/London /etc/localtime
    

    Force a time refresh:

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] ntpdate pool.ntp.org  
    rnmcluster02-node03: 30 Nov 20:46:22 ntpdate[27610]: step time server 176.58.109.199 offset -2.928585 sec  
    rnmcluster02-node02: 30 Nov 20:46:22 ntpdate[28527]: step time server 176.58.109.199 offset -2.946021 sec  
    rnmcluster02-node04: 30 Nov 20:46:22 ntpdate[27615]: step time server 129.250.35.250 offset -2.915713 sec  
    rnmcluster02-node01: 30 Nov 20:46:25 ntpdate[29316]: 178.79.160.57 rate limit response from server.  
    rnmcluster02-node01: 30 Nov 20:46:22 ntpdate[29316]: step time server 176.58.109.199 offset -2.925016 sec
    

    Set NTPD to start automatically at boot:

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] chkconfig ntpd on
    

    Start NTPD:

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] service ntpd start
    
    Example - using a HEREDOC (here-document) and sending quotation marks in a command with PDSH

    Here documents (heredocs) are a nice way to embed multi-line content in a single command, enabling the scripting of a file creation rather than the clumsy instruction to " open an editor and paste the following lines into it and save the file as /foo/bar ".

    Fortunately heredocs work just fine with pdsh, so long as you remember to enclose the whole command in quotation marks. And speaking of which, if you need to include quotation marks in your actual command, you need to escape them with a backslash. Here's an example of both, setting up the configuration file for my ever-favourite gnu screen on all the nodes of the cluster:

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] "cat > ~/.screenrc <<EOF  
    hardstatus alwayslastline \"%{= RY}%H %{kG}%{G} Screen(s): %{c}%w %=%{kG}%c  %D, %M %d %Y  LD:%l\"  
    startup_message off  
    msgwait 1  
    defscrollback 100000  
    nethack on  
    EOF  
    "
    

    Now when I login to each individual node and run screen, I get a nice toolbar at the bottom:

    Combining commands

    To combine the commands that you send to each host you can use the standard bash semicolon operator ( ; ):

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] "date;sleep 5;date"  
    rnmcluster02-node01: Sun Nov 30 20:57:06 GMT 2014  
    rnmcluster02-node03: Sun Nov 30 20:57:06 GMT 2014  
    rnmcluster02-node04: Sun Nov 30 20:57:06 GMT 2014  
    rnmcluster02-node02: Sun Nov 30 20:57:06 GMT 2014  
    rnmcluster02-node01: Sun Nov 30 20:57:11 GMT 2014  
    rnmcluster02-node03: Sun Nov 30 20:57:11 GMT 2014  
    rnmcluster02-node04: Sun Nov 30 20:57:11 GMT 2014  
    rnmcluster02-node02: Sun Nov 30 20:57:11 GMT 2014
    

    Note the use of the quotation marks to enclose the entire command string. Without them the bash interpreter treats the ; as the delimiter between local commands, and tries to run the subsequent commands locally:

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] date;sleep 5;date  
    rnmcluster02-node03: Sun Nov 30 20:57:53 GMT 2014  
    rnmcluster02-node04: Sun Nov 30 20:57:53 GMT 2014  
    rnmcluster02-node02: Sun Nov 30 20:57:53 GMT 2014  
    rnmcluster02-node01: Sun Nov 30 20:57:53 GMT 2014  
    Sun 30 Nov 2014 20:58:00 GMT
    

    You can also use && and || to run subsequent commands conditionally if the previous one succeeds or fails respectively:

    robin@RNMMBP $ pdsh -w root@rnmcluster02-node[01-4] "chkconfig collectl on && service collectl start"
    
    rnmcluster02-node03: Starting collectl: [  OK  ]  
    rnmcluster02-node02: Starting collectl: [  OK  ]  
    rnmcluster02-node04: Starting collectl: [  OK  ]  
    rnmcluster02-node01: Starting collectl: [  OK  ]
    
    Piping and file redirects

    Similar to combining commands above, you can pipe the output of commands on the remote hosts, and again you need to use quotation marks to enclose the whole command string so that the pipe runs remotely.

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] "chkconfig|grep collectl"  
    rnmcluster02-node03: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off  
    rnmcluster02-node01: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off  
    rnmcluster02-node04: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off  
    rnmcluster02-node02: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off
    

    However, you can pipe the output from pdsh to a local process if you want:

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] chkconfig|grep collectl  
    rnmcluster02-node02: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off  
    rnmcluster02-node04: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off  
    rnmcluster02-node03: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off  
    rnmcluster02-node01: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off
    

    The difference is that you'll be shipping the entire output across the network in order to process it locally, so if you're just grepping etc. this doesn't make much sense. For utilities that are available locally but not on the remote servers, though, it can be useful.
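
    One handy local consumer of pdsh output is dshbak, which ships alongside pdsh and groups results by host; with -c it coalesces hosts that returned identical output. A quick sketch (assuming dshbak is installed on the client):

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] "uname -r" | dshbak -c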

    File redirects work the same way – within quotation marks and the redirect will be to a file on the remote server, outside of them it'll be local:

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] "chkconfig>/tmp/pdsh.out"  
    robin@RNMMBP ~ $ ls -l /tmp/pdsh.out  
    ls: /tmp/pdsh.out: No such file or directory
    
    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] chkconfig>/tmp/pdsh.out  
    robin@RNMMBP ~ $ ls -l /tmp/pdsh.out  
    -rw-r--r--  1 robin  wheel  7608 30 Nov 19:23 /tmp/pdsh.out
    
    Cancelling PDSH operations

    As you can see from above, the precise syntax of pdsh calls can be hugely important. If you run a command and it appears 'stuck', or if you have that heart-stopping realisation that the shutdown -h now you meant to run locally has just gone out across the cluster, you can press Ctrl-C once to see the status of your commands:

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] sleep 30  
    ^Cpdsh@RNMMBP: interrupt (one more within 1 sec to abort)
    pdsh@RNMMBP:  (^Z within 1 sec to cancel pending threads)  
    pdsh@RNMMBP: rnmcluster02-node01: command in progress  
    pdsh@RNMMBP: rnmcluster02-node02: command in progress  
    pdsh@RNMMBP: rnmcluster02-node03: command in progress  
    pdsh@RNMMBP: rnmcluster02-node04: command in progress
    

    and press it twice (or within a second of the first) to cancel:

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] sleep 30  
    ^Cpdsh@RNMMBP: interrupt (one more within 1 sec to abort)
    pdsh@RNMMBP:  (^Z within 1 sec to cancel pending threads)  
    pdsh@RNMMBP: rnmcluster02-node01: command in progress  
    pdsh@RNMMBP: rnmcluster02-node02: command in progress  
    pdsh@RNMMBP: rnmcluster02-node03: command in progress  
    pdsh@RNMMBP: rnmcluster02-node04: command in progress  
    ^Csending SIGTERM to ssh rnmcluster02-node01
    sending signal 15 to rnmcluster02-node01 [ssh] pid 26534  
    sending SIGTERM to ssh rnmcluster02-node02  
    sending signal 15 to rnmcluster02-node02 [ssh] pid 26535  
    sending SIGTERM to ssh rnmcluster02-node03  
    sending signal 15 to rnmcluster02-node03 [ssh] pid 26533  
    sending SIGTERM to ssh rnmcluster02-node04  
    sending signal 15 to rnmcluster02-node04 [ssh] pid 26532  
    pdsh@RNMMBP: interrupt, aborting.
    

    If you've got threads yet to run on the remote hosts, but want to keep running whatever has already started, you can use Ctrl-C, Ctrl-Z:

    robin@RNMMBP ~ $ pdsh -f 2 -w root@rnmcluster02-node[01-4] "sleep 5;date"  
    ^Cpdsh@RNMMBP: interrupt (one more within 1 sec to abort)
    pdsh@RNMMBP:  (^Z within 1 sec to cancel pending threads)  
    pdsh@RNMMBP: rnmcluster02-node01: command in progress  
    pdsh@RNMMBP: rnmcluster02-node02: command in progress  
    ^Zpdsh@RNMMBP: Canceled 2 pending threads.
    rnmcluster02-node01: Mon Dec  1 21:46:35 GMT 2014  
    rnmcluster02-node02: Mon Dec  1 21:46:35 GMT 2014
    

    NB the above example illustrates the use of the -f argument to limit how many threads are run against remote hosts at once. We can see the command is left running on the first two nodes and returns the date, whilst the Ctrl-C - Ctrl-Z stops it from being executed on the remaining nodes.

    PDSH_SSH_ARGS_APPEND

    By default, when you ssh to a new host for the first time you'll be prompted to validate the remote host's SSH key fingerprint.

    The authenticity of host 'rnmcluster02-node02 (172.28.128.9)' can't be established.  
    RSA key fingerprint is 00:c0:75:a8:bc:30:cb:8e:b3:8e:e4:29:42:6a:27:1c.  
    Are you sure you want to continue connecting (yes/no)?
    

    This is one of those prompts that the majority of us just hit enter at and ignore; if that includes you then you will want to make sure that your PDSH call doesn't fall in a heap because you're connecting to a bunch of new servers all at once. PDSH is not an interactive tool, so if it requires input from the hosts it's connecting to it'll just fail. To avoid this SSH prompt, you can set the environment variable PDSH_SSH_ARGS_APPEND as follows:

    export PDSH_SSH_ARGS_APPEND="-q -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
    

    The -q makes SSH less verbose about failures, and the -o flags pass in a couple of options: StrictHostKeyChecking to disable the above check, and UserKnownHostsFile to stop SSH keeping a list of host IPs/hostnames and corresponding SSH fingerprints (by pointing it at /dev/null ). You'll want the latter if you're working with VMs that share a pool of IPs and get re-used, otherwise you get this scary failure:

    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@  
    @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!  
    Someone could be eavesdropping on you right now (man-in-the-middle attack)!  
    It is also possible that a host key has just been changed.  
    The fingerprint for the RSA key sent by the remote host is  
    00:c0:75:a8:bc:30:cb:8e:b3:8e:e4:29:42:6a:27:1c.  
    Please contact your system administrator.
    

    For both of these options, make sure you're aware of the security implications that you're opening yourself up to. For a sandbox environment I just ignore them; for anything where security is of importance make sure you know exactly which server you are connecting to over SSH, and protect yourself from MitM attacks.
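
    If you want something a little less permissive than switching the check off entirely, recent OpenSSH versions (7.6 and later) support accept-new, which silently accepts keys from hosts you have never connected to before but still refuses to connect when a previously known key changes. A sketch of that variant (not from the original post):

    export PDSH_SSH_ARGS_APPEND="-o StrictHostKeyChecking=accept-new"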

    PDSH Reference

    You can find out more about PDSH at https://code.google.com/p/pdsh/wiki/UsingPDSH

    Summary

    When working with multiple Linux machines I would first and foremost make sure SSH keys are set up in order to ease management through password-less logins.

    After SSH keys, I would recommend pdsh for parallel execution of the same SSH command across the cluster. It's a big time saver particularly when initially setting up the cluster given the installation and configuration changes that are inevitably needed.

    In the next article of this series we'll see how the tool colmux is a powerful way to monitor OS metrics across a cluster.

    So now it's your turn – what particular tools or tips do you have for working with a cluster of Linux machines? Leave your answers in the comments below, or tweet them to me at @rmoff .

    [Nov 08, 2018] pexec utility is similar to parallel

    Nov 08, 2018 | www.gnu.org

    Welcome to the web page of the pexec program!

    The main purpose of the program pexec is to execute the given command or shell script (e.g. parsed by /bin/sh ) in parallel on the local host or on remote hosts, while some of the execution parameters, namely the redirected standard input, output or error and environment variables, can be varied. This program is therefore capable of replacing the classic shell loop iterators (e.g. for ~ in ~ done in bash ) by executing the body of the loop in parallel. Thus, the program pexec implements shell-level data parallelism in a fairly simple form. The capabilities of the program are extended with additional features, such as the ability to define mutual exclusions, perform atomic command executions and implement higher-level resource and job control. See the complete manual for more details. See a brief Hungarian description of the program here .

    The actual version of the program package is 1.0rc8 .

    You may browse the package directory here (for FTP access, see this directory ). See the GNU summary page of this project here . The latest version of the program source package is pexec-1.0rc8.tar.gz . Here is another mirror of the package directory.

    Please consider making donations to the author (via PayPal ) in order to help further development of the program or support the GNU project via the FSF .

    [Nov 08, 2018] How to split one string into multiple variables in bash shell? [duplicate]

    Nov 08, 2018 | stackoverflow.com

    Rob I , May 9, 2012 at 19:22

    For your second question, see @mkb's comment to my answer below - that's definitely the way to go! – Rob I May 9 '12 at 19:22

    Dennis Williamson , Jul 4, 2012 at 16:14

    See my edited answer for one way to read individual characters into an array. – Dennis Williamson Jul 4 '12 at 16:14

    Nick Weedon , Dec 31, 2015 at 11:04

    Here is the same thing in a more concise form: var1=$(cut -f1 -d- <<<$STR) – Nick Weedon Dec 31 '15 at 11:04

    Rob I , May 9, 2012 at 17:00

    If your solution doesn't have to be general, i.e. only needs to work for strings like your example, you could do:
    var1=$(echo $STR | cut -f1 -d-)
    var2=$(echo $STR | cut -f2 -d-)
    

    I chose cut here because you could simply extend the code for a few more variables...

    crunchybutternut , May 9, 2012 at 17:40

    Can you look at my post again and see if you have a solution for the followup question? thanks! – crunchybutternut May 9 '12 at 17:40

    mkb , May 9, 2012 at 17:59

    You can use cut to cut characters too! cut -c1 for example. – mkb May 9 '12 at 17:59

    FSp , Nov 27, 2012 at 10:26

    Although this is very simple to read and write, is a very slow solution because forces you to read twice the same data ($STR) ... if you care of your script performace, the @anubhava solution is much better – FSp Nov 27 '12 at 10:26

    tripleee , Jan 25, 2016 at 6:47

    Apart from being an ugly last-resort solution, this has a bug: You should absolutely use double quotes in echo "$STR" unless you specifically want the shell to expand any wildcards in the string as a side effect. See also stackoverflow.com/questions/10067266/ – tripleee Jan 25 '16 at 6:47

    Rob I , Feb 10, 2016 at 13:57

    You're right about double quotes of course, though I did point out this solution wasn't general. However I think your assessment is a bit unfair - for some people this solution may be more readable (and hence extensible etc) than some others, and doesn't completely rely on arcane bash feature that wouldn't translate to other shells. I suspect that's why my solution, though less elegant, continues to get votes periodically... – Rob I Feb 10 '16 at 13:57

    Dennis Williamson , May 10, 2012 at 3:14

    read with IFS are perfect for this:
    $ IFS=- read var1 var2 <<< ABCDE-123456
    $ echo "$var1"
    ABCDE
    $ echo "$var2"
    123456
    

    Edit:

    Here is how you can read each individual character into array elements:

    $ read -a foo <<<"$(echo "ABCDE-123456" | sed 's/./& /g')"
    

    Dump the array:

    $ declare -p foo
    declare -a foo='([0]="A" [1]="B" [2]="C" [3]="D" [4]="E" [5]="-" [6]="1" [7]="2" [8]="3" [9]="4" [10]="5" [11]="6")'
    

    If there are spaces in the string:

    $ IFS=$'\v' read -a foo <<<"$(echo "ABCDE 123456" | sed 's/./&\v/g')"
    $ declare -p foo
    declare -a foo='([0]="A" [1]="B" [2]="C" [3]="D" [4]="E" [5]=" " [6]="1" [7]="2" [8]="3" [9]="4" [10]="5" [11]="6")'
    

    insecure , Apr 30, 2014 at 7:51

    Great, the elegant bash-only way, without unnecessary forks. – insecure Apr 30 '14 at 7:51

    Martin Serrano , Jan 11 at 4:34

    this solution also has the benefit that if delimiter is not present, the var2 will be empty – Martin Serrano Jan 11 at 4:34

    mkb , May 9, 2012 at 17:02

    If you know it's going to be just two fields, you can skip the extra subprocesses like this:
    var1=${STR%-*}
    var2=${STR#*-}
    

    What does this do? ${STR%-*} deletes the shortest substring of $STR that matches the pattern -* starting from the end of the string. ${STR#*-} does the same, but with the *- pattern and starting from the beginning of the string. They each have counterparts %% and ## which find the longest anchored pattern match. If anyone has a helpful mnemonic to remember which does which, let me know! I always have to try both to remember.
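
    An added illustration of the shortest/longest match difference (not part of the original answer), using a string with more than one delimiter:

    STR="a-b-c"
    echo "${STR%-*}"    # a-b  (shortest suffix match removed)
    echo "${STR%%-*}"   # a    (longest suffix match removed)
    echo "${STR#*-}"    # b-c  (shortest prefix match removed)
    echo "${STR##*-}"   # c    (longest prefix match removed)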

    Jens , Jan 30, 2015 at 15:17

    Plus 1 For knowing your POSIX shell features, avoiding expensive forks and pipes, and the absence of bashisms. – Jens Jan 30 '15 at 15:17

    Steven Lu , May 1, 2015 at 20:19

    Dunno about "absence of bashisms" considering that this is already moderately cryptic .... if your delimiter is a newline instead of a hyphen, then it becomes even more cryptic. On the other hand, it works with newlines , so there's that. – Steven Lu May 1 '15 at 20:19

    mkb , Mar 9, 2016 at 17:30

    @KErlandsson: done – mkb Mar 9 '16 at 17:30

    mombip , Aug 9, 2016 at 15:58

    I've finally found documentation for it: Shell-Parameter-Expansionmombip Aug 9 '16 at 15:58

    DS. , Jan 13, 2017 at 19:56

    Mnemonic: "#" is to the left of "%" on a standard keyboard, so "#" removes a prefix (on the left), and "%" removes a suffix (on the right). – DS. Jan 13 '17 at 19:56

    tripleee , May 9, 2012 at 17:57

    Sounds like a job for set with a custom IFS .
    IFS=-
    set $STR
    var1=$1
    var2=$2
    

    (You will want to do this in a function with a local IFS so you don't mess up other parts of your script where you require IFS to be what you expect.)

    Rob I , May 9, 2012 at 19:20

    Nice - I knew about $IFS but hadn't seen how it could be used. – Rob I May 9 '12 at 19:20

    Sigg3.net , Jun 19, 2013 at 8:08

    I used triplee's example and it worked exactly as advertised! Just change last two lines to myvar1=`echo $1` && myvar2=`echo $2` if you need to store them throughout a script with several "thrown" variables. – Sigg3.net Jun 19 '13 at 8:08

    tripleee , Jun 19, 2013 at 13:25

    No, don't use a useless echo in backticks . – tripleee Jun 19 '13 at 13:25

    Daniel Andersson , Mar 27, 2015 at 6:46

    This is a really sweet solution if we need to write something that is not Bash specific. To handle IFS troubles, one can add OLDIFS=$IFS at the beginning before overwriting it, and then add IFS=$OLDIFS just after the set line. – Daniel Andersson Mar 27 '15 at 6:46

    tripleee , Mar 27, 2015 at 6:58

    FWIW the link above is broken. I was lazy and careless. The canonical location still works; iki.fi/era/unix/award.html#echotripleee Mar 27 '15 at 6:58

    anubhava , May 9, 2012 at 17:09

    Using bash regex capabilities:
    re="^([^-]+)-(.*)$"
    [[ "ABCDE-123456" =~ $re ]] && var1="${BASH_REMATCH[1]}" && var2="${BASH_REMATCH[2]}"
    echo $var1
    echo $var2
    

    OUTPUT

    ABCDE
    123456
    

    Cometsong , Oct 21, 2016 at 13:29

    Love pre-defining the re for later use(s)! – Cometsong Oct 21 '16 at 13:29

    Archibald , Nov 12, 2012 at 11:03

    string="ABCDE-123456"
    IFS=- # use "local IFS=-" inside the function
    set $string
    echo $1 # >>> ABCDE
    echo $2 # >>> 123456
    

    tripleee , Mar 27, 2015 at 7:02

    Hmmm, isn't this just a restatement of my answer ? – tripleee Mar 27 '15 at 7:02

    Archibald , Sep 18, 2015 at 12:36

    Actually yes. I just clarified it a bit. – Archibald Sep 18 '15 at 12:36

    [Nov 08, 2018] How to split a string in shell and get the last field

    Nov 08, 2018 | stackoverflow.com

    cd1 , Jul 1, 2010 at 23:29

    Suppose I have the string 1:2:3:4:5 and I want to get its last field ( 5 in this case). How do I do that using Bash? I tried cut , but I don't know how to specify the last field with -f .

    Stephen , Jul 2, 2010 at 0:05

    You can use string operators :
    $ foo=1:2:3:4:5
    $ echo ${foo##*:}
    5
    

    This trims everything from the front until a ':', greedily.

    ${foo  <-- from variable foo
      ##   <-- greedy front trim
      *    <-- matches anything
      :    <-- until the last ':'
     }
    

    eckes , Jan 23, 2013 at 15:23

    While this is working for the given problem, the answer of William below ( stackoverflow.com/a/3163857/520162 ) also returns 5 if the string is 1:2:3:4:5: (while using the string operators yields an empty result). This is especially handy when parsing paths that could contain (or not) a finishing / character. – eckes Jan 23 '13 at 15:23

    Dobz , Jun 25, 2014 at 11:44

    How would you then do the opposite of this? to echo out '1:2:3:4:'? – Dobz Jun 25 '14 at 11:44

    Mihai Danila , Jul 9, 2014 at 14:07

    And how does one keep the part before the last separator? Apparently by using ${foo%:*} . # - from beginning; % - from end. # , % - shortest match; ## , %% - longest match. – Mihai Danila Jul 9 '14 at 14:07

    Putnik , Feb 11, 2016 at 22:33

    If i want to get the last element from path, how should I use it? echo ${pwd##*/} does not work. – Putnik Feb 11 '16 at 22:33

    Stan Strum , Dec 17, 2017 at 4:22

    @Putnik that command sees pwd as a variable. Try dir=$(pwd); echo ${dir##*/} . Works for me! – Stan Strum Dec 17 '17 at 4:22

    a3nm , Feb 3, 2012 at 8:39

    Another way is to reverse before and after cut :
    $ echo ab:cd:ef | rev | cut -d: -f1 | rev
    ef
    

    This makes it very easy to get the last but one field, or any range of fields numbered from the end.
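
    For instance, the last-but-one field of the same string (an added example):

    $ echo 1:2:3:4:5 | rev | cut -d: -f2 | rev
    4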

    Dannid , Jan 14, 2013 at 20:50

    This answer is nice because it uses 'cut', which the author is (presumably) already familiar. Plus, I like this answer because I am using 'cut' and had this exact question, hence finding this thread via search. – Dannid Jan 14 '13 at 20:50

    funroll , Aug 12, 2013 at 19:51

    Some cut-and-paste fodder for people using spaces as delimiters: echo "1 2 3 4" | rev | cut -d " " -f1 | revfunroll Aug 12 '13 at 19:51

    EdgeCaseBerg , Sep 8, 2013 at 5:01

    the rev | cut -d -f1 | rev is so clever! Thanks! Helped me a bunch (my use case was rev | cut -d ' ' -f 2- | rev ) – EdgeCaseBerg Sep 8 '13 at 5:01

    Anarcho-Chossid , Sep 16, 2015 at 15:54

    Wow. Beautiful and dark magic. – Anarcho-Chossid Sep 16 '15 at 15:54

    shearn89 , Aug 17, 2017 at 9:27

    I always forget about rev , was just what I needed! cut -b20- | rev | cut -b10- | revshearn89 Aug 17 '17 at 9:27

    William Pursell , Jul 2, 2010 at 7:09

    It's difficult to get the last field using cut, but here's (one set of) solutions in awk and perl
    $ echo 1:2:3:4:5 | awk -F: '{print $NF}'
    5
    $ echo 1:2:3:4:5 | perl -F: -wane 'print $F[-1]'
    5
    

    eckes , Jan 23, 2013 at 15:20

    great advantage of this solution over the accepted answer: it also matches paths that contain or do not contain a finishing / character: /a/b/c/d and /a/b/c/d/ yield the same result ( d ) when processing pwd | awk -F/ '{print $NF}' . The accepted answer results in an empty result in the case of /a/b/c/d/eckes Jan 23 '13 at 15:20

    stamster , May 21 at 11:52

    @eckes In case of AWK solution, on GNU bash, version 4.3.48(1)-release that's not true, as it matters whenever you have trailing slash or not. Simply put AWK will use / as delimiter, and if your path is /my/path/dir/ it will use value after last delimiter, which is simply an empty string. So it's best to avoid trailing slash if you need to do such a thing like I do. – stamster May 21 at 11:52

    Nicholas M T Elliott , Jul 1, 2010 at 23:39

    Assuming fairly simple usage (no escaping of the delimiter, for example), you can use grep:
    $ echo "1:2:3:4:5" | grep -oE "[^:]+$"
    5
    

    Breakdown - find all the characters not the delimiter ([^:]) at the end of the line ($). -o only prints the matching part.

    Dennis Williamson , Jul 2, 2010 at 0:05

    One way:
    var1="1:2:3:4:5"
    var2=${var1##*:}
    

    Another, using an array:

    var1="1:2:3:4:5"
    saveIFS=$IFS
    IFS=":"
    var2=($var1)
    IFS=$saveIFS
    var2=${var2[@]: -1}
    

    Yet another with an array:

    var1="1:2:3:4:5"
    saveIFS=$IFS
    IFS=":"
    var2=($var1)
    IFS=$saveIFS
    count=${#var2[@]}
    var2=${var2[$count-1]}
    

    Using Bash (version >= 3.2) regular expressions:

    var1="1:2:3:4:5"
    [[ $var1 =~ :([^:]*)$ ]]
    var2=${BASH_REMATCH[1]}
    

    liuyang1 , Mar 24, 2015 at 6:02

    Thanks so much for array style, as I need this feature, but not have cut, awk these utils. – liuyang1 Mar 24 '15 at 6:02

    user3133260 , Dec 24, 2013 at 19:04

    $ echo "a b c d e" | tr ' ' '\n' | tail -1
    e
    

    Simply translate the delimiter into a newline and choose the last entry with tail -1 .

    Yajo , Jul 30, 2014 at 10:13

    It will fail if the last item contains a \n , but for most cases is the most readable solution. – Yajo Jul 30 '14 at 10:13

    Rafael , Nov 10, 2016 at 10:09

    Using sed :
    $ echo '1:2:3:4:5' | sed 's/.*://' # => 5
    
    $ echo '' | sed 's/.*://' # => (empty)
    
    $ echo ':' | sed 's/.*://' # => (empty)
    $ echo ':b' | sed 's/.*://' # => b
    $ echo '::c' | sed 's/.*://' # => c
    
    $ echo 'a' | sed 's/.*://' # => a
    $ echo 'a:' | sed 's/.*://' # => (empty)
    $ echo 'a:b' | sed 's/.*://' # => b
    $ echo 'a::c' | sed 's/.*://' # => c
    

    Ab Irato , Nov 13, 2013 at 16:10

    If your last field is a single character, you could do this:
    a="1:2:3:4:5"
    
    echo ${a: -1}
    echo ${a:(-1)}
    

    Check string manipulation in bash .

    gniourf_gniourf , Nov 13, 2013 at 16:15

    This doesn't work: it gives the last character of a , not the last field . – gniourf_gniourf Nov 13 '13 at 16:15

    Ab Irato , Nov 25, 2013 at 13:25

    True, that's the idea, if you know the length of the last field it's good. If not you have to use something else... – Ab Irato Nov 25 '13 at 13:25

    sphakka , Jan 25, 2016 at 16:24

    Interesting, I didn't know of these particular Bash string manipulations. It also resembles to Python's string/array slicing . – sphakka Jan 25 '16 at 16:24

    ghostdog74 , Jul 2, 2010 at 1:16

    Using Bash.
    $ var1="1:2:3:4:0"
    $ IFS=":"
    $ set -- $var1
    $ eval echo  \$${#}
    0
    

    Sopalajo de Arrierez , Dec 24, 2014 at 5:04

    I would buy some details about this method, please :-) . – Sopalajo de Arrierez Dec 24 '14 at 5:04

    Rafa , Apr 27, 2017 at 22:10

    Could have used echo ${!#} instead of eval echo \$${#} . – Rafa Apr 27 '17 at 22:10

    Crytis , Dec 7, 2016 at 6:51

    echo "a:b:c:d:e"|xargs -d : -n1|tail -1
    

    First use xargs to split it using ":"; -n1 means every line only has one part. Then print the last part.

    BDL , Dec 7, 2016 at 13:47

    Although this might solve the problem, one should always add an explanation to it. – BDL Dec 7 '16 at 13:47

    Crytis , Jun 7, 2017 at 9:13

    already added.. – Crytis Jun 7 '17 at 9:13

    021 , Apr 26, 2016 at 11:33

    There are many good answers here, but still I want to share this one using basename :
     basename $(echo "a:b:c:d:e" | tr ':' '/')
    

    However it will fail if there are already some '/' in your string . If slash / is your delimiter then you just have to (and should) use basename.

    It's not the best answer but it just shows how you can be creative using bash commands.

    Nahid Akbar , Jun 22, 2012 at 2:55

    for x in `echo $str | tr ";" "\n"`; do echo $x; done
    

    chepner , Jun 22, 2012 at 12:58

    This runs into problems if there is whitespace in any of the fields. Also, it does not directly address the question of retrieving the last field. – chepner Jun 22 '12 at 12:58

    Christoph Böddeker , Feb 19 at 15:50

    For those that comfortable with Python, https://github.com/Russell91/pythonpy is a nice choice to solve this problem.
    $ echo "a:b:c:d:e" | py -x 'x.split(":")[-1]'
    

    From the pythonpy help: -x treat each row of stdin as x .

    With that tool, it is easy to write python code that gets applied to the input.

    baz , Nov 24, 2017 at 19:27

    a solution using the read builtin
    IFS=':' read -a field <<< "1:2:3:4:5"
    echo ${field[4]}
    

    [Nov 08, 2018] How do I split a string on a delimiter in Bash?

    Notable quotes:
    "... Bash shell script split array ..."
    "... associative array ..."
    "... pattern substitution ..."
    "... Debian GNU/Linux ..."
    Nov 08, 2018 | stackoverflow.com

    stefanB , May 28, 2009 at 2:03

    I have this string stored in a variable:
    IN="bla@some.com;john@home.com"
    

    Now I would like to split the strings by ; delimiter so that I have:

    ADDR1="bla@some.com"
    ADDR2="john@home.com"
    

    I don't necessarily need the ADDR1 and ADDR2 variables. If they are elements of an array that's even better.


    After suggestions from the answers below, I ended up with the following which is what I was after:

    #!/usr/bin/env bash
    
    IN="bla@some.com;john@home.com"
    
    mails=$(echo $IN | tr ";" "\n")
    
    for addr in $mails
    do
        echo "> [$addr]"
    done
    

    Output:

    > [bla@some.com]
    > [john@home.com]
    

    There was a solution involving setting Internal_field_separator (IFS) to ; . I am not sure what happened with that answer, how do you reset IFS back to default?

    RE: IFS solution, I tried this and it works, I keep the old IFS and then restore it:

    IN="bla@some.com;john@home.com"
    
    OIFS=$IFS
    IFS=';'
    mails2=$IN
    for x in $mails2
    do
        echo "> [$x]"
    done
    
    IFS=$OIFS
    

    BTW, when I tried

    mails2=($IN)
    

    I only got the first string when printing it in loop, without brackets around $IN it works.

    Brooks Moses , May 1, 2012 at 1:26

    With regards to your "Edit2": You can simply "unset IFS" and it will return to the default state. There's no need to save and restore it explicitly unless you have some reason to expect that it's already been set to a non-default value. Moreover, if you're doing this inside a function (and, if you aren't, why not?), you can set IFS as a local variable and it will return to its previous value once you exit the function. – Brooks Moses May 1 '12 at 1:26
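
    A minimal sketch of the function-local IFS approach described in this comment (added illustration; the function name is arbitrary):

    split_addresses() {
        local IFS=';'        # the IFS change is confined to this function
        local -a parts
        read -ra parts <<< "$1"
        printf '%s\n' "${parts[@]}"
    }

    split_addresses "bla@some.com;john@home.com"
    # bla@some.com
    # john@home.com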

    dubiousjim , May 31, 2012 at 5:21

    @BrooksMoses: (a) +1 for using local IFS=... where possible; (b) -1 for unset IFS , this doesn't exactly reset IFS to its default value, though I believe an unset IFS behaves the same as the default value of IFS ($' \t\n'), however it seems bad practice to be assuming blindly that your code will never be invoked with IFS set to a custom value; (c) another idea is to invoke a subshell: (IFS=$custom; ...) when the subshell exits IFS will return to whatever it was originally. – dubiousjim May 31 '12 at 5:21

    nicooga , Mar 7, 2016 at 15:32

    I just want to have a quick look at the paths to decide where to throw an executable, so I resorted to run ruby -e "puts ENV.fetch('PATH').split(':')" . If you want to stay pure bash won't help but using any scripting language that has a built-in split is easier. – nicooga Mar 7 '16 at 15:32

    Jeff , Apr 22 at 17:51

    This is kind of a drive-by comment, but since the OP used email addresses as the example, has anyone bothered to answer it in a way that is fully RFC 5322 compliant, namely that any quoted string can appear before the @ which means you're going to need regular expressions or some other kind of parser instead of naive use of IFS or other simplistic splitter functions. – Jeff Apr 22 at 17:51

    user2037659 , Apr 26 at 20:15

    for x in $(IFS=';';echo $IN); do echo "> [$x]"; doneuser2037659 Apr 26 at 20:15

    Johannes Schaub - litb , May 28, 2009 at 2:23

    You can set the internal field separator (IFS) variable, and then let it parse into an array. When this happens in a command, then the assignment to IFS only takes place to that single command's environment (to read ). It then parses the input according to the IFS variable value into an array, which we can then iterate over.
    IFS=';' read -ra ADDR <<< "$IN"
    for i in "${ADDR[@]}"; do
        # process "$i"
    done
    

    It will parse one line of items separated by ; , pushing it into an array. Stuff for processing whole of $IN , each time one line of input separated by ; :

     while IFS=';' read -ra ADDR; do
          for i in "${ADDR[@]}"; do
              # process "$i"
          done
     done <<< "$IN"
    

    Chris Lutz , May 28, 2009 at 2:25

    This is probably the best way. How long will IFS persist in it's current value, can it mess up my code by being set when it shouldn't be, and how can I reset it when I'm done with it? – Chris Lutz May 28 '09 at 2:25

    Johannes Schaub - litb , May 28, 2009 at 3:04

    now after the fix applied, only within the duration of the read command :) – Johannes Schaub - litb May 28 '09 at 3:04

    lhunath , May 28, 2009 at 6:14

    You can read everything at once without using a while loop: read -r -d '' -a addr <<< "$in" # The -d '' is key here, it tells read not to stop at the first newline (which is the default -d) but to continue until EOF or a NULL byte (which only occur in binary data). – lhunath May 28 '09 at 6:14

    Charles Duffy , Jul 6, 2013 at 14:39

    @LucaBorrione Setting IFS on the same line as the read with no semicolon or other separator, as opposed to in a separate command, scopes it to that command -- so it's always "restored"; you don't need to do anything manually. – Charles Duffy Jul 6 '13 at 14:39

    chepner , Oct 2, 2014 at 3:50

    @imagineerThis There is a bug involving herestrings and local changes to IFS that requires $IN to be quoted. The bug is fixed in bash 4.3. – chepner Oct 2 '14 at 3:50

    palindrom , Mar 10, 2011 at 9:00

    Taken from Bash shell script split array :
    IN="bla@some.com;john@home.com"
    arrIN=(${IN//;/ })
    

    Explanation:

    This construction replaces all occurrences of ';' (the initial // means global replace) in the string IN with ' ' (a single space), then interprets the space-delimited string as an array (that's what the surrounding parentheses do).

    The syntax used inside of the curly braces to replace each ';' character with a ' ' character is called Parameter Expansion .

    There are some common gotchas:

    1. If the original string has spaces, you will need to use IFS :
      • IFS=':'; arrIN=($IN); unset IFS;
    2. If the original string has spaces and the delimiter is a new line, you can set IFS with:
      • IFS=$'\n'; arrIN=($IN); unset IFS;

    Oz123 , Mar 21, 2011 at 18:50

    I just want to add: this is the simplest of all, you can access array elements with ${arrIN[1]} (starting from zeros of course) – Oz123 Mar 21 '11 at 18:50

    KomodoDave , Jan 5, 2012 at 15:13

    Found it: the technique of modifying a variable within a ${} is known as 'parameter expansion'. – KomodoDave Jan 5 '12 at 15:13

    qbolec , Feb 25, 2013 at 9:12

    Does it work when the original string contains spaces? – qbolec Feb 25 '13 at 9:12

    Ethan , Apr 12, 2013 at 22:47

    No, I don't think this works when there are also spaces present... it's converting the ',' to ' ' and then building a space-separated array. – Ethan Apr 12 '13 at 22:47

    Charles Duffy , Jul 6, 2013 at 14:39

    This is a bad approach for other reasons: For instance, if your string contains ;*; , then the * will be expanded to a list of filenames in the current directory. -1 – Charles Duffy Jul 6 '13 at 14:39

    Chris Lutz , May 28, 2009 at 2:09

    If you don't mind processing them immediately, I like to do this:
    for i in $(echo $IN | tr ";" "\n")
    do
      # process
    done
    

    You could use this kind of loop to initialize an array, but there's probably an easier way to do it. Hope this helps, though.
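
    One way to fill an array from that same loop (an added sketch; note the word-splitting caveats raised in the comments below still apply):

    IN="bla@some.com;john@home.com"
    arr=()
    for i in $(echo $IN | tr ";" "\n")
    do
      arr+=("$i")
    done
    echo "${arr[0]}"   # bla@some.com
    echo "${arr[1]}"   # john@home.com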

    Chris Lutz , May 28, 2009 at 2:42

    You should have kept the IFS answer. It taught me something I didn't know, and it definitely made an array, whereas this just makes a cheap substitute. – Chris Lutz May 28 '09 at 2:42

    Johannes Schaub - litb , May 28, 2009 at 2:59

    I see. Yeah i find doing these silly experiments, i'm going to learn new things each time i'm trying to answer things. I've edited stuff based on #bash IRC feedback and undeleted :) – Johannes Schaub - litb May 28 '09 at 2:59

    lhunath , May 28, 2009 at 6:12

    -1, you're obviously not aware of wordsplitting, because it's introducing two bugs in your code. one is when you don't quote $IN and the other is when you pretend a newline is the only delimiter used in wordsplitting. You are iterating over every WORD in IN, not every line, and DEFINATELY not every element delimited by a semicolon, though it may appear to have the side-effect of looking like it works. – lhunath May 28 '09 at 6:12

    Johannes Schaub - litb , May 28, 2009 at 17:00

    You could change it to echo "$IN" | tr ';' '\n' | while read -r ADDY; do # process "$ADDY"; done to make him lucky, i think :) Note that this will fork, and you can't change outer variables from within the loop (that's why i used the <<< "$IN" syntax) then – Johannes Schaub - litb May 28 '09 at 17:00

    mklement0 , Apr 24, 2013 at 14:13

    To summarize the debate in the comments: Caveats for general use : the shell applies word splitting and expansions to the string, which may be undesired; just try it with. IN="bla@some.com;john@home.com;*;broken apart" . In short: this approach will break, if your tokens contain embedded spaces and/or chars. such as * that happen to make a token match filenames in the current folder. – mklement0 Apr 24 '13 at 14:13

    F. Hauri , Apr 13, 2013 at 14:20

    Compatible answer

    For this SO question, there are already a lot of different ways to do this in bash . But bash has many special features, so-called bashisms , that work well but won't work in any other shell .

    In particular, arrays , associative array , and pattern substitution are pure bashisms and may not work under other shells .

    On my Debian GNU/Linux , there is a standard shell called dash , but I know many people who like to use ksh .

    Finally, in very small environments, there is a special tool called busybox with its own shell interpreter ( ash ).

    Requested string

    The sample string in the SO question is:

    IN="bla@some.com;john@home.com"
    

    As this could be useful with whitespace, and as whitespace can modify the result of the routine, I prefer to use this sample string:

     IN="bla@some.com;john@home.com;Full Name <fulnam@other.org>"
    
    Split string based on delimiter in bash (version >=4.2)

    Under pure bash, we may use arrays and IFS :

    var="bla@some.com;john@home.com;Full Name <fulnam@other.org>"
    
    oIFS="$IFS"
    IFS=";"
    declare -a fields=($var)
    IFS="$oIFS"
    unset oIFS
    
    
    IFS=\; read -a fields <<<"$var"
    

    Using this syntax under recent bash doesn't change $IFS for the current session, but only for the current command:

    set | grep ^IFS=
    IFS=$' \t\n'
    

    Now the string var is split and stored into an array (named fields ):

    set | grep ^fields=\\\|^var=
    fields=([0]="bla@some.com" [1]="john@home.com" [2]="Full Name <fulnam@other.org>")
    var='bla@some.com;john@home.com;Full Name <fulnam@other.org>'
    

    We could request for variable content with declare -p :

    declare -p var fields
    declare -- var="bla@some.com;john@home.com;Full Name <fulnam@other.org>"
    declare -a fields=([0]="bla@some.com" [1]="john@home.com" [2]="Full Name <fulnam@other.org>")
    

    read is the quickest way to do the split, because there are no forks and no external resources are called.

    From there, you could use the syntax you already know for processing each field:

    for x in "${fields[@]}";do
        echo "> [$x]"
        done
    > [bla@some.com]
    > [john@home.com]
    > [Full Name <fulnam@other.org>]
    

    or drop each field after processing (I like this shifting approach):

    while [ "$fields" ] ;do
        echo "> [$fields]"
        fields=("${fields[@]:1}")
        done
    > [bla@some.com]
    > [john@home.com]
    > [Full Name <fulnam@other.org>]
    

    or even for simple printout (shorter syntax):

    printf "> [%s]\n" "${fields[@]}"
    > [bla@some.com]
    > [john@home.com]
    > [Full Name <fulnam@other.org>]
    
    Split string based on delimiter in shell

    But if you want to write something usable under many shells, you have to avoid bashisms .

    There is a syntax, used in many shells, for splitting a string across first or last occurrence of a substring:

    ${var#*SubStr}  # will drop begin of string up to first occur of `SubStr`
    ${var##*SubStr} # will drop begin of string up to last occur of `SubStr`
    ${var%SubStr*}  # will drop part of string from last occur of `SubStr` to the end
    ${var%%SubStr*} # will drop part of string from first occur of `SubStr` to the end
    

    (The absence of this is the main reason for publishing my answer ;)

    As pointed out by Score_Under :

    # and % delete the shortest possible matching string, and

    ## and %% delete the longest possible.

    This little sample script works well under bash , dash , ksh , busybox and was tested under Mac OS's bash too:

    var="bla@some.com;john@home.com;Full Name <fulnam@other.org>"
    while [ "$var" ] ;do
        iter=${var%%;*}
        echo "> [$iter]"
        [ "$var" = "$iter" ] && \
            var='' || \
            var="${var#*;}"
      done
    > [bla@some.com]
    > [john@home.com]
    > [Full Name <fulnam@other.org>]
    

    Have fun!

    Score_Under , Apr 28, 2015 at 16:58

    The # , ## , % , and %% substitutions have what is IMO an easier explanation to remember (for how much they delete): # and % delete the shortest possible matching string, and ## and %% delete the longest possible. – Score_Under Apr 28 '15 at 16:58

    sorontar , Oct 26, 2016 at 4:36

    The IFS=\; read -a fields <<<"$var" fails on newlines and adds a trailing newline. The other solution removes a trailing empty field. – sorontar Oct 26 '16 at 4:36

    Eric Chen , Aug 30, 2017 at 17:50

    The shell delimiter is the most elegant answer, period. – Eric Chen Aug 30 '17 at 17:50

    sancho.s , Oct 4 at 3:42

    Could the last alternative be used with a list of field separators set somewhere else? For instance, I mean to use this as a shell script, and pass a list of field separators as a positional parameter. – sancho.s Oct 4 at 3:42

    F. Hauri , Oct 4 at 7:47

    Yes, in a loop: for sep in "#" "ł" "@" ; do ... var="${var#*$sep}" ...F. Hauri Oct 4 at 7:47

    DougW , Apr 27, 2015 at 18:20

    I've seen a couple of answers referencing the cut command, but they've all been deleted. It's a little odd that nobody has elaborated on that, because I think it's one of the more useful commands for doing this type of thing, especially for parsing delimited log files.

    In the case of splitting this specific example into a bash script array, tr is probably more efficient, but cut can be used, and is more effective if you want to pull specific fields from the middle.

    Example:

    $ echo "bla@some.com;john@home.com" | cut -d ";" -f 1
    bla@some.com
    $ echo "bla@some.com;john@home.com" | cut -d ";" -f 2
    john@home.com
    

    You can obviously put that into a loop, and iterate the -f parameter to pull each field independently.

    This gets more useful when you have a delimited log file with rows like this:

    2015-04-27|12345|some action|an attribute|meta data
    

    cut is very handy to be able to cat this file and select a particular field for further processing.
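
    For instance, pulling the third field out of that log format (an added example):

    $ echo "2015-04-27|12345|some action|an attribute|meta data" | cut -d '|' -f 3
    some action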

    MisterMiyagi , Nov 2, 2016 at 8:42

    Kudos for using cut , it's the right tool for the job! Much cleared than any of those shell hacks. – MisterMiyagi Nov 2 '16 at 8:42

    uli42 , Sep 14, 2017 at 8:30

    This approach will only work if you know the number of elements in advance; you'd need to program some more logic around it. It also runs an external tool for every element. – uli42 Sep 14 '17 at 8:30

    Louis Loudog Trottier , May 10 at 4:20

    Exactly what I was looking for, trying to avoid empty strings in a CSV. Now I can point at the exact 'column' value as well. Works with IFS already used in a loop. Better than expected for my situation. – Louis Loudog Trottier May 10 at 4:20

    , May 28, 2009 at 10:31

    How about this approach:
    IN="bla@some.com;john@home.com" 
    set -- "$IN" 
    IFS=";"; declare -a Array=($*) 
    echo "${Array[@]}" 
    echo "${Array[0]}" 
    echo "${Array[1]}"
    

    Source

    Yzmir Ramirez , Sep 5, 2011 at 1:06

    +1 ... but I wouldn't name the variable "Array" ... pet peev I guess. Good solution. – Yzmir Ramirez Sep 5 '11 at 1:06

    ata , Nov 3, 2011 at 22:33

    +1 ... but the "set" and declare -a are unnecessary. You could as well have used just IFS";" && Array=($IN)ata Nov 3 '11 at 22:33

    Luca Borrione , Sep 3, 2012 at 9:26

    +1 Only a side note: shouldn't it be recommendable to keep the old IFS and then restore it? (as shown by stefanB in his edit3) people landing here (sometimes just copying and pasting a solution) might not think about this – Luca Borrione Sep 3 '12 at 9:26

    Charles Duffy , Jul 6, 2013 at 14:44

    -1: First, @ata is right that most of the commands in this do nothing. Second, it uses word-splitting to form the array, and doesn't do anything to inhibit glob-expansion when doing so (so if you have glob characters in any of the array elements, those elements are replaced with matching filenames). – Charles Duffy Jul 6 '13 at 14:44

    John_West , Jan 8, 2016 at 12:29

    Suggest to use $'...' : IN=$'bla@some.com;john@home.com;bet <d@\ns* kl.com>' . Then echo "${Array[2]}" will print a string with newline. set -- "$IN" is also neccessary in this case. Yes, to prevent glob expansion, the solution should include set -f . – John_West Jan 8 '16 at 12:29

    Steven Lizarazo , Aug 11, 2016 at 20:45

    This worked for me:
    string="1;2"
    echo $string | cut -d';' -f1 # output is 1
    echo $string | cut -d';' -f2 # output is 2
    

    Pardeep Sharma , Oct 10, 2017 at 7:29

    this is short and sweet :) – Pardeep Sharma Oct 10 '17 at 7:29

    space earth , Oct 17, 2017 at 7:23

    Thanks...Helped a lot – space earth Oct 17 '17 at 7:23

    mojjj , Jan 8 at 8:57

    cut works only with a single char as delimiter. – mojjj Jan 8 at 8:57

    lothar , May 28, 2009 at 2:12

    echo "bla@some.com;john@home.com" | sed -e 's/;/\n/g'
    bla@some.com
    john@home.com
    

    Luca Borrione , Sep 3, 2012 at 10:08

    -1 what if the string contains spaces? for example IN="this is first line; this is second line" arrIN=( $( echo "$IN" | sed -e 's/;/\n/g' ) ) will produce an array of 8 elements in this case (an element for each word space separated), rather than 2 (an element for each line semi colon separated) – Luca Borrione Sep 3 '12 at 10:08

    lothar , Sep 3, 2012 at 17:33

    @Luca No the sed script creates exactly two lines. What creates the multiple entries for you is when you put it into a bash array (which splits on white space by default) – lothar Sep 3 '12 at 17:33

    Luca Borrione , Sep 4, 2012 at 7:09

    That's exactly the point: the OP needs to store entries into an array to loop over it, as you can see in his edits. I think your (good) answer missed to mention to use arrIN=( $( echo "$IN" | sed -e 's/;/\n/g' ) ) to achieve that, and to advice to change IFS to IFS=$'\n' for those who land here in the future and needs to split a string containing spaces. (and to restore it back afterwards). :) – Luca Borrione Sep 4 '12 at 7:09

    lothar , Sep 4, 2012 at 16:55

    @Luca Good point. However the array assignment was not in the initial question when I wrote up that answer. – lothar Sep 4 '12 at 16:55

    Ashok , Sep 8, 2012 at 5:01

    This also works:
    IN="bla@some.com;john@home.com"
    echo ADD1=`echo $IN | cut -d \; -f 1`
    echo ADD2=`echo $IN | cut -d \; -f 2`
    

    Be careful, this solution is not always correct. In case you pass "bla@some.com" only, it will assign it to both ADD1 and ADD2.

    fersarr , Mar 3, 2016 at 17:17

    You can use -s to avoid the mentioned problem: superuser.com/questions/896800/ "-f, --fields=LIST select only these fields; also print any line that contains no delimiter character, unless the -s option is specified" – fersarr Mar 3 '16 at 17:17

    Tony , Jan 14, 2013 at 6:33

    I think AWK is the best and most efficient command to resolve your problem. AWK is included by default in almost every Linux distribution.
    echo "bla@some.com;john@home.com" | awk -F';' '{print $1,$2}'
    

    will give

    bla@some.com john@home.com
    

    Of course you can store each email address by redefining the awk print field.
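
    For example, capturing each address into its own shell variable (an added sketch):

    ADDR1=$(echo "bla@some.com;john@home.com" | awk -F';' '{print $1}')
    ADDR2=$(echo "bla@some.com;john@home.com" | awk -F';' '{print $2}')
    echo "$ADDR1"   # bla@some.com
    echo "$ADDR2"   # john@home.com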

    Jaro , Jan 7, 2014 at 21:30

    Or even simpler: echo "bla@some.com;john@home.com" | awk 'BEGIN{RS=";"} {print}' – Jaro Jan 7 '14 at 21:30

    Aquarelle , May 6, 2014 at 21:58

    @Jaro This worked perfectly for me when I had a string with commas and needed to reformat it into lines. Thanks. – Aquarelle May 6 '14 at 21:58

    Eduardo Lucio , Aug 5, 2015 at 12:59

    It worked in this scenario -> "echo "$SPLIT_0" | awk -F' inode=' '{print $1}'"! I had problems when trying to use strings (" inode=") instead of characters (";"). $1, $2, $3, $4 are set as positions in an array! If there is a way of setting an array... better! Thanks! – Eduardo Lucio Aug 5 '15 at 12:59

    Tony , Aug 6, 2015 at 2:42

    @EduardoLucio, what I'm thinking about is maybe you can first replace your delimiter inode= into ; for example by sed -i 's/inode\=/\;/g' your_file_to_process , then define -F';' when apply awk , hope that can help you. – Tony Aug 6 '15 at 2:42

    nickjb , Jul 5, 2011 at 13:41

    A different take on Darron's answer , this is how I do it:
    IN="bla@some.com;john@home.com"
    read ADDR1 ADDR2 <<<$(IFS=";"; echo $IN)
    

    ColinM , Sep 10, 2011 at 0:31

    This doesn't work. – ColinM Sep 10 '11 at 0:31

    nickjb , Oct 6, 2011 at 15:33

    I think it does! Run the commands above and then "echo $ADDR1 ... $ADDR2" and i get "bla@some.com ... john@home.com" output – nickjb Oct 6 '11 at 15:33

    Nick , Oct 28, 2011 at 14:36

    This worked REALLY well for me... I used it to iterate over an array of strings which contained comma separated DB,SERVER,PORT data to use mysqldump. – Nick Oct 28 '11 at 14:36

    dubiousjim , May 31, 2012 at 5:28

    Diagnosis: the IFS=";" assignment exists only in the $(...; echo $IN) subshell; this is why some readers (including me) initially think it won't work. I assumed that all of $IN was getting slurped up by ADDR1. But nickjb is correct; it does work. The reason is that echo $IN command parses its arguments using the current value of $IFS, but then echoes them to stdout using a space delimiter, regardless of the setting of $IFS. So the net effect is as though one had called read ADDR1 ADDR2 <<< "bla@some.com john@home.com" (note the input is space-separated not ;-separated). – dubiousjim May 31 '12 at 5:28

    sorontar , Oct 26, 2016 at 4:43

    This fails on spaces and newlines, and also expand wildcards * in the echo $IN with an unquoted variable expansion. – sorontar Oct 26 '16 at 4:43

    gniourf_gniourf , Jun 26, 2014 at 9:11

    In Bash, a bullet proof way, that will work even if your variable contains newlines:
    IFS=';' read -d '' -ra array < <(printf '%s;\0' "$in")
    

    Look:

    $ in=$'one;two three;*;there is\na newline\nin this field'
    $ IFS=';' read -d '' -ra array < <(printf '%s;\0' "$in")
    $ declare -p array
    declare -a array='([0]="one" [1]="two three" [2]="*" [3]="there is
    a newline
    in this field")'
    

    The trick for this to work is to use the -d option of read (delimiter) with an empty delimiter, so that read is forced to read everything it's fed. And we feed read with exactly the content of the variable in , with no trailing newline thanks to printf . Note that we're also putting the delimiter in printf to ensure that the string passed to read has a trailing delimiter. Without it, read would trim potential trailing empty fields:

    $ in='one;two;three;'    # there's an empty field
    $ IFS=';' read -d '' -ra array < <(printf '%s;\0' "$in")
    $ declare -p array
    declare -a array='([0]="one" [1]="two" [2]="three" [3]="")'
    

    the trailing empty field is preserved.


    Update for Bash≥4.4

    Since Bash 4.4, the builtin mapfile (aka readarray ) supports the -d option to specify a delimiter. Hence another canonical way is:

    mapfile -d ';' -t array < <(printf '%s;' "$in")
    

    John_West , Jan 8, 2016 at 12:10

    I found it as the rare solution on that list that works correctly with \n , spaces and * simultaneously. Also, no loops; array variable is accessible in the shell after execution (contrary to the highest upvoted answer). Note, in=$'...' , it does not work with double quotes. I think, it needs more upvotes. – John_West Jan 8 '16 at 12:10

    Darron , Sep 13, 2010 at 20:10

    How about this one liner, if you're not using arrays:
    IFS=';' read ADDR1 ADDR2 <<<$IN
    

    dubiousjim , May 31, 2012 at 5:36

    Consider using read -r ... to ensure that, for example, the two characters "\t" in the input end up as the same two characters in your variables (instead of a single tab char). – dubiousjim May 31 '12 at 5:36

    Luca Borrione , Sep 3, 2012 at 10:07

    -1 This is not working here (ubuntu 12.04). Adding echo "ADDR1 $ADDR1"\n echo "ADDR2 $ADDR2" to your snippet will output ADDR1 bla@some.com john@home.com\nADDR2 (\n is newline) – Luca Borrione Sep 3 '12 at 10:07

    chepner , Sep 19, 2015 at 13:59

    This is probably due to a bug involving IFS and here strings that was fixed in bash 4.3. Quoting $IN should fix it. (In theory, $IN is not subject to word splitting or globbing after it expands, meaning the quotes should be unnecessary. Even in 4.3, though, there's at least one bug remaining--reported and scheduled to be fixed--so quoting remains a good idea.) – chepner Sep 19 '15 at 13:59

    sorontar , Oct 26, 2016 at 4:55

    This breaks if $in contain newlines even if $IN is quoted. And adds a trailing newline. – sorontar Oct 26 '16 at 4:55

    kenorb , Sep 11, 2015 at 20:54

    Here is a clean 3-liner:
    in="foo@bar;bizz@buzz;fizz@buzz;buzz@woof"
    IFS=';' list=($in)
    for item in "${list[@]}"; do echo $item; done
    

    where IFS delimits words based on the separator and () is used to create an array . Then [@] is used to return each item as a separate word.

    If you've any code after that, you also need to restore $IFS , e.g. unset IFS .

    sorontar , Oct 26, 2016 at 5:03

    The use of $in unquoted allows wildcards to be expanded. – sorontar Oct 26 '16 at 5:03

    user2720864 , Sep 24 at 13:46

    + for the unset command – user2720864 Sep 24 at 13:46

    Emilien Brigand , Aug 1, 2016 at 13:15

    Without setting the IFS

    If you just have one colon you can do that:

    a="foo:bar"
    b=${a%:*}
    c=${a##*:}
    

    you will get:

    b = foo
    c = bar
    

    Victor Choy , Sep 16, 2015 at 3:34

    There is a simple and smart way like this:
    echo "add:sfff" | xargs -d: -i  echo {}
    

    But you must use GNU xargs; BSD xargs doesn't support the -d delimiter option. If you use an Apple Mac like me, you can install GNU xargs:

    brew install findutils
    

    then

    echo "add:sfff" | gxargs -d: -i  echo {}
    

    Halle Knast , May 24, 2017 at 8:42

    The following Bash/zsh function splits its first argument on the delimiter given by the second argument:
    split() {
        local string="$1"
        local delimiter="$2"
        if [ -n "$string" ]; then
            local part
            while read -d "$delimiter" part; do
                echo $part
            done <<< "$string"
            echo $part
        fi
    }
    

    For instance, the command

    $ split 'a;b;c' ';'
    

    yields

    a
    b
    c
    

    This output may, for instance, be piped to other commands. Example:

    $ split 'a;b;c' ';' | cat -n
    1   a
    2   b
    3   c
    

    Compared to the other solutions given, this one has the following advantages:

    If desired, the function may be put into a script as follows:

    #!/usr/bin/env bash
    
    split() {
        # ...
    }
    
    split "$@"
    

    sandeepkunkunuru , Oct 23, 2017 at 16:10

    works and neatly modularized. – sandeepkunkunuru Oct 23 '17 at 16:10

    Prospero , Sep 25, 2011 at 1:09

    This is the simplest way to do it.
    spo='one;two;three'
    OIFS=$IFS
    IFS=';'
    spo_array=($spo)
    IFS=$OIFS
    echo ${spo_array[*]}
    

    rashok , Oct 25, 2016 at 12:41

    IN="bla@some.com;john@home.com"
    IFS=';'
    read -a IN_arr <<< "${IN}"
    for entry in "${IN_arr[@]}"
    do
        echo $entry
    done
    

    Output

    bla@some.com
    john@home.com
    

    System : Ubuntu 12.04.1

    codeforester , Jan 2, 2017 at 5:37

    IFS is not getting set in the specific context of read here, and hence it can upset the rest of the code, if any. – codeforester Jan 2 '17 at 5:37

    shuaihanhungry , Jan 20 at 15:54

    you can apply awk to many situations
    echo "bla@some.com;john@home.com"|awk -F';' '{printf "%s\n%s\n", $1, $2}'
    

    also you can use this

    echo "bla@some.com;john@home.com"|awk -F';' '{print $1,$2}' OFS="\n"
    

    ghost , Apr 24, 2013 at 13:13

    If there are no spaces, why not this?
    IN="bla@some.com;john@home.com"
    arr=(`echo $IN | tr ';' ' '`)
    
    echo ${arr[0]}
    echo ${arr[1]}
    

    eukras , Oct 22, 2012 at 7:10

    There are some cool answers here (errator esp.), but for something analogous to split in other languages -- which is what I took the original question to mean -- I settled on this:
    IN="bla@some.com;john@home.com"
    declare -a a="(${IN/;/ })";
    

    Now ${a[0]} , ${a[1]} , etc, are as you would expect. Use ${#a[*]} for number of terms. Or to iterate, of course:

    for i in ${a[*]}; do echo $i; done
    

    IMPORTANT NOTE:

    This works in cases where there are no spaces to worry about, which solved my problem, but may not solve yours. Go with the $IFS solution(s) in that case.

    olibre , Oct 7, 2013 at 13:33

    Does not work when IN contains more than two e-mail addresses. Please refer to the same idea (but fixed) in palindrom's answer. – olibre Oct 7 '13 at 13:33

    sorontar , Oct 26, 2016 at 5:14

    Better use ${IN//;/ } (double slash) to make it also work with more than two values. Beware that any wildcard ( *?[ ) will be expanded. And a trailing empty field will be discarded. – sorontar Oct 26 '16 at 5:14

    jeberle , Apr 30, 2013 at 3:10

    Use the set built-in to load up the $@ array:
    IN="bla@some.com;john@home.com"
    IFS=';'; set $IN; IFS=$' \t\n'
    

    Then, let the party begin:

    echo $#
    for a; do echo $a; done
    ADDR1=$1 ADDR2=$2
    

    sorontar , Oct 26, 2016 at 5:17

    Better use set -- $IN to avoid some issues with "$IN" starting with dash. Still, the unquoted expansion of $IN will expand wildcards ( *?[ ). – sorontar Oct 26 '16 at 5:17
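
    Folding both of those cautions into the answer above gives a sketch along these lines (following the answer above; set -f / set +f are the addition here to guard the unquoted expansion):

    IN="bla@some.com;john@home.com"
    set -f                             # disable wildcard expansion
    IFS=';'; set -- $IN; IFS=$' \t\n'
    set +f
    echo $#                            # 2
    for a; do echo "$a"; done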

    NevilleDNZ , Sep 2, 2013 at 6:30

    Two bourne-ish alternatives, neither of which requires bash arrays:

    Case 1 : Keep it nice and simple: Use a NewLine as the Record-Separator... eg.

    IN="bla@some.com
    john@home.com"
    
    while read i; do
      # process "$i" ... eg.
        echo "[email:$i]"
    done <<< "$IN"
    

    Note: in this first case no sub-process is forked to assist with list manipulation.

    Idea: Maybe it is worth using NL extensively internally , and only converting to a different RS when generating the final result externally .

    Case 2 : Using a ";" as a record separator... eg.

    NL="
    " IRS=";" ORS=";"
    
    conv_IRS() {
      exec tr "$1" "$NL"
    }
    
    conv_ORS() {
      exec tr "$NL" "$1"
    }
    
    IN="bla@some.com;john@home.com"
    IN="$(conv_IRS ";" <<< "$IN")"
    
    while read i; do
      # process "$i" ... eg.
        echo -n "[email:$i]$ORS"
    done <<< "$IN"
    

    In both cases a sub-list can be composed within the loop and it persists after the loop has completed. This is useful when manipulating lists in memory, instead of storing lists in files. {p.s. keep calm and carry on B-) }

    fedorqui , Jan 8, 2015 at 10:21

    Apart from the fantastic answers that were already provided, if it is just a matter of printing out the data you may consider using awk :
    awk -F";" '{for (i=1;i<=NF;i++) printf("> [%s]\n", $i)}' <<< "$IN"
    

    This sets the field separator to ; , so that it can loop through the fields with a for loop and print accordingly.

    Test
    $ IN="bla@some.com;john@home.com"
    $ awk -F";" '{for (i=1;i<=NF;i++) printf("> [%s]\n", $i)}' <<< "$IN"
    > [bla@some.com]
    > [john@home.com]
    

    With another input:

    $ awk -F";" '{for (i=1;i<=NF;i++) printf("> [%s]\n", $i)}' <<< "a;b;c   d;e_;f"
    > [a]
    > [b]
    > [c   d]
    > [e_]
    > [f]
    

    18446744073709551615 , Feb 20, 2015 at 10:49

    In Android shell, most of the proposed methods just do not work:
    $ IFS=':' read -ra ADDR <<<"$PATH"                             
    /system/bin/sh: can't create temporary file /sqlite_stmt_journals/mksh.EbNoR10629: No such file or directory
    

    What does work is:

    $ for i in ${PATH//:/ }; do echo $i; done
    /sbin
    /vendor/bin
    /system/sbin
    /system/bin
    /system/xbin
    

    where // means global replacement.
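
    The difference between the single- and double-slash forms, as a small sketch:

    s="a:b:c"
    echo "${s/:/ }"     # a b:c   - single slash replaces only the first ':'
    echo "${s//:/ }"    # a b c   - double slash replaces every ':'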

    sorontar , Oct 26, 2016 at 5:08

    Fails if any part of $PATH contains spaces (or newlines). Also expands wildcards (asterisk *, question mark ? and braces [ ]). – sorontar Oct 26 '16 at 5:08

    Eduardo Lucio , Apr 4, 2016 at 19:54

    Okay guys!

    Here's my answer!

    DELIMITER_VAL='='
    
    read -d '' F_ABOUT_DISTRO_R <<"EOF"
    DISTRIB_ID=Ubuntu
    DISTRIB_RELEASE=14.04
    DISTRIB_CODENAME=trusty
    DISTRIB_DESCRIPTION="Ubuntu 14.04.4 LTS"
    NAME="Ubuntu"
    VERSION="14.04.4 LTS, Trusty Tahr"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 14.04.4 LTS"
    VERSION_ID="14.04"
    HOME_URL="http://www.ubuntu.com/"
    SUPPORT_URL="http://help.ubuntu.com/"
    BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
    EOF
    
    SPLIT_NOW=$(awk -F$DELIMITER_VAL '{for(i=1;i<=NF;i++){printf "%s\n", $i}}' <<<"${F_ABOUT_DISTRO_R}")
    while read -r line; do
       SPLIT+=("$line")
    done <<< "$SPLIT_NOW"
    for i in "${SPLIT[@]}"; do
        echo "$i"
    done
    

    Why is this approach "the best" for me?

    Because of two reasons:

    1. You do not need to escape the delimiter;
    2. You will not have problems with blank spaces. The values will be properly separated in the array!

    []'s

    gniourf_gniourf , Jan 30, 2017 at 8:26

    FYI, /etc/os-release and /etc/lsb-release are meant to be sourced, and not parsed. So your method is really wrong. Moreover, you're not quite answering the question about splitting a string on a delimiter. – gniourf_gniourf Jan 30 '17 at 8:26

    Michael Hale , Jun 14, 2012 at 17:38

    A one-liner to split a string separated by ';' into an array is:
    IN="bla@some.com;john@home.com"
    ADDRS=( $(IFS=";" echo "$IN") )
    echo ${ADDRS[0]}
    echo ${ADDRS[1]}
    

    This only sets IFS in a subshell, so you don't have to worry about saving and restoring its value.

    Luca Borrione , Sep 3, 2012 at 10:04

    -1 this doesn't work here (ubuntu 12.04). it prints only the first echo with all $IN value in it, while the second is empty. you can see it if you put echo "0: "${ADDRS[0]}\n echo "1: "${ADDRS[1]} the output is 0: bla@some.com;john@home.com\n 1: (\n is new line) – Luca Borrione Sep 3 '12 at 10:04

    Luca Borrione , Sep 3, 2012 at 10:05

    please refer to nickjb's answer at stackoverflow.com/a/6583589/1032370 for a working alternative to this idea – Luca Borrione Sep 3 '12 at 10:05

    Score_Under , Apr 28, 2015 at 17:09

    -1, 1. IFS isn't being set in that subshell (it's being passed to the environment of "echo", which is a builtin, so nothing is happening anyway). 2. $IN is quoted so it isn't subject to IFS splitting. 3. The process substitution is split by whitespace, but this may corrupt the original data. – Score_Under Apr 28 '15 at 17:09
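
    If the goal is still a one-liner that fills an array without permanently changing IFS, a sketch using the read-based form shown elsewhere on this page (assuming bash) is:

    IN="bla@some.com;john@home.com"
    IFS=';' read -r -a ADDRS <<< "$IN"   # IFS is changed only for this read
    echo "${ADDRS[0]}"                   # bla@some.com
    echo "${ADDRS[1]}"                   # john@home.com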

    ajaaskel , Oct 10, 2014 at 11:33

    IN='bla@some.com;john@home.com;Charlie Brown <cbrown@acme.com;!"#$%&/()[]{}*? are no problem;simple is beautiful :-)'
    set -f
    oldifs="$IFS"
    IFS=';'; arrayIN=($IN)
    IFS="$oldifs"
    for i in "${arrayIN[@]}"; do
    echo "$i"
    done
    set +f
    

    Output:

    bla@some.com
    john@home.com
    Charlie Brown <cbrown@acme.com
    !"#$%&/()[]{}*? are no problem
    simple is beautiful :-)
    

    Explanation: Simple assignment using parentheses () converts a semicolon-separated list into an array, provided you have the correct IFS while doing that. A standard FOR loop handles individual items in that array as usual. Notice that the list given for the IN variable must be "hard" quoted, that is, with single ticks.

    IFS must be saved and restored since Bash does not treat an assignment the same way as a command. An alternate workaround is to wrap the assignment inside a function and call that function with a modified IFS. In that case separate saving/restoring of IFS is not needed. Thanks to "Bize" for pointing that out.
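
    A minimal sketch of that function-based workaround (the split_to_array name is just for illustration; this assumes bash's default, non-POSIX mode, where an assignment prefixed to a function call does not persist after the call returns):

    split_to_array() {
        arrayIN=($1)    # unquoted on purpose: word splitting uses the caller-supplied IFS
    }

    IN='bla@some.com;john@home.com;simple is beautiful :-)'
    set -f                            # globbing still needs to be disabled, as above
    IFS=';' split_to_array "$IN"      # IFS is modified only for this call
    set +f
    printf '%s\n' "${arrayIN[@]}"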

    gniourf_gniourf , Feb 20, 2015 at 16:45

    !"#$%&/()[]{}*? are no problem well... not quite: []*? are glob characters. So what about creating this directory and file: `mkdir '!"#$%&'; touch '!"#$%&/()[]{} got you hahahaha - are no problem' and running your command? simple may be beautiful, but when it's broken, it's broken. – gniourf_gniourf Feb 20 '15 at 16:45

    ajaaskel , Feb 25, 2015 at 7:20

    @gniourf_gniourf The string is stored in a variable. Please see the original question. – ajaaskel Feb 25 '15 at 7:20

    gniourf_gniourf , Feb 25, 2015 at 7:26

    @ajaaskel you didn't fully understand my comment. Go in a scratch directory and issue these commands: mkdir '!"#$%&'; touch '!"#$%&/()[]{} got you hahahaha - are no problem' . They will only create a directory and a file, with weird looking names, I must admit. Then run your commands with the exact IN you gave: IN='bla@some.com;john@home.com;Charlie Brown <cbrown@acme.com;!"#$%&/()[]{}*? are no problem;simple is beautiful :-)' . You'll see that you won't get the output you expect. Because you're using a method subject to pathname expansions to split your string. – gniourf_gniourf Feb 25 '15 at 7:26

    gniourf_gniourf , Feb 25, 2015 at 7:29

    This is to demonstrate that the characters * , ? , [...] and even, if extglob is set, !(...) , @(...) , ?(...) , +(...) are problems with this method! – gniourf_gniourf Feb 25 '15 at 7:29

    ajaaskel , Feb 26, 2015 at 15:26

    @gniourf_gniourf Thanks for detailed comments on globbing. I adjusted the code to have globbing off. My point was however just to show that rather simple assignment can do the splitting job. – ajaaskel Feb 26 '15 at 15:26

    > , Dec 19, 2013 at 21:39

    Maybe not the most elegant solution, but works with * and spaces:
    IN="bla@so me.com;*;john@home.com"
    for i in `delims=${IN//[^;]}; seq 1 $((${#delims} + 1))`
    do
       echo "> [`echo $IN | cut -d';' -f$i`]"
    done
    

    Outputs

    > [bla@so me.com]
    > [*]
    > [john@home.com]
    

    Other example (delimiters at beginning and end):

    IN=";bla@so me.com;*;john@home.com;"
    > []
    > [bla@so me.com]
    > [*]
    > [john@home.com]
    > []
    

    Basically it removes every character other than ; , making delims e.g. ;;; . Then it runs a for loop from 1 to the number of delimiters, as counted by ${#delims}. The final step is to safely get the $i-th part using cut.

    [Nov 08, 2018] 15 Linux Split and Join Command Examples to Manage Large Files

    Nov 08, 2018 | www.thegeekstuff.com

    by Himanshu Arora on October 16, 2012


    Linux split and join commands are very helpful when you are manipulating large files. This article explains how to use the Linux split and join commands with descriptive examples.

    Join and split command syntax:

    join [OPTION] FILE1 FILE2
    split [OPTION] [INPUT [PREFIX]]

    Linux Split Command Examples

    1. Basic Split Example

    Here is a basic example of the split command.

    $ split split.zip 
    
    $ ls
    split.zip  xab  xad  xaf  xah  xaj  xal  xan  xap  xar  xat  xav  xax  xaz  xbb  xbd  xbf  xbh  xbj  xbl  xbn
    xaa        xac  xae  xag  xai  xak  xam  xao  xaq  xas  xau  xaw  xay  xba  xbc  xbe  xbg  xbi  xbk  xbm  xbo
    

    So we see that the file split.zip was split into smaller files named x**, where ** is the two-character suffix that is added by default. Also, by default each x** file contains 1000 lines.

    $ wc -l *
       40947 split.zip
        1000 xaa
        1000 xab
        1000 xac
        1000 xad
        1000 xae
        1000 xaf
        1000 xag
        1000 xah
        1000 xai
    ...
    ...
    ...
    

    So the output above confirms that by default each x** file contains 1000 lines.
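
    Since the article covers both split and join, one way to put the pieces back together (not from the article; the x** names are the defaults produced above) is simply to concatenate the chunks in name order and compare the result with the original:

    $ cat x* > split_rejoined.zip
    $ cmp split.zip split_rejoined.zip && echo "files match"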

    2. Change the Suffix Length using -a option

    As discussed in example 1 above, the default suffix length is 2. But this can be changed by using the -a option.


    As you see in the following example, it is using suffix of length 5 on the split files.

    $ split -a5 split.zip
    $ ls
    split.zip  xaaaac  xaaaaf  xaaaai  xaaaal  xaaaao  xaaaar  xaaaau  xaaaax  xaaaba  xaaabd  xaaabg  xaaabj  xaaabm
    xaaaaa     xaaaad  xaaaag  xaaaaj  xaaaam  xaaaap  xaaaas  xaaaav  xaaaay  xaaabb  xaaabe  xaaabh  xaaabk  xaaabn
    xaaaab     xaaaae  xaaaah  xaaaak  xaaaan  xaaaaq  xaaaat  xaaaaw  xaaaaz  xaaabc  xaaabf  xaaabi  xaaabl  xaaabo
    

    Note: Earlier we also discussed other file manipulation utilities – tac, rev, paste.

    3. Customize Split File Size using -b option

    The size of each output split file can be controlled using the -b option.

    In this example, the split files were created with a size of 200000 bytes.

    $ split -b200000 split.zip 
    
    $ ls -lart
    total 21084
    drwxrwxr-x 3 himanshu himanshu     4096 Sep 26 21:20 ..
    -rw-rw-r-- 1 himanshu himanshu 10767315 Sep 26 21:21 split.zip
    -rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xad
    -rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xac
    -rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xab
    -rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xaa
    -rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xah
    -rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xag
    -rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xaf
    -rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xae
    -rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xar
    ...
    ...
    ...
    
    4. Create Split Files with Numeric Suffix using -d option

    As seen in the examples above, the output has the format x**, where ** are alphabetic characters. You can change this to a numeric suffix using the -d option.

    Here is an example. This has numeric suffix on the split files.

    $ split -d split.zip
    $ ls
    split.zip  x01  x03  x05  x07  x09  x11  x13  x15  x17  x19  x21  x23  x25  x27  x29  x31  x33  x35  x37  x39
    x00        x02  x04  x06  x08  x10  x12  x14  x16  x18  x20  x22  x24  x26  x28  x30  x32  x34  x36  x38  x40
    
    5. Customize the Number of Split Chunks using -n option

    To get control over the number of chunks, use the -n option.

    This example will create 50 chunks of split files.

    $ split -n50 split.zip
    $ ls
    split.zip  xac  xaf  xai  xal  xao  xar  xau  xax  xba  xbd  xbg  xbj  xbm  xbp  xbs  xbv
    xaa        xad  xag  xaj  xam  xap  xas  xav  xay  xbb  xbe  xbh  xbk  xbn  xbq  xbt  xbw
    xab        xae  xah  xak  xan  xaq  xat  xaw  xaz  xbc  xbf  xbi  xbl  xbo  xbr  xbu  xbx
    
    6. Avoid Zero Sized Chunks using -e option

    While splitting a relatively small file into a large number of chunks, it's good to avoid zero-sized chunks as they do not add any value. This can be done using the -e option.

    Here is an example:

    $ split -n50 testfile
    
    $ ls -lart x*
    -rw-rw-r-- 1 himanshu himanshu 0 Sep 26 21:55 xag
    -rw-rw-r-- 1 himanshu himanshu 1 Sep 26 21:55 xaf
    -rw-rw-r-- 1 himanshu himanshu 1 Sep 26 21:55 xae
    -rw-rw-r-- 1 himanshu himanshu 1 Sep 26 21:55 xad
    -rw-rw-r-- 1 himanshu himanshu 1 Sep 26 21:55 xac
    -rw-rw-r-- 1 himanshu himanshu 1 Sep 26 21:55 xab
    -rw-rw-r-- 1 himanshu himanshu 1 Sep 26 21:55 xaa
    -rw-rw-r-- 1 himanshu himanshu 0 Sep 26 21:55 xbx
    -rw-rw-r-- 1 himanshu himanshu 0 Sep 26 21:55 xbw
    -rw-rw-r-- 1 himanshu himanshu 0 Sep 26 21:55 xbv
    ...
    ...
    ...
    

    So we see that lots of zero-sized chunks were produced in the above output. Now, let's use the -e option and see the results:

    $ split -n50 -e testfile
    $ ls
    split.zip  testfile  xaa  xab  xac  xad  xae  xaf
    
    $ ls -lart x*
    -rw-rw-r-- 1 himanshu himanshu 1 Sep 26 21:57 xaf
    -rw-rw-r-- 1 himanshu himanshu 1 Sep 26 21:57 xae
    -rw-rw-r-- 1 himanshu himanshu 1 Sep 26 21:57 xad
    -rw-rw-r-- 1 himanshu himanshu 1 Sep 26 21:57 xac
    -rw-rw-r-- 1 himanshu himanshu 1 Sep 26 21:57 xab
    -rw-rw-r-- 1 himanshu himanshu 1 Sep 26 21:57 xaa
    

    So we see that no zero-sized chunks were produced in the above output.

    7. Customize Number of Lines using -l option

    Number of lines per output split file can be customized using the -l option.

    As seen in the example below, split files are created with 20000 lines.

    $ split -l20000 split.zip
    
    $ ls
    split.zip  testfile  xaa  xab  xac
    
    $ wc -l x*
       20000 xaa
       20000 xab
         947 xac
       40947 total
    
    Get Detailed Information using --verbose option

    To get a diagnostic message each time a new split file is opened, use the --verbose option as shown below.

    $ split -l20000 --verbose split.zip
    creating file `xaa'
    creating file `xab'
    creating file `xac'
    

    [Nov 08, 2018] Utilizing multi core for tar+gzip-bzip compression-decompression

    Nov 08, 2018 | stackoverflow.com



    user1118764 , Sep 7, 2012 at 6:58

    I normally compress using tar zcvf and decompress using tar zxvf (using gzip due to habit).

    I've recently gotten a quad core CPU with hyperthreading, so I have 8 logical cores, and I notice that many of the cores are unused during compression/decompression.

    Is there any way I can utilize the unused cores to make it faster?

    Warren Severin , Nov 13, 2017 at 4:37

    The solution proposed by Xiong Chiamiov above works beautifully. I had just backed up my laptop with .tar.bz2 and it took 132 minutes using only one cpu thread. Then I compiled and installed tar from source: gnu.org/software/tar I included the options mentioned in the configure step: ./configure --with-gzip=pigz --with-bzip2=lbzip2 --with-lzip=plzip I ran the backup again and it took only 32 minutes. That's better than 4X improvement! I watched the system monitor and it kept all 4 cpus (8 threads) flatlined at 100% the whole time. THAT is the best solution. – Warren Severin Nov 13 '17 at 4:37

    Mark Adler , Sep 7, 2012 at 14:48

    You can use pigz instead of gzip, which does gzip compression on multiple cores. Instead of using the -z option, you would pipe it through pigz:
    tar cf - paths-to-archive | pigz > archive.tar.gz
    

    By default, pigz uses the number of available cores, or eight if it could not query that. You can ask for more with -p n, e.g. -p 32. pigz has the same options as gzip, so you can request better compression with -9. E.g.

    tar cf - paths-to-archive | pigz -9 -p 32 > archive.tar.gz
    

    user788171 , Feb 20, 2013 at 12:43

    How do you use pigz to decompress in the same fashion? Or does it only work for compression? – user788171 Feb 20 '13 at 12:43

    Mark Adler , Feb 20, 2013 at 16:18

    pigz does use multiple cores for decompression, but only with limited improvement over a single core. The deflate format does not lend itself to parallel decompression. The decompression portion must be done serially. The other cores for pigz decompression are used for reading, writing, and calculating the CRC. When compressing on the other hand, pigz gets close to a factor of n improvement with n cores. – Mark Adler Feb 20 '13 at 16:18
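
    For the decompression direction, a small sketch (archive.tar.gz is a placeholder name):

    pigz -dc archive.tar.gz | tar xf -
    # or, with GNU tar:
    tar -I pigz -xf archive.tar.gz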

    Garrett , Mar 1, 2014 at 7:26

    The hyphen here is stdout (see this page ). – Garrett Mar 1 '14 at 7:26

    Mark Adler , Jul 2, 2014 at 21:29

    Yes. 100% compatible in both directions. – Mark Adler Jul 2 '14 at 21:29

    Mark Adler , Apr 23, 2015 at 5:23

    There is effectively no CPU time spent tarring, so it wouldn't help much. The tar format is just a copy of the input file with header blocks in between files. – Mark Adler Apr 23 '15 at 5:23

    Jen , Jun 14, 2013 at 14:34

    You can also use the tar flag "--use-compress-program=" to tell tar what compression program to use.

    For example use:

    tar -c --use-compress-program=pigz -f tar.file dir_to_zip
    

    ranman , Nov 13, 2013 at 10:01

    This is an awesome little nugget of knowledge and deserves more upvotes. I had no idea this option even existed and I've read the man page a few times over the years. – ranman Nov 13 '13 at 10:01

    Valerio Schiavoni , Aug 5, 2014 at 22:38

    Unfortunately by doing so the concurrent feature of pigz is lost. You can see for yourself by executing that command and monitoring the load on each of the cores. – Valerio Schiavoni Aug 5 '14 at 22:38

    bovender , Sep 18, 2015 at 10:14

    @ValerioSchiavoni: Not here, I get full load on all 4 cores (Ubuntu 15.04 'Vivid'). – bovender Sep 18 '15 at 10:14

    Valerio Schiavoni , Sep 28, 2015 at 23:41

    On compress or on decompress ? – Valerio Schiavoni Sep 28 '15 at 23:41

    Offenso , Jan 11, 2017 at 17:26

    I prefer tar - dir_to_zip | pv | pigz > tar.file . pv helps me estimate, you can skip it. But still it's easier to write and remember. – Offenso Jan 11 '17 at 17:26

    Maxim Suslov , Dec 18, 2014 at 7:31

    Common approach

    There is option for tar program:

    -I, --use-compress-program PROG
          filter through PROG (must accept -d)
    

    You can use a multithreaded version of an archiver or compressor utility.

    The most popular multithreaded compressors are pigz (instead of gzip) and pbzip2 (instead of bzip2). For instance:

    $ tar -I pbzip2 -cf OUTPUT_FILE.tar.bz2 paths_to_archive
    $ tar --use-compress-program=pigz -cf OUTPUT_FILE.tar.gz paths_to_archive
    

    The archiver must accept -d. If your replacement utility doesn't have this parameter and/or you need to specify additional parameters, then use pipes (add parameters if necessary):

    $ tar cf - paths_to_archive | pbzip2 > OUTPUT_FILE.tar.bz2
    $ tar cf - paths_to_archive | pigz > OUTPUT_FILE.tar.gz
    

    The input and output of the single-threaded and multithreaded versions are compatible. You can compress using the multithreaded version and decompress using the single-threaded version, and vice versa.

    p7zip

    For compression with p7zip you need a small shell script like the following:

    #!/bin/sh
    case $1 in
      -d) 7za -txz -si -so e;;
       *) 7za -txz -si -so a .;;
    esac 2>/dev/null
    

    Save it as 7zhelper.sh. Here is an example of usage:

    $ tar -I 7zhelper.sh -cf OUTPUT_FILE.tar.7z paths_to_archive
    $ tar -I 7zhelper.sh -xf OUTPUT_FILE.tar.7z
    
    xz

    Regarding multithreaded XZ support: if you are running version 5.2.0 or above of XZ Utils, you can utilize multiple cores for compression by setting -T or --threads to an appropriate value via the environment variable XZ_DEFAULTS (e.g. XZ_DEFAULTS="-T 0" ).
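
    A minimal sketch of that, assuming xz >= 5.2 and GNU tar (file names follow the convention used above):

    XZ_DEFAULTS="-T 0" tar -Jcf OUTPUT_FILE.tar.xz paths_to_archive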

    This is a fragment of man for 5.1.0alpha version:

    Multithreaded compression and decompression are not implemented yet, so this option has no effect for now.

    However this will not work for decompression of files that haven't also been compressed with threading enabled. From man for version 5.2.2:

    Threaded decompression hasn't been implemented yet. It will only work on files that contain multiple blocks with size information in block headers. All files compressed in multi-threaded mode meet this condition, but files compressed in single-threaded mode don't even if --block-size=size is used.

    Recompiling with replacement

    If you build tar from sources, then you can recompile with parameters

    --with-gzip=pigz
    --with-bzip2=lbzip2
    --with-lzip=plzip
    

    After recompiling tar with these options you can check the output of tar's help:

    $ tar --help | grep "lbzip2\|plzip\|pigz"
      -j, --bzip2                filter the archive through lbzip2
          --lzip                 filter the archive through plzip
      -z, --gzip, --gunzip, --ungzip   filter the archive through pigz
    

    > , Apr 28, 2015 at 20:41

    This is indeed the best answer. I'll definitely rebuild my tar! – user1985657 Apr 28 '15 at 20:41

    mpibzip2 , Apr 28, 2015 at 20:57

    I just found pbzip2 and mpibzip2 . mpibzip2 looks very promising for clusters or if you have a laptop and a multicore desktop computer for instance. – user1985657 Apr 28 '15 at 20:57

    oᴉɹǝɥɔ , Jun 10, 2015 at 17:39

    This is a great and elaborate answer. It may be good to mention that multithreaded compression (e.g. with pigz ) is only enabled when it reads from the file. Processing STDIN may in fact be slower. – oᴉɹǝɥɔ Jun 10 '15 at 17:39

    selurvedu , May 26, 2016 at 22:13

    Plus 1 for the xz option. It is the simplest, yet effective, approach. – selurvedu May 26 '16 at 22:13

    panticz.de , Sep 1, 2014 at 15:02

    You can use the shortcut -I for tar's --use-compress-program switch, and invoke pbzip2 for bzip2 compression on multiple cores:
    tar -I pbzip2 -cf OUTPUT_FILE.tar.bz2 DIRECTORY_TO_COMPRESS/
    

    einpoklum , Feb 11, 2017 at 15:59

    A nice TL;DR for @MaximSuslov's answer . – einpoklum Feb 11 '17 at 15:59

    ,

    If you want to have more flexibility with filenames and compression options, you can use:
    find /my/path/ -type f -name "*.sql" -o -name "*.log" -exec \
    tar -P --transform='s@/my/path/@@g' -cf - {} + | \
    pigz -9 -p 4 > myarchive.tar.gz
    
    Step 1: find

    find /my/path/ -type f -name "*.sql" -o -name "*.log" -exec

    This command will look for the files you want to archive, in this case /my/path/*.sql and /my/path/*.log . Add as many -o -name "pattern" as you want.

    -exec will execute the next command using the results of find : tar

    Step 2: tar

    tar -P --transform='s@/my/path/@@g' -cf - {} +

    --transform is a simple string replacement parameter. It will strip the path of the files from the archive so the tarball's root becomes the current directory when extracting. Note that you can't use the -C option to change directory, as you would lose the benefits of find: all files of the directory would be included.

    -P tells tar to use absolute paths, so it doesn't trigger the warning "Removing leading `/' from member names". The leading '/' will be removed by --transform anyway.

    -cf - tells tar to write the archive to stdout; the actual tarball name is supplied later by the redirection.

    {} + passes every file that find found previously.

    Step 3: pigz

    pigz -9 -p 4

    Use as many parameters as you want. In this case -9 is the compression level and -p 4 is the number of cores dedicated to compression. If you run this on a heavily loaded webserver, you probably don't want to use all available cores.

    Step 4: archive name

    > myarchive.tar.gz

    Finally.

    [Nov 08, 2018] Technology Detox The Health Benefits of Unplugging Unwinding by Sara Tipton

    Notable quotes:
    "... Another great tip is to buy one of those old-school alarm clocks so the smartphone isn't ever in your bedroom. ..."
    Nov 07, 2018 | www.zerohedge.com

    Authored by Sara Tipton via ReadyNutrition.com,

    Recent studies have shown that 90% of Americans use digital devices for two or more hours each day and the average American spends more time a day on high-tech devices than they do sleeping: 8 hours and 21 minutes to be exact. If you've ever considered attempting a "digital detox", there are some health benefits to making that change and a few tips to make things a little easier on yourself.

    Many Americans are on their phones rather than playing with their children or spending quality family time together. Some people give up technology, or certain aspects of it, such as social media, for varying reasons, and there are some shockingly terrific health benefits that come along with that type of detox from technology. In fact, more and more health experts and medical professionals are suggesting a periodic digital detox: an extended period without those technology gadgets. Studies continue to show that a digital detox has proven to be beneficial for relationships, productivity, physical health, and mental health. If you find yourself overly stressed or unproductive or generally disengaged from those closest to you, it might be time to unplug.

    DIGITAL ADDICTION RESOLUTION

    It may go unnoticed but there are many who are actually addicted to their smartphones or tablet. It could be social media or YouTube videos, but these are the people who never step away. They are the ones with their face in their phone while out to dinner with their family. They can't have a quiet dinner without their phone on the table. We've seen them at the grocery store aimlessly pushing around a cart while ignoring their children and scrolling on their phone. A whopping 83% of American teenagers claim to play video games while other people are in the same room and 92% of teens report going online daily . 24% of those users access the internet via laptops, tablets, and mobile devices.

    Addiction therapists who treat gadget-obsessed people say their patients aren't that different from other kinds of addicts. Whereas alcohol, tobacco, and drugs involve a substance that a user's body gets addicted to, in behavioral addiction, it's the mind's craving to turn to the smartphone or the Internet. Taking a break teaches us that we can live without constant stimulation, and lessens our dependence on electronics. Trust us: that Facebook message with a funny meme attached or juicy tidbit of gossip can wait.

    IMPROVE RELATIONSHIPS AND BE MORE PERSONABLE

    Another benefit to keeping all your electronics off is that it will allow you to establish good mannerisms and people skills and build your relationships to a strong level of connection. If you have ever sat across from someone at the dinner table who made more phone contact than eye contact, you know how it feels to take a backseat to a screen. Cell phones and other gadgets force people to look down and away from their surroundings, giving them a closed off and inaccessible (and often rude) demeanor. A digital detox has the potential of forcing you out of that unhealthy comfort zone. It could be a start toward rebuilding a struggling relationship too. In a Forbes study , 3 out of 5 people claimed that they spend more time on their digital devices than they do with their partners. This can pose a real threat to building and maintaining real-life relationships. The next time you find yourself going out on a dinner date, try leaving your cell phone and other devices at home and actually have a conversation. Your significant other will thank you.

    BETTER SLEEP AND HEALTHIER EATING HABITS

    The sleep interference caused by these high-tech gadgets is another mental health concern. The stimulation caused by artificial light can make you feel more awake than you really are, which can potentially interfere with your sleep quality. It is recommended that you give yourself at least two hours of technology-free time before bedtime. The "blue light" has been shown to interfere with sleeping patterns by inhibiting melatonin (the hormone which controls our sleep/wake cycle known as circadian rhythm) production. Try shutting off your phone after dinner and leaving it in a room other than your bedroom. Another great tip is to buy one of those old-school alarm clocks so the smartphone isn't ever in your bedroom. This will help your body readjust to a normal and healthy sleep schedule.

    Your eating habits can also suffer if you spend too much time checking your newsfeed. The Rochester Institute of Technology released a study that revealed students are more likely to eat while staring into digital media than they are to eat at a dinner table. This means that eating has now become a multi-tasking activity, rather than a social and loving experience in which healthy foods meant to sustain the body are consumed. This can prevent students from eating consciously, which promotes unhealthy eating habits such as overeating and easy choices, such as a bag of chips as opposed to washing and peeling some carrots. Whether you're an overworked college student checking your Facebook, or a single bachelor watching reruns of The Office , a digital detox is a great way to promote healthy and conscious eating.

    IMPROVE OVERALL MENTAL HEALTH

    Social media addicts experience a wide array of emotions when looking at the photos of Instagram models and the exercise regimes of others who live in exotic locations. These emotions can be mentally draining and psychologically unhealthy and lead to depression. Smartphone use has been linked to loneliness, shyness, and less engagement at work. In other words, one may have many "social media friends" while being lonely and unsatisfied because those friends are only accessible through their screen. Start by limiting your time on social media. Log out of all social media accounts. That way, you've actually got to log back in if you want to see what that Parisian Instagram vegan model is up to.

    If you feel like a detox is in order but don't know how to go about it, start off small. Try shutting off your phone after dinner and don't turn it back on until after breakfast. Keep your phone in another room besides your bedroom overnight. If you use your phone as an alarm clock, buy a cheap alarm clock to use instead to lessen your dependence on your phone. Boredom is often the biggest factor in the beginning stages of a detox, but try playing an undistracted board game with your children, leaving your phone at home during a nice dinner out, or playing with a pet. All of these things are not only good for you but good for your family and beloved furry critter as well!

    [Nov 07, 2018] Stuxnet 2.0? Iran claims Israel launched new cyber attacks

    Nov 07, 2018 | arstechnica.com

    President Rouhani's phone "bugged," attacks against network infrastructure claimed.

    Sean Gallagher - 11/5/2018, 5:10 PM


    Last week, Iran's chief of civil defense claimed that the Iranian government had fought off Israeli attempts to infect computer systems with what he described as a new version of Stuxnet -- the malware reportedly developed jointly by the US and Israel that targeted Iran's uranium-enrichment program. Gholamreza Jalali, chief of the National Passive Defense Organization (NPDO), told Iran's IRNA news service, "Recently, we discovered a new generation of Stuxnet which consisted of several parts... and was trying to enter our systems."

    On November 5, Iran Telecommunications Minister Mohammad-Javad Azari Jahromi accused Israel of being behind the attack, and he said that the malware was intended to "harm the country's communication infrastructures." Jahromi praised "technical teams" for shutting down the attack, saying that the attackers "returned empty-handed." A report from Iran's Tasnim news agency quoted Deputy Telecommunications Minister Hamid Fattahi as stating that more details of the cyber attacks would be made public soon.

    Jahromi said that Iran would sue Israel over the attack through the International Court of Justice. The Iranian government has also said it would sue the US in the ICJ over the reinstatement of sanctions. Israel has remained silent regarding the accusations .

    The claims come a week after the NPDO's Jalali announced that President Hassan Rouhani's cell phone had been "tapped" and was being replaced with a new, more secure device. This led to a statement by Iranian Supreme Leader Ayatollah Ali Khamenei, exhorting Iran's security apparatus to "confront infiltration through scientific, accurate, and up-to-date action."

    While Iran protests the alleged attacks -- about which the Israeli government has been silent -- Iranian hackers have continued to conduct their own cyber attacks. A recent report from security tools company Carbon Black based on data from the company's incident-response partners found that Iran had been a significant source of attacks in the third quarter of this year, with one incident-response professional noting, "We've seen a lot of destructive actions from Iran and North Korea lately, where they've effectively wiped machines they suspect of being forensically analyzed."


    SymmetricChaos , 2018-11-05T17:16:46-05:00

    I feel like governments still think of cyber warfare as something that doesn't really count and are willing to be dangerously provocative in their use of it.

    ihatewinter , 2018-11-05T17:27:06-05:00

    Another day in international politics. Beats lobbing bombs at each other. +13 ( +16 / -3 )

    fahrenheit_ak , 2018-11-05T17:46:44-05:00

    corey_1967 wrote:
    The twin pillars of Iran's foreign policy - America is evil and Wipe Israel off the map - do not appear to be serving the country very well.

    They serve Iran very well, America is an easy target to gather support against, and Israel is more than willing to play the bad guy (for a bunch of reasons including Israels' policy of nuclear hegemony in the region and historical antagonism against Arab states).
    revision0 , 2018-11-05T17:48:22-05:00 Israeli hackers?

    Go on!

    Quote:

    Israeli hackers offered Cambridge Analytica, the data collection firm that worked on U.S. President Donald Trump's election campaign, material on two politicians who are heads of state, the Guardian reported Wednesday, citing witnesses.

    https://www.haaretz.com/israel-news/isr ... -1.5933977

    Quote:

    For $20M, These Israeli Hackers Will Spy On Any Phone On The Planet

    https://www.forbes.com/sites/thomasbrew ... -ulin-ss7/

    Quote:

    While Israelis are not necessarily number one in technical skills -- that award goes to Russian hackers -- Israelis are probably the best at thinking on their feet and adjusting to changing situations on the fly, a trait essential for success in a wide range of areas, including cyber-security, said Forzieri. "In modern attacks, the human factor -- for example, getting someone to click on a link that will install malware -- constitutes as much as 85% of a successful attack," he said.

    http://www.timesofisrael.com/israeli-ha ... ty-expert/

    +5 ( +9 / -4 )
    ihatewinter , 2018-11-05T17:52:15-05:00
    dramamoose wrote:
    thorpe wrote:
    The pro-Israel trolls out in front of this comment section...

    You don't have to be pro-Israel to be anti-Iran. Far from it. I think many of Israel's actions in Palestine are reprehensible, but I also know to (rightly) fear an Islamic dictatorship who is actively funding terrorism groups and is likely a few years away from having a working nuclear bomb, should they resume research (which the US actions seem likely to cause).

    The US created the Islamic Republic of Iran by holding a cruel dictator in power rather than risking a slide into communism. We should be engaging diplomatically, rather than trying sanctions which clearly don't work. But I don't think that the original Stuxnet was a bad idea, nor do I think that intense surveillance of what could be a potentially very dangerous country is a bad one either.

    If the Israelis (slash US) did in fact target civilian infrastructure, that's a problem. Unless, of course, they were bugging them for espionage purposes.

    Agree. While Israel is not about to win Humanitarian Nation of the year Award any time soon, I don't see it going to Iran in a close vote tally either.

    [Nov 05, 2018] Frequently there is no way to judge whether an individual is competent or incompetent to hold a given position. Stated another way: there is no adequate competence criterion for technical managers.

    Nov 05, 2018 | www.rako.com

    However, there is another anomaly with more interesting consequences; namely, there frequently is no way to judge whether an individual is competent or incompetent to hold a given position. Stated another way: there is no adequate competence criterion for technical managers.

    Consider, for example, the manager of a small group of chemists. He asked his group to develop a nonfading system of dyes using complex organic compounds that they had been studying for some time. Eighteen months later they reported little success with dyes but had discovered a new substance that was rather effective as an insect repellent.

    Should the manager be chastised for failing to accomplish anything toward his original objective, or should he be praised for resourcefulness in finding something useful in the new chemical system? Was 18 months a long time or a short time for this accomplishment?

    [Nov 05, 2018] Management theories for CIOs The Peter Principle and Parkinson's Law

    Notable quotes:
    "... Josι Ortega y Gasset. ..."
    "... "Works expands so as to fill the time available for its completion." ..."
    "... "The time spent on any item of the agenda will be in inverse proportion to the sum of money involved." ..."
    "... Gφdel, Escher, Bach: An Eternal Golden Braid, ..."
    "... "It always takes longer than you expect, even when you take into account Hofstadter's Law." ..."
    "... "Anything that can go wrong, will go wrong." ..."
    "... "Anything that can go wrong, will go wrong - at the worst possible moment." ..."
    Nov 05, 2018 | cio.co.uk

    From the semi-serious to the confusingly ironic, the business world is not short of pseudo-scientific principles, laws and management theories concerning how organisations and their leaders should and should not behave. CIO UK takes a look at some sincere, irreverent and leftfield management concepts that are relevant to CIOs and all business leaders.

    The Peter Principle

    A concept formulated by Laurence J Peter in 1969, the Peter Principle runs that in a hierarchical structure, employees are promoted to their highest level of incompetence at which point they are no longer able to fulfil an effective role for their organisation.

    In the Peter Principle people are promoted when they excel, but the process breaks down once they stop excelling: they are then unlikely to gain further promotion or to be demoted, with the logical end point, according to Peter, being that "every post tends to be occupied by an employee who is incompetent to carry out its duties" and that "work is accomplished by those employees who have not yet reached their level of incompetence".

    To counter the Peter Principle, leaders could seek the advice of Spanish liberal philosopher José Ortega y Gasset. While he died 14 years before the Peter Principle was published, Ortega had been in exile in Argentina during the Spanish Civil War and, prompted by his observations in South America, had quipped: "All public employees should be demoted to their immediately lower level, as they have been promoted until turning incompetent."

    Parkinson's Law

    Cyril Northcote Parkinson's eponymous law, derived from his extensive experience in the British Civil Service, states that: "Work expands so as to fill the time available for its completion."

    The first sentence of a humorous essay published in The Economist in 1955, Parkinson's Law is familiar to CIOs, IT teams, journalists, students, and every other occupation that can learn from Parkinson's mocking of public administration in the UK. The corollary law most applicable to CIOs runs that "data expands to fill the space available for storage", while Parkinson's broader work about the self-satisfying uncontrolled growth of bureaucratic apparatus is as relevant for the scaling startup as it is to the large corporate.

    Related Parkinson's Law of Triviality

    Flirting with the ground between flippancy and seriousness, Parkinson argued that boards and members of an organisation give disproportional weight to trivial issues and those that are easiest to grasp for non-experts. In his words: "The time spent on any item of the agenda will be in inverse proportion to the sum of money involved."

    Parkinson's anecdote is of a fictional finance committee's three-item agenda to cover a £10 million contract discussing the components of a new nuclear reactor, a proposal to build a new £350 bicycle shed, and finally which coffee and biscuits should be supplied at future committee meetings. While the first item on the agenda is far too complex and ironed out in two and a half minutes, 45 minutes is spent discussing bike sheds, and debates about the £21 refreshment provisions are so drawn out that the committee runs over its two-hour time allocation with a note to provide further information about coffee and biscuits to be continued at the next meeting.

    The Dilbert Principle

    Referring to a 1990s theory by popular Dilbert cartoonist Scott Adams, the Dilbert Principle runs that companies tend to promote their least competent employees to management roles to curb the amount of damage they are capable of doing to the organisation.

    Unlike the Peter Principle , which is positive in its aims by rewarding competence, the Dilbert Principle assumes people are moved to quasi-senior supervisory positions in a structure where they are less likely to affect the productive output of the company, which is performed by those lower down the ladder.

    Hofstadter's Law

    Coined by Douglas Hofstadter in his 1979 book Gödel, Escher, Bach: An Eternal Golden Braid, Hofstadter's Law states: "It always takes longer than you expect, even when you take into account Hofstadter's Law."

    Particularly relevant to CIOs and business leaders overseeing large projects and transformation programmes, Hofstadter's Law suggests that even appreciating your own subjective pessimism in your projected timelines, they are still worth re-evaluating.

    Related Murphy's Law

    "Anything that can go wrong, will go wrong."

    An old adage and without basis in any scientific laws or management principles, Murphy's Law is always worth bearing in mind for CIOs or when undertaking thorough scenario planning for adverse situations. It's also perhaps worth bearing in mind the corollary principle Finagle's Law , which states: "Anything that can go wrong, will go wrong - at the worst possible moment."

    Lindy Effect

    Concerning the life expectancy of non-perishable things, the Lindy Effect is as relevant to CIOs procuring new technologies or maintaining legacy infrastructure as it is to the those buying homes, used cars, a fountain pen or mobile phone.

    Harder to define than other principles and laws, the Lindy Effect suggests that mortality rate decreases with time, unlike in nature and in human beings where - after childhood - mortality rate increases with time. Ergo, every day of server uptime implies a longer remaining life expectancy.

    A corollary effect related to the Lindy Effect, and one which explains it well, is the Copernican Principle , which states that future life expectancy is equal to current age, i.e. that barring any additional evidence to the contrary, something must be halfway through its life span.

    The Lindy Effect and the idea that older things are more robust have specific relevance to CIOs beyond servers and IT infrastructure through their association with source code, where newer code will in general have a lower probability of remaining in place within a year and an increased likelihood of causing problems compared to code written a long time ago, and with project management, where, as the lifecycle of a project grows and its scope changes, an Agile methodology can be used to mitigate project risks and fix mistakes.

    The Jevons Paradox

    Wikipedia offers the best economic description of the Jevons Paradox or Jevons effect, in which technological progress increases the efficiency with which a resource is used, but the rate of consumption of that resource subsequently rises because of increasing demand.

    Think email, think Slack, instant messaging, printing, how easy it is to create Excel reports, coffee-making, conference calls, network and internet speeds; the list is endless. If you suspect demand for these has increased along with technological advancement, negating the positive impact of said efficiency gains in the first instance, that sounds like the paradox first described by William Stanley Jevons in 1865 when observing coal consumption following the introduction of the Watt steam engine.

    Ninety-Ninety Rule

    A light-hearted quip bespoke to computer programming and software development, the Ninety-Ninety Rule states that: "The first 90% of the code accounts for the first 90% of the development time. The remaining 10% of the code accounts for the other 90% of the development time." See also, Hofstadter's Law .

    Related to this is the Pareto Principle , or the 80-20 Rule, and how it relates to software, with supporting anecdotes that "20% of the code has 80% of the errors" or in load testing that it is common practice to estimate that 80% of the traffic occurs during 20% of the time.

    Pygmalion Effect and Golem Effect

    Named after the Greek myth of Pygmalion, a sculptor who fell in love with a statue he carved, and relevant to managers across industry and seniority, the Pygmalion Effect runs that higher expectations lead to an increased performance.

    Counter to the Pygmalion Effect is the Golem effect , whereby low expectations result in a decrease in performance.

    Dunning-Kruger Effect

    The Dunning-Kruger Effect , named after two psychologists from Cornell University, states that incompetent people are significantly less able to recognise their own lack of skill, the extent of their inadequacy, and even to gauge the skill of others. Furthermore, they are only able to acknowledge their own incompetence after they have been exposed to training in that skill.

    At a loss to find a better visual representation of the Dunning-Kruger Effect , here is Simon Wardley's graph with Knowledge and Expertise axes - a warning as to why self-professed experts are the worst people to listen to on a given subject.


    See also this picture of AOL "Digital Prophet" David Shing and web developer Sir Tim Berners-Lee.

    [Nov 05, 2018] Putt's Law

    Nov 05, 2018 | davewentzel.com

    ... ... ...

    Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand. --Putt's Law

    If you are in IT and are not familiar with Archibald Putt, I suggest you stop reading this blog post, RIGHT NOW, and go buy the book Putt's Law and the Successful Technocrat. How to Win in the Information Age . Putt's Law , for short, is a combination of Dilbert and The Mythical Man-Month . It shows you exactly how managers of technologists think, how they got to where they are, and how they stay there. Just like Dilbert, you'll initially laugh, then you'll cry, because you'll realize just how true Putt's Law really is. But, unlike Dilbert, whose technologist-fans tend to have a revulsion for management, Putt tries to show the technologist how to become one of the despised. Now granted, not all of us technologists have a desire to be management, it is still useful to "know one's enemy."

    Two amazing facts:

    1. Archibald Putt is a pseudonym and his true identity has yet to be revealed. A true "Deep Throat" for us IT guys.
    2. Putt's Law was written back in 1981. It amazes me how the Old IT Classics (Putt's Law, Mythical Man-Month, anything by Knuth) are even more relevant today than ever.

    Every technical hierarchy, in time, develops a competence inversion. --Putt's Corollary

    Putt's Corollary says that in a corporate technocracy, the more technically competent people will remain in charge of the technology, whereas the less competent will be promoted to management. That sounds a lot like The Peter Principle (another timeless classic written in 1969).

    People rise to their level of incompetence. --Dave's Summary of the Peter Principle

    I can tell you that managers have the least information about technical issues and they should be the last people making technical decisions. Period. I've often heard that managers are used as the arbiters of technical debates. Bad idea. Arbiters should always be the [[benevolent dictators]] (the most admired/revered technologist you have). The exception is when your manager is also your benevolent dictator, which is rare. Few humans have the capability, or time, for both.

    I see more and more hit-and-run managers where I work. They feel as though they are the technical decision-makers. They attend technical meetings they were not invited to. Then they ask pointless, irrelevant questions that suck the energy out of the team. Then they want status updates hourly. Eventually after they have totally derailed the process they move along to some other, sexier problem with more management visibility.

    I really admire managers who follow the MBWA ( management by walking around ) principle. This management philosophy is very simple...the best managers are those who leave their offices and observe. By observing they learn what the challenges are for their teams and how to help them better.

    So, what I am looking for in a manager:

    1. He knows he is the least qualified person to make a technical decision.
    2. He is a facilitator. He knows how to help his technologists succeed.
    3. MBWA

    [Nov 05, 2018] Why the Peter Principle Works

    Notable quotes:
    "... The Corner Office ..."
    Aug 15, 2011 | www.cbsnews.com
    Why The Peter Principle Works

    Everyone's heard of the Peter Principle - that employees tend to rise to their level of incompetence - a concept that walks that all-too-fine line between humor and reality.

    We've all seen it in action more times than we'd like. Ironically, some percentage of you will almost certainly be promoted to a position where you're no longer effective. For some of you, that's already happened. Sobering thought.

    Well, here's the thing. Not only is the Peter Principle alive and well in corporate America, but contrary to popular wisdom, it's actually necessary for a healthy capitalist system. That's right, you heard it here, folks, incompetence is a good thing. Here's why.

    Robert Browning once said, "A man's reach should exceed his grasp." It's a powerful statement that means you should seek to improve your situation, strive to go above and beyond. Not only is that an embodiment of capitalism, but it also leads directly to the Peter Principle because, well, how do you know when to quit?

    Now, most of us don't perpetually reach for the stars, but until there's clear evidence that we're not doing ourselves or anyone else any good, we're bound to keep right on reaching. After all, objectivity is notoriously difficult when opportunities for a better life are staring you right in the face.

    I mean, who turns down promotions? Who doesn't strive to reach that next rung on the ladder? When you get an email from an executive recruiter about a VP or CEO job, are you likely to respond, "Sorry, I think that may be beyond my competency" when you've got to send two kids to college and you may actually want to retire someday?

    Wasn't America founded by people who wanted a better life for themselves and their children? God knows, there were plenty of indications that they shouldn't take the plunge and, if they did, wouldn't succeed. That's called a challenge and, well, do you ever really know if you've reached too far until after the fact?

    Perhaps the most interesting embodiment of all this is the way people feel about CEOs. Some think pretty much anyone can do a CEO's job for a fraction of the compensation. Seriously, you hear that sort of thing a lot, especially these days with class warfare being the rage and all.

    One The Corner Office reader asked straight out in an email: "Would you agree that, in most cases, the company could fire the CEO and hire someone young, smart, and hungry at 1/10 the salary/perks/bonuses who would achieve the same performance?"

    Sure, it's easy: you just set the direction, hire a bunch of really smart executives, then get out of the way and let them do their jobs. Once in a blue moon you swoop in, deal with a problem, then return to your ivory tower. Simple.

    Well, not exactly.

    You see, I sort of grew up at Texas Instruments in the 80s when the company was nearly run into the ground by Mark Shepherd and J. Fred Bucy - two CEOs who never should have gotten that far in their careers.

    But the company's board, in its wisdom, promoted Jerry Junkins and, after his untimely death, Tom Engibous , to the CEO post. Not only were those guys competent, they revived the company and transformed it into what it is today.

    I've seen what a strong CEO can do for a company, its customers, its shareholders, and its employees. I've also seen the destruction the Peter Principle can bring to those same stakeholders. But, even now, after 30 years of corporate and consulting experience, the one thing I've never seen is a CEO or executive with an easy job.

    That's because there's no such thing. And to think you can eliminate incompetency from the executive ranks when it exists at every organizational level is, to be blunt, childlike or Utopian thinking. It's silly and trite. It doesn't even make sense.

    It's not as if TI's board knew ahead of time that Shepherd and Bucy weren't the right guys for the job. They'd both had long, successful careers at the company. But the board did right the ship in time. And that's the mark of a healthy system at work.

    The other day I read a truly fantastic story in Fortune about the rise and fall of Jeffrey Kindler as CEO of troubled pharmaceutical giant Pfizer . I remember when he suddenly stepped down amidst all sorts of rumor and conjecture about the underlying causes of the shocking news.

    What really happened is the guy had a fabulous career as a litigator, climbed the corporate ladder to general counsel of McDonald's and then Pfizer, had some limited success in operations, and once he was promoted to CEO, flamed out. Not because he was incompetent - he wasn't. And certainly not because he was a dysfunctional, antagonistic, micromanaging control freak - he was.

    He failed because it was a really tough job and he was in over his head. It happens. It happens a lot. After all, this wasn't just some everyday company that's simple to run. This was Pfizer - a pharmaceutical giant with its top products going generic and a dried-up drug pipeline in need of a major overhaul.

    The guy couldn't handle it. And when executives with issues get in over their heads, their issues become their undoing. It comes as no surprise that folks at McDonald's were surprised at the way he flamed out at Pfizer. That was a whole different ballgame.

    Now, I bet those same people who think a CEO's job is a piece of cake will have a similar response to the Kindler situation at Pfizer. Why take the job if he knew he couldn't handle it? The board should have canned him before it got to that point. Why didn't the guy's executives speak up sooner?

    Because, just like at TI, nobody knows ahead of time if people are going to be effective on the next rung of the ladder. Every situation is unique and there are no questions or tests that will foretell the future. I mean, it's not as if King Solomon comes along and writes who the right guy for the job is on the wall.

    The Peter Principle works because, in a capitalist system, there are top performers, abysmal failures, and everything in between. Expecting anything different, when people must reach for the stars to achieve growth and success so our children can have a better life than ours, isn't how things work in the real world.

    The Peter Principle works because it's the yin to Browning's yang, the natural outcome of striving to better our lives. Want to know how to bring down a free market capitalist system? Don't take the promotion because you're afraid to fail.

    [Nov 05, 2018] Putt's Law, Peter Principle, Dilbert Principle of Incompetence and Parkinson's Law

    Nov 05, 2018 | asmilingassasin.blogspot.com

    Putt's Law, Peter Principle, Dilbert Principle of Incompetence & Parkinson's Law

    June 10, 2015 -- I am a big fan of Scott Adams and the Dilbert comic series. I realize that these laws and principles - Putt's Law, the Peter Principle, the Dilbert Principle, and Parkinson's Law - aren't necessarily founded in reality. It's easy to look at a manager's closed door and wonder what he or she does all day, if anything. But having said that, I have come to realize the difficulty and scope of what management entails. It's hard work and requires a certain skill-set that I'm only beginning to develop. One should therefore look at these principles and laws with an acknowledgment that they most likely developed from the employee's perspective, not the manager's. Take them with a pinch of salt!
    Putt's Law: "Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand."

    Putt's Corollary: "Every technical hierarchy, in time, develops a competence inversion," with incompetence being "flushed out of the lower levels" of a technocratic hierarchy, ensuring that technically competent people remain directly in charge of the actual technology while those without technical competence move into management.

    The Peter Principle: The Peter Principle states that "in a hierarchy every employee tends to rise to his level of incompetence." In other words, employees who perform their roles with competence are promoted into successively higher levels until they reach a level at which they are no longer competent. There they remain. For example, let's say you are a brilliant programmer. You spend your days coding with amazing efficiency and prowess. After a couple of years, you're promoted to lead programmer, and then promoted to team manager. You may have no interest in managing other programmers, but it's the reward for your competence. There you sit -- you have risen to a level of incompetence. Your technical skills lie dormant while you fill your day with one-on-one meetings, department strategy meetings, planning meetings, budgets, and reports.

    The Dilbert Principle: The principle states that companies tend to promote the most incompetent employees to management as a form of damage control. The principle argues that leaders, specifically those in middle management, are in reality the ones who have little effect on productivity. In order to limit the harm caused by incompetent employees who are actually not doing the work, companies make them leaders. The Dilbert Principle assumes that "the majority of real, productive work in a company is done by people lower in the power ladder." Those in management don't actually do anything to move the work forward.

    How it happens: The incompetent leader stereotype often hits new leaders, specifically those who have no prior experience in a particular field. Often, leaders who have been transferred from other departments are viewed as mere figureheads rather than actual leaders with knowledge of the work situation. Failure to prove technical capability can also lead to a leader being branded incompetent.

    Why it's bad: Being a victim of the incompetent leader stereotype is bad. First, no one takes you seriously. Your ability to contribute to projects is hampered when your followers actively disregard anything you say as fluff. This is especially true if you are in middle management, where your power as a leader is limited. Second, your chances of rising through the ranks are curtailed. If you are viewed as an incompetent leader by your followers, your superiors are unlikely to entrust you with further projects that have more impact.

    How to get over it: Know when to concede. As a leader, no one expects you to be competent in every area, though basic knowledge of every section you are leading is necessary. Readily admitting incompetency in certain areas will take the sting out of it when others paint you as incompetent. Prove competency somewhere. Quickly establish yourself as having some purpose in the workplace, rather than being a mere token. This can be done by personally involving yourself in certain projects.

    Parkinson's Law: Parkinson's Law states that "work expands so as to fill the time available for its completion."
    Although this law has applications to procrastination, storage capacity, and resource usage, Parkinson focused his law on corporate bureaucracy. Parkinson says that bureaucracies swell for two reasons: (1) "A manager wants to multiply subordinates, not rivals" and (2) "Managers make work for each other." In other words, a team may swell not because the workload increases, but because it has the capacity and resources that allow for an increased workload, even if the workload does not in fact increase. People without any work find ways to increase the amount of "work" and therefore add to the size of their bureaucracy.

    My analysis: I know none of these principles or laws gives much credit to management. The wrong person fills the wrong role, the role exists only to minimize damage, or the role swells unnecessarily simply because it can. I find the whole topic of management somewhat fascinating, and not because I think these theories apply to my own managers. These management theories are nevertheless relevant. Software coders looking to leverage coding talent for their projects often find themselves in management roles without a strong understanding of how to manage people. Most of the time, these coders fail to engage. The project leaders are usually brilliant at their technical jobs but don't excel at management.
    However the key principle to follow should be this: put individuals to work in their core competencies . It makes little sense to take your most brilliant engineer and have him or her manage people and budgets. Likewise, it makes no sense to take a shrewd consultant, one who can negotiate projects and requirements down to the minutest detail, and put that individual into a role involving creative design and content generation. However, to implement this model, you have to allow for reward without a dramatic change in job responsibilities or skills.

    [Nov 04, 2018] Archibald Putt The Unknown Technocrat Returns - IEEE Spectrum

    Nov 04, 2018 | spectrum.ieee.org

    While similar things can, and do, occur in large technical hierarchies, incompetent technical people experience a social pressure from their more competent colleagues that causes them to seek security within the ranks of management. In technical hierarchies, there is always the possibility that incompetence will be rewarded by promotion.

    Other Putt laws we love include the law of failure: "Innovative organizations abhor little failures but reward big ones." And the first law of invention: "An innovated success is as good as a successful innovation."

    Now Putt has revised and updated his short, smart book, to be released in a new edition by Wiley-IEEE Press ( http://www.wiley.com/ieee ) at the end of this month. There have been murmurings that Putt's identity, the subject of much rumormongering, will be revealed after the book comes out, but we think that's unlikely. How much more interesting it is to have an anonymous chronicler wandering the halls of the tech industry, codifying its unstated, sometimes bizarre, and yet remarkably consistent rules of behavior.

    This is management writing the way it ought to be. Think Dilbert , but with a very big brain. Read it and weep. Or laugh, depending on your current job situation.

    [Nov 04, 2018] Two Minutes on Hiring by Eric Samuelson

    Notable quotes:
    "... Eric Samuelson is the creator of the Confident Hiring System™. Working with Dave Anderson of Learn to Lead, he provides the Anderson Profiles and related services to clients in the automotive retail industry as well as a variety of other businesses. ..."
    Nov 04, 2018 | www.andersonprofiles.com

    In 1981, an author in the Research and Development field, writing under the pseudonym Archibald Putt, penned this famous quote, now known as Putt's Law:

    "Technology is dominated by two types of people: those who understand what they do not manage, and those who manage what they do not understand."

    Have you ever hired someone without knowing for sure if they can do the job? Have you promoted a good salesperson to management only to realize you made a dire mistake? The qualities needed to succeed in a technical field are quite different than for a leader.

    The legendary immigrant engineer Charles Steinmetz worked at General Electric in the early 1900s. He made phenomenal advancements in the field of electric motors. His work was instrumental to the growth of the electric power industry. With a goal of rewarding him, GE promoted him to a management position, but he failed miserably. Realizing their error, and not wanting to offend this genius, GE's leadership retitled him as a Chief Engineer, with no supervisory duties, and let him go back to his research.

    Avoid the double disaster of losing a good worker and creating a management failure by promoting him into the wrong role. By using the unique Anderson Position Overlay system, you can avoid future regret by comparing your candidate's qualities to the requirements of the position before saying "Welcome Aboard".

    Eric Samuelson is the creator of the Confident Hiring System™. Working with Dave Anderson of Learn to Lead, he provides the Anderson Profiles and related services to clients in the automotive retail industry as well as a variety of other businesses.

    [Nov 04, 2018] Putt's Law and the Successful Technocrat

    Nov 04, 2018 | en.wikipedia.org

    From Wikipedia, the free encyclopedia

    Putt's Law and the Successful Technocrat
    Author: Archibald Putt (pseudonym)
    Illustrator: Dennis Driscoll
    Country: United States
    Language: English
    Genre: Industrial management
    Publisher: Wiley-IEEE Press
    Publication date: 28 April 2006
    Media type: Print (hardcover)
    Pages: 171
    ISBN: 0-471-71422-4
    OCLC: 68710099
    Dewey Decimal: 658.22
    LC Class: HD31 .P855 2006

    Putt's Law and the Successful Technocrat is a book, credited to the pseudonym Archibald Putt, published in 1981. An updated edition, subtitled How to Win in the Information Age , was published by Wiley-IEEE Press in 2006. The book is based upon a series of articles published in Research/Development Magazine in 1976 and 1977.

    It proposes Putt's Law and Putt's Corollary, [1] which are principles of negative selection similar to the Dilbert principle proposed by Scott Adams in the 1990s. Putt's law is sometimes grouped together with the Peter principle, Parkinson's Law and Stephen Potter's Gamesmanship series as "P-literature". [2]

    Putt's Law

    The book proposes Putt's Law and Putt's Corollary

    References
    1. Archibald Putt. Putt's Law and the Successful Technocrat: How to Win in the Information Age, Wiley-IEEE Press (2006), ISBN 0-471-71422-4. Preface.
    2. John Walker (October 1981). "Review of Putt's Law and the Successful Technocrat". New Scientist: 52.
    3. Archibald Putt. Putt's Law and the Successful Technocrat: How to Win in the Information Age, Wiley-IEEE Press (2006), ISBN 0-471-71422-4. Page 7.

    [Nov 03, 2018] David Both

    Jun 22, 2017 | opensource.com
    ...

    The long listing of the /lib64 directory above shows that the first character in the filemode is the letter "l," which means that each is a soft or symbolic link.

    Hard links

    In An introduction to Linux's EXT4 filesystem , I discussed the fact that each file has one inode that contains information about that file, including the location of the data belonging to that file. Figure 2 in that article shows a single directory entry that points to the inode. Every file must have at least one directory entry that points to the inode that describes the file. The directory entry is a hard link, thus every file has at least one hard link.

    In Figure 1 below, multiple directory entries point to a single inode. These are all hard links. I have abbreviated the locations of three of the directory entries using the tilde ( ~ ) convention for the home directory, so that ~ is equivalent to /home/user in this example. Note that the fourth directory entry is in a completely different directory, /home/shared , which might be a location for sharing files between users of the computer.

    Figure 1: Multiple directory entries pointing to a single inode

    Hard links are limited to files contained within a single filesystem. "Filesystem" is used here in the sense of a partition or logical volume (LV) that is mounted on a specified mount point, in this case /home . This is because inode numbers are unique only within each filesystem, and a different filesystem, for example, /var or /opt , will have inodes with the same number as the inode for our file.

    Because all the hard links point to the single inode that contains the metadata about the file, all of these attributes are part of the file, such as ownerships, permissions, and the total number of hard links to the inode, and cannot be different for each hard link. It is one file with one set of attributes. The only attribute that can be different is the file name, which is not contained in the inode. Hard links to a single file/inode located in the same directory must have different names, due to the fact that there can be no duplicate file names within a single directory.

    The number of hard links for a file is displayed with the ls -l command. If you want to display the actual inode numbers, the command ls -li does that.
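
    You can also query a single file directly: the stat command from GNU coreutils reports the hard link count and the inode number in one line. A minimal sketch (the file name is only an example):

    stat -c 'inode=%i  hard links=%h  name=%n' /etc/hosts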

    Symbolic (soft) links

    The difference between a hard link and a soft link, also known as a symbolic link (or symlink), is that, while hard links point directly to the inode belonging to the file, soft links point to a directory entry, i.e., one of the hard links. Because soft links point to a hard link for the file and not the inode, they are not dependent upon the inode number and can work across filesystems, spanning partitions and LVs.

    The downside to this is: If the hard link to which the symlink points is deleted or renamed, the symlink is broken. The symlink is still there, but it points to a hard link that no longer exists. Fortunately, the ls command highlights broken links with flashing white text on a red background in a long listing.
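
    The ls coloring makes broken links easy to spot in a single directory; to hunt for them across a whole tree, one common idiom with GNU or POSIX find is to follow symlinks while testing for the link type, which only dangling links still match:

    # list broken (dangling) symbolic links under the current directory
    find -L . -type l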

    Lab project: experimenting with links

    I think the easiest way to understand the use of and differences between hard and soft links is with a lab project that you can do. This project should be done in an empty directory as a non-root user . I created the ~/temp directory for this project, and you should, too. It creates a safe place to do the project and provides a new, empty directory to work in so that only files associated with this project will be located there.

    Initial setup

    First, create the temporary directory in which you will perform the tasks needed for this project. Ensure that the present working directory (PWD) is your home directory, then enter the following command.

    mkdir temp
    

    Change into ~/temp to make it the PWD with this command.

    cd temp
    

    To get started, we need to create a file we can link to. The following command does that and provides some content as well.

    du -h > main.file.txt
    

    Use the ls -l long list to verify that the file was created correctly. It should look similar to my results. Note that the file size is only 7 bytes, but yours may vary by a byte or two.

    [dboth@david temp]$ ls -l
    total 4
    -rw-rw-r-- 1 dboth dboth 7 Jun 13 07:34 main.file.txt

    Notice the number "1" following the file mode in the listing. That number represents the number of hard links that exist for the file. For now, it should be 1 because we have not created any additional links to our test file.

    Experimenting with hard links

    Hard links create a new directory entry pointing to the same inode, so when hard links are added to a file, you will see the number of links increase. Ensure that the PWD is still ~/temp . Create a hard link to the file main.file.txt , then do another long list of the directory.

    [dboth@david temp]$ ln main.file.txt link1.file.txt
    [dboth@david temp]$ ls -l
    total 8
    -rw-rw-r-- 2 dboth dboth 7 Jun 13 07:34 link1.file.txt
    -rw-rw-r-- 2 dboth dboth 7 Jun 13 07:34 main.file.txt

    Notice that both files have two links and are exactly the same size. The date stamp is also the same. This is really one file with one inode and two links, i.e., directory entries to it. Create a second hard link to this file and list the directory contents. You can create the link to either of the existing ones: link1.file.txt or main.file.txt .

    [dboth@david temp]$ ln link1.file.txt link2.file.txt ; ls -l
    total 16
    -rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 link1.file.txt
    -rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 link2.file.txt
    -rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 main.file.txt

    Notice that each new hard link in this directory must have a different name because two files -- really directory entries -- cannot have the same name within the same directory. Try to create another link with a target name the same as one of the existing ones.

    [dboth@david temp]$ ln main.file.txt link2.file.txt
    ln: failed to create hard link 'link2.file.txt': File exists

    Clearly that does not work, because link2.file.txt already exists. So far, we have created only hard links in the same directory. So, create a link in your home directory, the parent of the temp directory in which we have been working so far.

    [dboth@david temp]$ ln main.file.txt ../main.file.txt ; ls -l ../main*
    -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 main.file.txt

    The ls command in the above listing shows that the main.file.txt file does exist in the home directory with the same name as the file in the temp directory. Of course, these are not different files; they are the same file with multiple links -- directory entries -- to the same inode. To help illustrate the next point, add a file that is not a link.

    [dboth@david temp]$ touch unlinked.file ; ls -l
    total 12
    -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link1.file.txt
    -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link2.file.txt
    -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 main.file.txt
    -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file

    Look at the inode number of the hard links and that of the new file using the -i option to the ls command.

    [dboth@david temp]$ ls -li
    total 12
    657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link1.file.txt
    657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link2.file.txt
    657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 main.file.txt
    657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file

    Notice the number 657024 to the left of the file mode in the example above. That is the inode number, and all three file links point to the same inode. You can use the -i option to view the inode number for the link we created in the home directory as well, and that will also show the same value. The inode number of the file that has only one link is different from the others. Note that the inode numbers will be different on your system.

    Let's change the size of one of the hard-linked files.

    [dboth@david temp]$ df -h > link2.file.txt ; ls -li
    total 12
    657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link1.file.txt
    657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link2.file.txt
    657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 main.file.txt
    657863 -rw-rw-r-- 1 dboth dboth    0 Jun 14 08:18 unlinked.file

    The file size of all the hard-linked files is now larger than before. That is because there is really only one file that is linked to by multiple directory entries.

    I know this next experiment will work on my computer because my /tmp directory is on a separate LV. If you have a separate LV or a filesystem on a different partition (if you're not using LVs), determine whether or not you have access to that LV or partition. If you don't, you can try to insert a USB memory stick and mount it. If one of those options works for you, you can do this experiment.

    Try to create a link to one of the files in your ~/temp directory in /tmp (or wherever your different filesystem directory is located).

    [dboth@david temp]$ ln link2.file.txt /tmp/link3.file.txt
    ln: failed to create hard link '/tmp/link3.file.txt' => 'link2.file.txt': Invalid cross-device link

    Why does this error occur? The reason is each separate mountable filesystem has its own set of inode numbers. Simply referring to a file by an inode number across the entire Linux directory structure can result in confusion because the same inode number can exist in each mounted filesystem.
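
    A quick way to check whether two paths are on the same filesystem, and therefore whether a hard link between them can work at all, is to compare the filesystems or device numbers they belong to. A minimal sketch using the directories from this example:

    df ~/temp /tmp                   # shows which mounted filesystem each path lives on
    stat -c '%d  %n' ~/temp /tmp     # %d prints the device number; different numbers mean different filesystems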

    There may be a time when you will want to locate all the hard links that belong to a single inode. You can find the inode number using the ls -li command. Then you can use the find command to locate all links with that inode number.

    [dboth@david temp]$ find . -inum 657024
    ./main.file.txt
    ./link1.file.txt
    ./link2.file.txt

    Note that the find command did not find all four of the hard links to this inode because we started at the current directory of ~/temp . The find command only finds files in the PWD and its subdirectories. To find all the links, we can use the following command, which specifies your home directory as the starting place for the search.

    [dboth@david temp]$ find ~ -samefile main.file.txt
    /home/dboth/temp/main.file.txt
    /home/dboth/temp/link1.file.txt
    /home/dboth/temp/link2.file.txt
    /home/dboth/main.file.txt

    You may see error messages if you do not have permissions as a non-root user. This command also uses the -samefile option instead of specifying the inode number. This works the same as using the inode number and can be easier if you know the name of one of the hard links.
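
    If you want to search the entire filesystem that holds the file while discarding those permission errors, you can redirect stderr to /dev/null; and because hard links can never cross filesystem boundaries, adding -xdev keeps find from descending into other mounted filesystems. A minimal sketch, assuming /home is the mount point that contains the lab directory, as in this example:

    find /home -xdev -samefile ~/temp/main.file.txt 2>/dev/null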

    Experimenting with soft links

    As you have just seen, creating hard links is not possible across filesystem boundaries; that is, from a filesystem on one LV or partition to a filesystem on another. Soft links are a means to answer that problem with hard links. Although they can accomplish the same end, they are very different, and knowing these differences is important.

    Let's start by creating a symlink in our ~/temp directory to start our exploration.

    [dboth@david temp]$ ln -s link2.file.txt link3.file.txt ; ls -li
    total 12
    657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link1.file.txt
    657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link2.file.txt
    658270 lrwxrwxrwx 1 dboth dboth   14 Jun 14 15:21 link3.file.txt -> link2.file.txt
    657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 main.file.txt
    657863 -rw-rw-r-- 1 dboth dboth    0 Jun 14 08:18 unlinked.file

    The hard links, those that have the inode number 657024 , are unchanged, and the number of hard links shown for each has not changed. The newly created symlink has a different inode, number 658270 . The soft link named link3.file.txt points to link2.file.txt . Use the cat command to display the contents of link3.file.txt . The file mode information for the symlink starts with the letter " l " which indicates that this file is actually a symbolic link.
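
    Note that cat follows the symlink and prints the data in the target file; if you want to see where the link itself points, readlink reads the stored target path. A small sketch using the lab's file names (readlink -f is a GNU coreutils extension):

    readlink link3.file.txt      # prints the stored target: link2.file.txt
    readlink -f link3.file.txt   # resolves the link to an absolute, canonical path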

    The size of the symlink link3.file.txt is only 14 bytes in the example above. That is the size of the text link3.file.txt -> link2.file.txt , which is the actual content of the directory entry. The directory entry link3.file.txt does not point to an inode; it points to another directory entry, which makes it useful for creating links that span file system boundaries. So, let's create that link we tried before from the /tmp directory.

    [dboth@david temp]$ ln -s /home/dboth/temp/link2.file.txt /tmp/link3.file.txt ; ls -l /tmp/link*
    lrwxrwxrwx 1 dboth dboth 31 Jun 14 21:53 /tmp/link3.file.txt -> /home/dboth/temp/link2.file.txt

    Deleting links

    There are some other things that you should consider when you need to delete links or the files to which they point.

    First, let's delete the link main.file.txt . Remember that every directory entry that points to an inode is simply a hard link.

    [dboth@david temp]$ rm main.file.txt ; ls -li
    total 8
    657024 -rw-rw-r-- 3 dboth dboth 1157 Jun 14 14:14 link1.file.txt
    657024 -rw-rw-r-- 3 dboth dboth 1157 Jun 14 14:14 link2.file.txt
    658270 lrwxrwxrwx 1 dboth dboth   14 Jun 14 15:21 link3.file.txt -> link2.file.txt
    657863 -rw-rw-r-- 1 dboth dboth    0 Jun 14 08:18 unlinked.file

    The link main.file.txt was the first link created when the file was created. Deleting it now still leaves the original file and its data on the hard drive along with all the remaining hard links. To delete the file and its data, you would have to delete all the remaining hard links.

    Now delete the link2.file.txt hard link.

    [dboth@david temp]$ rm link2.file.txt ; ls -li
    total 8
    657024 -rw-rw-r-- 3 dboth dboth 1157 Jun 14 14:14 link1.file.txt
    658270 lrwxrwxrwx 1 dboth dboth   14 Jun 14 15:21 link3.file.txt -> link2.file.txt
    657024 -rw-rw-r-- 3 dboth dboth 1157 Jun 14 14:14 main.file.txt
    657863 -rw-rw-r-- 1 dboth dboth    0 Jun 14 08:18 unlinked.file

    Notice what happens to the soft link. Deleting the hard link to which the soft link points leaves a broken link. On my system, the broken link is highlighted in colors and the target hard link is flashing. If the broken link needs to be fixed, you can create another hard link in the same directory with the same name as the old one, so long as not all the hard links have been deleted. You could also recreate the link itself, with the link maintaining the same name but pointing to one of the remaining hard links. Of course, if the soft link is no longer needed, it can be deleted with the rm command.

    The unlink command can also be used to delete files and links. It is very simple and has no options, as the rm command does. It does, however, more accurately reflect the underlying process of deletion, in that it removes the link -- the directory entry -- to the file being deleted.
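
    As a small sketch of these cleanup options, using the lab's file names: ln -sf replaces an existing symlink in place, repointing it at a hard link that still exists, and unlink removes a single directory entry.

    ln -sf link1.file.txt link3.file.txt   # repoint the broken symlink at a surviving hard link
    ls -li link3.file.txt                  # verify the new target
    unlink link3.file.txt                  # or simply remove the symlink; same effect as rm for a single link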

    Final thoughts

    I worked with both types of links for a long time before I began to understand their capabilities and idiosyncrasies. It took writing a lab project for a Linux class I taught to fully appreciate how links work. This article is a simplification of what I taught in that class, and I hope it speeds your learning curve.

    David Both is a Linux and Open Source advocate who resides in Raleigh, North Carolina. He has been in the IT industry for over forty years and taught OS/2 for IBM, where he worked for over 20 years. While at IBM, he wrote the first training course for the original IBM PC in 1981. He has taught RHCE classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State of North Carolina. He has been working with Linux and Open Source Software for almost 20 years.

    dgrb on 23 Jun 2017:

    There is a hard link "gotcha" which IMHO is worth mentioning.

    If you use an editor which makes automatic backups - emacs certainly is one such - then you may end up with a new version of the edited file, while the backup is the linked copy, because the editor simply renames the file to the backup name (with emacs, test.c would be renamed test.c~) and the new version when saved under the old name is no longer linked.

    Symbolic links avoid this problem, so I tend to use them for source code where required.
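
    A quick way to see this gotcha for yourself is to simulate what such an editor does: rename the original file, then write a "new" file under the old name. The file names below are hypothetical; watch the inode numbers in the ls -li output.

    echo "draft 1" > notes.txt
    ln notes.txt notes.hardlink.txt   # both names now share one inode (link count 2)
    ls -li notes*
    mv notes.txt notes.txt~           # the editor renames the original as its backup
    echo "draft 2" > notes.txt        # "saving" creates a brand-new file with a new inode
    ls -li notes*                     # notes.txt no longer shares an inode with notes.hardlink.txt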

    [Nov 03, 2018] Neoliberal Measurement Mania

    Highly recommended!
    Technology is dominated by two types of people: those who understand what they do not manage, and those who manage what they do not understand. -- Archibald Putt
    Neoliberal PHBs like to talk about KLOCs, error counts, tickets closed, and other numerical measurements designed so that lower-level PHBs can use them to report fake results to higher-level PHBs. These attempts to quantify "the quality" and volume of work performed by software developers and sysadmins completely miss the point. For software, they can even lead to code bloat.
    The number of tickets taken and resolved in a specified time period is probably the most ignorant way to measure the performance of sysadmins. A sysadmin can invent creative ways of generating and resolving tickets, and spend time accomplishing fake tasks instead of thinking about the real problems the datacenter faces. Primitive measurement strategies devalue the work performed by sysadmins and programmers. They focus on the wrong things. They create boundaries that are supposed to contain us in a manner that is comprehensible to a PHB who knows nothing about the real problems we face.
    Notable quotes:
    "... Technology is dominated by two types of people: those who understand what they do not manage, and those who manage what they do not understand. ..."
    Nov 03, 2018 | www.rako.com

    In an advanced research or development project, success or failure is largely determined when the goals or objectives are set and before a manager is chosen. While a hard-working and diligent manager can increase the chances of success, the outcome of the project is most strongly affected by preexisting but unknown technological factors over which the project manager has no control. The success or failure of the project should not, therefore, be used as the sole measure or even the primary measure of the manager's competence.

    Putt's Law Is promulgated

    Without an adequate competence criterion for technical managers, there is no way to determine when a person has reached his level of incompetence. Thus a clever and ambitious individual may be promoted from one level of incompetence to another. He will ultimately perform incompetently in the highest level of the hierarchy just as he did in numerous lower levels. The lack of an adequate competence criterion combined with the frequent practice of creative incompetence in technical hierarchies results in a competence inversion, with the most competent people remaining near the bottom while persons of lesser talent rise to the top. It also provides the basis for Putt's Law, which can be stated in an intuitive and nonmathematical form as follows:

    Technology is dominated by two types of people: those who understand what they do not manage, and those who manage what they do not understand.

    As in any other hierarchy, the majority of persons in technology neither understand nor manage much of anything. This, however, does not create an exception to Putt's Law, because such persons clearly do not dominate the hierarchy. While this was not previously stated as a basic law, it is clear that the success of every technocrat depends on his ability to deal with and benefit from the consequences of Putt's Law.

    [Nov 03, 2018] Archibald Putt The Unknown Technocrat Returns - IEEE Spectrum

    Notable quotes:
    "... Who is Putt? Well, for those of you under 40, the pseudonymous Archibald Putt, Ph.D., penned a series of articles for Research/Development magazine in the 1970s that eventually became the 1981 cult classic Putt's Law and the Successful Technocrat , an unorthodox and archly funny how-to book for achieving tech career success. ..."
    "... His first law, "Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand," along with its corollary, "Every technical hierarchy, in time, develops a competence inversion," have been immortalized on Web sites around the world. ..."
    "... what's a competence inversion? It means that the best and the brightest in a technology company tend to settle on the lowest rungs of the corporate ladder -- where things like inventing and developing new products get done -- while those who manage what they cannot hope to make or understand float to the top (see Putt's first law, above, and a fine example of Putt's law in action in the editorial, " Is Bad Design a Nuisance? "). ..."
    "... Other Putt laws we love include the law of failure: "Innovative organizations abhor little failures but reward big ones." And the first law of invention: "An innovated success is as good as a successful innovation." ..."
    "... This is management writing the way it ought to be. Think Dilbert , but with a very big brain. Read it and weep. Or laugh, depending on your current job situation. ..."
    "... n.hantman@ieee.org ..."
    Nov 03, 2018 | spectrum.ieee.org

    If you want to jump-start your technology career, put aside your Peter Drucker, your Tom Peters, and your Marcus Buckingham management tomes. Archibald Putt is back.

    Who is Putt? Well, for those of you under 40, the pseudonymous Archibald Putt, Ph.D., penned a series of articles for Research/Development magazine in the 1970s that eventually became the 1981 cult classic Putt's Law and the Successful Technocrat , an unorthodox and archly funny how-to book for achieving tech career success.

    In the book, Putt put forth a series of laws and axioms for surviving and succeeding in the unique corporate cultures of big technology companies, where being the builder of the best technology and becoming the top dog on the block almost never mix. His first law, "Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand," along with its corollary, "Every technical hierarchy, in time, develops a competence inversion," have been immortalized on Web sites around the world.

    The first law is obvious, but what's a competence inversion? It means that the best and the brightest in a technology company tend to settle on the lowest rungs of the corporate ladder -- where things like inventing and developing new products get done -- while those who manage what they cannot hope to make or understand float to the top (see Putt's first law, above, and a fine example of Putt's law in action in the editorial, " Is Bad Design a Nuisance? ").

    Other Putt laws we love include the law of failure: "Innovative organizations abhor little failures but reward big ones." And the first law of invention: "An innovated success is as good as a successful innovation."

    Now Putt has revised and updated his short, smart book, to be released in a new edition by Wiley-IEEE Press ( http://www.wiley.com/ieee ) at the end of this month. There have been murmurings that Putt's identity, the subject of much rumormongering, will be revealed after the book comes out, but we think that's unlikely. How much more interesting it is to have an anonymous chronicler wandering the halls of the tech industry, codifying its unstated, sometimes bizarre, and yet remarkably consistent rules of behavior.

    This is management writing the way it ought to be. Think Dilbert , but with a very big brain. Read it and weep. Or laugh, depending on your current job situation.

    The editorial content of IEEE Spectrum does not represent official positions of the IEEE or its organizational units. Please address comments to Forum at n.hantman@ieee.org .

    [Nov 03, 2018] Technology is dominated by two types of people; those who understand what they don't manage; and those who manage what they don't understand – ARCHIBALD PUTT ( PUTT'S LAW )

    Notable quotes:
    "... These C level guys see cloud services – applications, data, backup, service desk – as a great way to free up a blockage in how IT service is being delivered on premise. ..."
    "... IMHO there is a big difference between management of IT and management of IT service. Rarely do you get people who can do both. ..."
    Nov 03, 2018 | brummieruss.wordpress.com

    ...Cloud introduces a whole new ball game and will no doubt perpetuate Putt's Law forevermore. Why?

    Well unless 100% of IT infrastructure goes up into the clouds ( unlikely for any organization with a history ; likely for a new organization ( probably micro small ) that starts up in the next few years ) the 'art of IT management' will demand even more focus and understanding.

    I always think a great acid test of Putts Law is to look at one of the two aspects of IT management

    1. Show me a simple process that you follow each day that delivers an aspect of IT service i.e. how to buy a piece of IT stuff, or a way to report a fault
    2. Show me how you manage a single entity on the network i.e. a file server, a PC, a network switch

    Usually the answers (which will be different from people on the same team, in the same room, and from the same person on different days!) will give you an insight into Putt's Law.

    Child's play for most, of course, who are challenged with some really complex management situations such as data center virtualization projects, storage explosion control, edge device management, backend application upgrades, global messaging migrations and B2C identity integration. But of course, if it's evident that they seem to be managing (simple things) without true understanding, one could argue "how the hell can they be expected to manage what they understand with the complex things?" Fair point?

    Of course many C level people have an answer to Putts Law. Move the problem to people who do understand what they manage. Professionals who provide cloud versions of what the C level person struggles to get a professional service from. These C level guys see cloud services – applications, data, backup, service desk – as a great way to free up a blockage in how IT service is being delivered on premise. And they are right ( and wrong ).

    ... ... ...

    ( Quote attributed to Archibald Putt author of Putt's Law and the Successful Technocrat: How to Win in the Information Age )

    rowan says: March 9, 2012 at 9:03 am

    IMHO there is a big difference between management of IT and management of IT service. Rarely do you get people who can do both. Understanding inventory, disk space, security, etc. is one thing; but understanding the performance of apps and user impact is another ball game. Putt's Law is alive and well in my organisation. TGIF.

    Rowan in Belfast.

    stephen777 says: March 31, 2012 at 7:32 am

    Rowan is right. I used to be an IT Manager but now my title is Service Delivery Manager. Why? Because we had a new CTO who changed how people saw what we did. I've been doing this new role for 5 years and I really do understand what I don't manage. LOL

    Stephen777

    [Nov 03, 2018] David Both

    Nov 03, 2018 | opensource.com




    In previous articles, including An introduction to Linux's EXT4 filesystem; Managing devices in Linux; An introduction to Linux filesystems; and A Linux user's guide to Logical Volume Management, I have briefly mentioned an interesting feature of Linux filesystems that can make some tasks easier by providing access to files from multiple locations in the filesystem directory tree.

    There are two types of Linux filesystem links: hard and soft. The difference between the two types of links is significant, but both types are used to solve similar problems. They both provide multiple directory entries (or references) to a single file, but they do it quite differently. Links are powerful and add flexibility to Linux filesystems because everything is a file .


    I have found, for instance, that some programs required a particular version of a library. When a library upgrade replaced the old version, the program would crash with an error specifying the name of the old, now-missing library. Usually, the only change in the library name was the version number. Acting on a hunch, I simply added a link to the new library but named the link after the old library name. I tried the program again and it worked perfectly. And, okay, the program was a game, and everyone knows the lengths that gamers will go to in order to keep their games running.

    In fact, almost all applications are linked to libraries using a generic name with only a major version number in the link name, while the link points to the actual library file that also has a minor version number. In other instances, required files have been moved from one directory to another to comply with the Linux file specification, and there are links in the old directories for backwards compatibility with those programs that have not yet caught up with the new locations. If you do a long listing of the /lib64 directory, you can find many examples of both.

    lrwxrwxrwx. 1 root root 36 Dec 8 2016 cracklib_dict.hwm -> ../../usr/share/cracklib/pw_dict.hwm
    lrwxrwxrwx. 1 root root 36 Dec 8 2016 cracklib_dict.pwd -> ../../usr/share/cracklib/pw_dict.pwd
    lrwxrwxrwx. 1 root root 36 Dec 8 2016 cracklib_dict.pwi -> ../../usr/share/cracklib/pw_dict.pwi
    lrwxrwxrwx. 1 root root 27 Jun 9 2016 libaccountsservice.so.0 -> libaccountsservice.so.0.0.0
    -rwxr-xr-x. 1 root root 288456 Jun 9 2016 libaccountsservice.so.0.0.0
    lrwxrwxrwx 1 root root 15 May 17 11:47 libacl.so.1 -> libacl.so.1.1.0
    -rwxr-xr-x 1 root root 36472 May 17 11:47 libacl.so.1.1.0
    lrwxrwxrwx. 1 root root 15 Feb 4 2016 libaio.so.1 -> libaio.so.1.0.1
    -rwxr-xr-x. 1 root root 6224 Feb 4 2016 libaio.so.1.0.0
    -rwxr-xr-x. 1 root root 6224 Feb 4 2016 libaio.so.1.0.1
    lrwxrwxrwx. 1 root root 30 Jan 16 16:39 libakonadi-calendar.so.4 -> libakonadi-calendar.so.4.14.26
    -rwxr-xr-x. 1 root root 816160 Jan 16 16:39 libakonadi-calendar.so.4.14.26
    lrwxrwxrwx. 1 root root 29 Jan 16 16:39 libakonadi-contact.so.4 -> libakonadi-contact.so.4.14.26

    A few of the links in the /lib64 directory

    The long listing of the /lib64 directory above shows that the first character in the filemode is the letter "l," which means that each is a soft or symbolic link.
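
    That naming convention is easy to reproduce by hand. A minimal sketch; the library name and directory below are hypothetical, and on a real system ldconfig normally creates the major-version link from the library's soname, while -devel packages ship the fully generic one:

    cd /usr/local/lib                            # hypothetical install location
    ls -l libexample.so.2.1.0                    # the real file, carrying the full version number
    ln -s libexample.so.2.1.0 libexample.so.2    # generic name with only the major version, used at run time
    ln -s libexample.so.2 libexample.so          # fully generic name, used when building against the library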

    Hard links

    In An introduction to Linux's EXT4 filesystem , I discussed the fact that each file has one inode that contains information about that file, including the location of the data belonging to that file. Figure 2 in that article shows a single directory entry that points to the inode. Every file must have at least one directory entry that points to the inode that describes the file. The directory entry is a hard link, thus every file has at least one hard link.

    In Figure 1 below, multiple directory entries point to a single inode. These are all hard links. I have abbreviated the locations of three of the directory entries using the tilde ( ~ ) convention for the home directory, so that ~ is equivalent to /home/user in this example. Note that the fourth directory entry is in a completely different directory, /home/shared , which might be a location for sharing files between users of the computer.

    Figure 1: Multiple directory entries pointing to a single inode

    Hard links are limited to files contained within a single filesystem. "Filesystem" is used here in the sense of a partition or logical volume (LV) that is mounted on a specified mount point, in this case /home . This is because inode numbers are unique only within each filesystem, and a different filesystem, for example, /var or /opt , will have inodes with the same number as the inode for our file.

    Because all the hard links point to the single inode that contains the metadata about the file, all of these attributes are part of the file, such as ownerships, permissions, and the total number of hard links to the inode, and cannot be different for each hard link. It is one file with one set of attributes. The only attribute that can be different is the file name, which is not contained in the inode. Hard links to a single file/inode located in the same directory must have different names, due to the fact that there can be no duplicate file names within a single directory.

    The number of hard links for a file is displayed with the ls -l command. If you want to display the actual inode numbers, the command ls -li does that.

    Symbolic (soft) links

    The difference between a hard link and a soft link, also known as a symbolic link (or symlink), is that, while hard links point directly to the inode belonging to the file, soft links point to a directory entry, i.e., one of the hard links. Because soft links point to a hard link for the file and not the inode, they are not dependent upon the inode number and can work across filesystems, spanning partitions and LVs.

    The downside to this is: If the hard link to which the symlink points is deleted or renamed, the symlink is broken. The symlink is still there, but it points to a hard link that no longer exists. Fortunately, the ls command highlights broken links with flashing white text on a red background in a long listing.

    Lab project: experimenting with links

    I think the easiest way to understand the use of and differences between hard and soft links is with a lab project that you can do. This project should be done in an empty directory as a non-root user . I created the ~/temp directory for this project, and you should, too. It creates a safe place to do the project and provides a new, empty directory to work in so that only files associated with this project will be located there.

    Initial setup

    First, create the temporary directory in which you will perform the tasks needed for this project. Ensure that the present working directory (PWD) is your home directory, then enter the following command.

    mkdir temp
    

    Change into ~/temp to make it the PWD with this command.

    cd temp
    

    To get started, we need to create a file we can link to. The following command does that and provides some content as well.

    du -h > main.file.txt
    

    Use the ls -l long list to verify that the file was created correctly. It should look similar to my results. Note that the file size is only 7 bytes, but yours may vary by a byte or two.

    [dboth@david temp]$ ls -l
    total 4
    -rw-rw-r-- 1 dboth dboth 7 Jun 13 07:34 main.file.txt

    Notice the number "1" following the file mode in the listing. That number represents the number of hard links that exist for the file. For now, it should be 1 because we have not created any additional links to our test file.

    Experimenting with hard links

    Hard links create a new directory entry pointing to the same inode, so when hard links are added to a file, you will see the number of links increase. Ensure that the PWD is still ~/temp . Create a hard link to the file main.file.txt , then do another long list of the directory.

    [dboth@david temp]$ ln main.file.txt link1.file.txt
    [dboth@david temp]$ ls -l
    total 8
    -rw-rw-r-- 2 dboth dboth 7 Jun 13 07:34 link1.file.txt
    -rw-rw-r-- 2 dboth dboth 7 Jun 13 07:34 main.file.txt

    Notice that both files have two links and are exactly the same size. The date stamp is also the same. This is really one file with one inode and two links, i.e., directory entries to it. Create a second hard link to this file and list the directory contents. You can create the link to either of the existing ones: link1.file.txt or main.file.txt .

    [dboth@david temp]$ ln link1.file.txt link2.file.txt ; ls -l
    total 16
    -rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 link1.file.txt
    -rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 link2.file.txt
    -rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 main.file.txt