Softpanorama

May the source be with you, but remember the KISS principle ;-)
Skepticism and critical thinking are not a panacea, but can help to understand the world better

Enterprise Linux sickness with overcomplexity:
slightly skeptical view on enterprise Linux distributions


Introduction

Imagine a language in which both grammar and vocabulary change every three to five years. And both are so huge that they are beyond any normal human's comprehension. You can learn some subset of both vocabulary and grammar when you work closely with a particular subsystem for several months in a row, only to forget it after a couple of months or quarters. The classic example here is RHEL kickstart.

In a sense, all talk about Linux security is a joke, as you can't secure an OS that is far, far beyond your ability to comprehend. So state-sponsored hackers will always have an edge in breaking into Linux.

Linux became too complex for a single person to master. It is now yet another monstrous OS that nobody knows well (its sheer size puts it far beyond the capabilities of mere mortals). And that's the problem. Both Red Hat and Suse are now software development companies that can be called "overcomplexity junkies," and it shows in their recent products. Actually, SLES is even worse than RHEL in this respect, despite being (originally) a German distribution.

Generally, in Linux administration (as previously in enterprise Unix administration) you get what you pay for. Nothing can replace multi-year experience, and experience is often acquired by making expensive mistakes (see Admin Horror Stories). Vendor training is expensive and is more or less available only to sysadmins in a few industries (the financial industry is one). With Red Hat we have a situation that closely resembles the one well known from Solaris: the training is rather good, but the prices are exorbitant.

Due to the current complexity (or, more correctly, overcomplexity) of Linux environments, most sysadmins can master only the commonly used subsystems, and only for one flavor of Linux. The better ones might be able to support two (with a highly asymmetrical level of skills, usually being considerably more proficient in one flavor than in the other). In other words, the Unix wars are now being replayed on Linux turf with a vengeance.

The level of mental overload and frustration caused by the overcomplexity of the two major enterprise Linux flavors (RHEL and SLES) is such that people are ready for a change. Note that in an OS ecosystem there is a natural tendency toward monopoly -- nothing succeeds like success -- and the critical mass of installations that those two "monstrously complex" Linux distributions hold prevents any escape, especially in enterprise environments. Red Hat can essentially dictate what Linux should be -- as it did by incorporating systemd into RHEL 7.

Still, there is a large difference in popularity between RHEL and SLES.

Ubuntu -- a dumbed-down Linux based on Debian, with some strange design decisions -- is now getting some corporate sales, especially in cloud environments, at the expense of Suse. It is still mainly a desktop OS, but it is gradually acquiring some enterprise share too. That makes the number of enterprise Linux distributions close to what we used to have in the commercial Unix space (Solaris, AIX, and HP-UX), with Debian/Ubuntu playing the role of Solaris.

Troubles with SELinux

Until recently, SLES was slightly simpler than RHEL, as it did not include the horribly complex security subsystem that RHEL uses: SELinux. It takes a lot of effort to learn even the basics of SELinux and to properly configure a single Internet-facing server. Most sysadmins just use it blindly, either enabling or disabling it without understanding any details of its functioning (or, more correctly, understanding it only at the level that lets them use common protocols, much as is the case with firewalls).

Actually, Linux had a better solution, used in SLES: AppArmor. It was a pretty elegant solution to a complex problem, if you ask me. But the critical mass of installations and the market share secured by Red Hat made SELinux the "king of the hill" and prevented AppArmor from becoming the Linux standard. As a result, SUSE was forced to incorporate SELinux.

SELinux provides a Mandatory Access Control (MAC) system built into the Linux kernel (that is, the kind of scheme that labels things "super secret," "secret," and "confidential," which three-letter agencies use to guard information). Historically, Security Enhanced Linux (SELinux) was an open source project sponsored by the National Security Agency. Despite the user-friendly GUI, SELinux is difficult to configure and hard to understand, and the documentation does not help much either. Most administrators simply turn the SELinux subsystem off during the initial install, but for an Internet-facing server you need to configure and use it, or... And sometimes the effects can be really subtle: for example, you can log in as root using password authentication but not with a passwordless ssh certificate. That's why many complex applications, especially in the HPC area, explicitly recommend disabling SELinux as a starting point of the installation. You can find articles on the Web devoted to this topic.
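
For reference, a minimal sketch of the standard tooling involved (the permissive-mode switch is the usual first diagnostic step; nothing here is distribution-specific):

   getenforce                  # prints Enforcing, Permissive, or Disabled
   sestatus                    # more detailed status
   setenforce 0                # switch to permissive until reboot (run as root)
   # To make the change persistent, set SELINUX=permissive (or disabled)
   # in /etc/selinux/config and reboot.
   ausearch -m avc -ts recent  # review recent denials before blaming SELinux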

SELinux produces some very interesting errors (see, for example, http://bugs.mysql.com/bug.php?id=12676) and is not very compatible with some subsystems and complex applications. Especially telling is this comment on the blog post How to disable SELinux in RHEL 5:

Aeon said... @ May 13, 2008 2:34 PM
 
Thanks a million! I was dealing with a samba refusing to access the server shared folders. After about 2 hours of scrolling forums I found out the issue may be this shitty thing samba_selinux.

I usually disable it when I install, but this time I had to use the Dell utilities (no choice at all) and they enabled the thing. Disabled it your way, rebooted and it works as I wanted it. Thanks again!

SLES has one significant defect: by default, it does not assign each user a unique group the way RHEL does. But this can be fixed with a simple wrapper for the useradd command. In its simplest form it can be just:

   # Wrapper for the useradd command.
   # Accepts two arguments: UID and user name, for example:
   #    uadd 3333 joedoers

   function uadd
   {
      groupadd -g "$1" "$2"
      useradd -u "$1" -g "$1" -m "$2"
   }
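
A quick check that the wrapper behaves as intended (an illustrative session; assumes UID/GID 3333 are free):

   uadd 3333 joedoers
   id joedoers
   uid=3333(joedoers) gid=3333(joedoers) groups=3333(joedoers)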

Working closely with commercial Linuxes and seeing all their warts, one instantly understands that traditional Open Source (GPL-based Open Source) is a very problematic business model. Historically (especially in the case of Red Hat), it was used as a smoke screen for VCs to get software engineers to work for free -- not even for minimum wage, but for free -- and to grab as much money from suckers as they can, using all the right words as an anesthetic. Essentially, they take the engineers' hard work, pump $$$ into marketing, and either sell the resulting company to one of their other portfolio companies or take it public and dump the shares on the public. Meanwhile, the software engineers who worked to develop that software for free -- aka slave labor -- get $0.00 for their hard work, while the VCs, the startup's top brass, and the investment bankers make a killing.

And of course, they then get their buddies in the mainstream media to hype GPL-based Open Source development as the best thing since sliced bread.

Licensing

RHEL licensing is a mess too. In addition, the two higher-level licenses are expensive and make a Microsoft server license look very competitive. Recently Red Hat went the "IBM way" and started charging different prices for 4-socket servers: with their new registration manager, you can't just use two 2-socket licenses to cover a 4-socket server. The next step will be classic IBM per-core licensing; that's why so many people passionately hate IBM.

There are three different types of licensing (let's call them patches-only, regular, and premium support). Each has several variations (for example, the HPC computational-node license is a variant of the patches-only license, but it does not provide a GUI and lacks many packages in the repository). The level of tech support with the latter two (which are the truly enterprise licenses) is very similar -- similarly dismal -- especially for complex problems, unless you press them really hard.

In addition, Red Hat screwed up their portal so badly that you can't tell which server is assigned to which license. That situation improved with the registration manager, but new problems arose.

Generally, the level of screw-up of the RHEL user portal is such that there are doubts they can do anything useful in Linux space in the future, other than trying to hold on to their market share.

All in all, RHEL 6 is very complex but still a usable enterprise Linux distribution, because it did not change radically from RHEL 4 and 5. But it is not fun to use anymore. It's a pain. It's a headache. The same is true for SLES.

For RHEL 7, stronger words are applicable.



Old News ;-)


[Aug 18, 2019] Oracle Linux 7 Update 7 Released

Oracle Linux 7 Update 7 features many of the same changes as RHEL 7.7, but also adds an updated Unbreakable Enterprise Kernel Release 5 based on Linux 4.14.35, with many extra patches compared to RHEL 7's default Linux 3.10-based kernel.
Oracle Linux can be used for free, with paid support if needed, so it is a more flexible option than CentOS -- at least for companies that have no money to buy RHEL and are reluctant to run CentOS because some of their applications are not supported on it.
Aug 15, 2019 | blogs.oracle.com

Oracle Linux 7 Update 7 ships with the following kernel packages, which include bug fixes, security fixes, and enhancements:
•Unbreakable Enterprise Kernel (UEK) Release 5 (kernel-uek-4.14.35-1902.3.2.el7) for x86-64 and aarch64
•Red Hat Compatible Kernel (RHCK) (kernel-3.10.0-1062.el7) for x86-64 only

Oracle Linux maintains user space compatibility with Red Hat Enterprise Linux (RHEL), which is independent of the kernel version that underlies the operating system. Existing applications in user space will continue to run unmodified on Oracle Linux 7 Update 7 with UEK Release 5 and no re-certifications are needed for applications already certified with Red Hat Enterprise Linux 7 or Oracle Linux 7.
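
To confirm which of the two kernels a given box is actually running, standard commands suffice (a quick sketch; nothing Oracle-specific assumed):

$ uname -r                  # 4.14.35-... indicates UEK R5, 3.10.0-... indicates RHCK
$ rpm -qa 'kernel*' | sort  # list every installed kernel package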

[Aug 18, 2019] Hundreds of Thousands of People Are Using Passwords That Have Already Been Hacked, Google Says

Aug 18, 2019 | tech.slashdot.org

(vice.com)

Posted by msmash on Friday August 16, 2019 @12:45PM

A new Google study this week confirmed the obvious: internet users need to stop using the same password for multiple websites unless they're keen on having their data hijacked, their identity stolen, or worse. From a report: It seems like not a day goes by without a major company being hacked or leaving user email addresses and passwords exposed to the public internet. These login credentials are then routinely used by hackers to hijack your accounts, a threat that's largely mitigated by using a password manager and unique password for each site you visit. Sites like "have I been pwned?" can help users track if their data has been exposed, and whether they need to worry about their credentials bouncing around the dark web. But it's still a confusing process for many users unsure of which passwords need updating.

To that end, last February Google unveiled a new experimental Password Checkup extension for Chrome . The extension warns you any time you log into a website using one of over 4 billion publicly-accessible usernames and passwords that have been previously exposed by a major hack or breach, and prompts you to change your password when necessary. The extension was built in concert with cryptography experts at Stanford University to ensure that Google never learns your usernames or passwords, the company says in an explainer. Anonymous telemetry data culled from the extension has provided Google with some interesting information on how widespread the practice of account hijacking and non-unique passwords really is.

[Aug 04, 2019] 10 YAML tips for people who hate YAML Enable SysAdmin

Aug 04, 2019 | www.redhat.com

10 YAML tips for people who hate YAML

Do you hate YAML? These tips might ease your pain.

Posted June 10, 2019 | by Seth Kenlon (Red Hat)

There are lots of formats for configuration files: a list of values, key and value pairs, INI files, YAML, JSON, XML, and many more. Of these, YAML sometimes gets cited as a particularly difficult one to handle for a few different reasons. While its ability to reflect hierarchical values is significant and its minimalism can be refreshing to some, its Python-like reliance upon syntactic whitespace can be frustrating.

However, the open source world is diverse and flexible enough that no one has to suffer through abrasive technology, so if you hate YAML, here are 10 things you can (and should!) do to make it tolerable. Starting with zero, as any sensible index should.

0. Make your editor do the work

Whatever text editor you use probably has plugins to make dealing with syntax easier. If you're not using a YAML plugin for your editor, find one and install it. The effort you spend on finding a plugin and configuring it as needed will pay off tenfold the very next time you edit YAML.

For example, the Atom editor comes with a YAML mode by default, and while GNU Emacs ships with minimal support, you can add additional packages like yaml-mode to help.

Emacs in YAML and whitespace mode.

If your favorite text editor lacks a YAML mode, you can address some of your grievances with small configuration changes. For instance, the default text editor for the GNOME desktop, Gedit, doesn't have a YAML mode available, but it does provide YAML syntax highlighting by default and features configurable tab width:

Configuring tab width and type in Gedit.

With the drawspaces Gedit plugin package, you can make white space visible in the form of leading dots, removing any question about levels of indentation.

Take some time to research your favorite text editor. Find out what the editor, or its community, does to make YAML easier, and leverage those features in your work. You won't be sorry.

1. Use a linter

Ideally, programming languages and markup languages use predictable syntax. Computers tend to do well with predictability, so the concept of a linter was invented in 1978. If you're not using a linter for YAML, then it's time to adopt this 40-year-old tradition and use yamllint .

You can install yamllint on Linux using your distribution's package manager. For instance, on Red Hat Enterprise Linux 8 or Fedora :

$ sudo dnf install yamllint

Invoking yamllint is as simple as telling it to check a file. Here's an example of yamllint 's response to a YAML file containing an error:

$ yamllint errorprone.yaml
errorprone.yaml
23:10     error    syntax error: mapping values are not allowed here
23:11     error    trailing spaces  (trailing-spaces)

That's not a time stamp on the left. It's the error's line and column number. You may or may not understand what error it's talking about, but now you know the error's location. Taking a second look at the location often makes the error's nature obvious. Success is eerily silent, so if you want feedback based on the lint's success, you can add a conditional second command with a double-ampersand ( && ). In a POSIX shell, && fails if a command returns anything but 0, so upon success, your echo command makes that clear. This tactic is somewhat superficial, but some users prefer the assurance that the command did run correctly, rather than failing silently. Here's an example:

$ yamllint perfect.yaml && echo "OK"
OK

The reason yamllint is so silent when it succeeds is that it returns 0 errors when there are no errors.
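
If the default rules feel too strict while you're cleaning up a file, yamllint can also take an alternate configuration right on the command line; for example, its bundled relaxed preset:

$ yamllint -d relaxed errorprone.yaml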

2. Write in Python, not YAML

If you really hate YAML, stop writing in YAML, at least in the literal sense. You might be stuck with YAML because that's the only format an application accepts, but if the only requirement is to end up in YAML, then work in something else and then convert. Python, along with the excellent pyyaml library, makes this easy, and you have two methods to choose from: self-conversion or scripted.

Self-conversion

In the self-conversion method, your data files are also Python scripts that produce YAML. This works best for small data sets. Just write your JSON data into a Python variable, prepend an import statement, and end the file with a simple three-line output statement.

#!/usr/bin/python3	
import yaml 

d={
"glossary": {
  "title": "example glossary",
  "GlossDiv": {
	"title": "S",
	"GlossList": {
	  "GlossEntry": {
		"ID": "SGML",
		"SortAs": "SGML",
		"GlossTerm": "Standard Generalized Markup Language",
		"Acronym": "SGML",
		"Abbrev": "ISO 8879:1986",
		"GlossDef": {
		  "para": "A meta-markup language, used to create markup languages such as DocBook.",
		  "GlossSeeAlso": ["GML", "XML"]
		  },
		"GlossSee": "markup"
		}
	  }
	}
  }
}

f=open('output.yaml','w')
f.write(yaml.dump(d))
f.close()

Run the file with Python to produce a file called output.yaml.

$ python3 ./example.json
$ cat output.yaml
glossary:
  GlossDiv:
	GlossList:
	  GlossEntry:
		Abbrev: ISO 8879:1986
		Acronym: SGML
		GlossDef:
		  GlossSeeAlso: [GML, XML]
		  para: A meta-markup language, used to create markup languages such as DocBook.
		GlossSee: markup
		GlossTerm: Standard Generalized Markup Language
		ID: SGML
		SortAs: SGML
	title: S
  title: example glossary

This output is perfectly valid YAML, although yamllint does issue a warning that the file is not prefaced with --- , which is something you can adjust either in the Python script or manually.

Scripted conversion

In this method, you write in JSON and then run a Python conversion script to produce YAML. This scales better than self-conversion, because it keeps the converter separate from the data.

Create a JSON file and save it as example.json . Here is an example from json.org :

{
	"glossary": {
	  "title": "example glossary",
	  "GlossDiv": {
		"title": "S",
		"GlossList": {
		  "GlossEntry": {
			"ID": "SGML",
			"SortAs": "SGML",
			"GlossTerm": "Standard Generalized Markup Language",
			"Acronym": "SGML",
			"Abbrev": "ISO 8879:1986",
			"GlossDef": {
			  "para": "A meta-markup language, used to create markup languages such as DocBook.",
			  "GlossSeeAlso": ["GML", "XML"]
			  },
			"GlossSee": "markup"
			}
		  }
		}
	  }
	}

Create a simple converter and save it as json2yaml.py . This script imports both the YAML and JSON Python modules, loads a JSON file defined by the user, performs the conversion, and then writes the data to output.yaml .

#!/usr/bin/python3
import yaml
import sys
import json

OUT=open('output.yaml','w')
IN=open(sys.argv[1], 'r')

JSON = json.load(IN)
IN.close()
yaml.dump(JSON, OUT)
OUT.close()

Save this script in your system path, and execute as needed:

$ ~/bin/json2yaml.py example.json
3. Parse early, parse often

Sometimes it helps to look at a problem from a different angle. If your problem is YAML, and you're having a difficult time visualizing the data's relationships, you might find it useful to restructure that data, temporarily, into something you're more familiar with.

If you're more comfortable with dictionary-style lists or JSON, for instance, you can convert YAML to JSON in two commands using an interactive Python shell. Assume your YAML file is called mydata.yaml .

$ python3
>>> import yaml
>>> f=open('mydata.yaml','r')
>>> yaml.load(f)
{'document': 34843, 'date': datetime.date(2019, 5, 23), 'bill-to': {'given': 'Seth', 'family': 'Kenlon', 'address': {'street': '51b Mornington Road\n', 'city': 'Brooklyn', 'state': 'Wellington', 'postal': 6021, 'country': 'NZ'}}, 'words': 938, 'comments': 'Good article. Could be better.'}

There are many other examples, and there are plenty of online converters and local parsers, so don't hesitate to reformat data when it starts to look more like a laundry list than markup.
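
If you want real JSON out of that exchange, a shell one-liner does the conversion too (a sketch, assuming PyYAML is installed; default=str keeps date objects from tripping up the JSON encoder):

$ python3 -c 'import sys, json, yaml; json.dump(yaml.safe_load(sys.stdin), sys.stdout, indent=2, default=str)' < mydata.yaml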

4. Read the spec

After I've been away from YAML for a while and find myself using it again, I go straight back to yaml.org to re-read the spec. If you've never read the specification for YAML and you find YAML confusing, a glance at the spec may provide the clarification you never knew you needed. The specification is surprisingly easy to read, with the requirements for valid YAML spelled out with lots of examples in chapter 6 .

5. Pseudo-config

Before I started writing my book, Developing Games on the Raspberry Pi (Apress, 2019), the publisher asked me for an outline. You'd think an outline would be easy. By definition, it's just the titles of chapters and sections, with no real content. And yet, out of the 300 pages published, the hardest part to write was that initial outline.

YAML can be the same way. You may have a notion of the data you need to record, but that doesn't mean you fully understand how it's all related. So before you sit down to write YAML, try doing a pseudo-config instead.

A pseudo-config is like pseudo-code. You don't have to worry about structure or indentation, parent-child relationships, inheritance, or nesting. You just create iterations of data in the way you currently understand it inside your head.

A pseudo-config.

Once you've got your pseudo-config down on paper, study it, and transform your results into valid YAML.

6. Resolve the spaces vs. tabs debate

OK, maybe you won't definitively resolve the spaces-vs-tabs debate , but you should at least resolve the debate within your project or organization. Whether you resolve this question with a post-process sed script, text editor configuration, or a blood-oath to respect your linter's results, anyone in your team who touches a YAML project must agree to use spaces (in accordance with the YAML spec).

Any good text editor allows you to define a number of spaces instead of a tab character, so the choice shouldn't negatively affect fans of the Tab key.

Tabs and spaces are, as you probably know all too well, essentially invisible. And when something is out of sight, it rarely comes to mind until the bitter end, when you've tested and eliminated all of the "obvious" problems. An hour wasted to an errant tab or group of spaces is your signal to create a policy to use one or the other, and then to develop a fail-safe check for compliance (such as a Git hook to enforce linting).
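
A minimal sketch of such a hook (saved as .git/hooks/pre-commit and made executable; assumes yamllint is on the PATH):

#!/bin/sh
# Reject the commit if any staged YAML file fails the linter
for f in $(git diff --cached --name-only -- '*.yml' '*.yaml'); do
    yamllint "$f" || exit 1
done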

7. Less is more (or more is less)

Some people like to write YAML to emphasize its structure. They indent vigorously to help themselves visualize chunks of data. It's a sort of cheat to mimic markup languages that have explicit delimiters.

Here's a good example from Ansible's documentation :

# Employee records
-  martin:
        name: Martin D'vloper
        job: Developer
        skills:
            - python
            - perl
            - pascal
-  tabitha:
        name: Tabitha Bitumen
        job: Developer
        skills:
            - lisp
            - fortran
            - erlang

For some users, this approach is a helpful way to lay out a YAML document, while other users miss the structure for the void of seemingly gratuitous white space.

If you own and maintain a YAML document, then you get to define what "indentation" means. If blocks of horizontal white space distract you, then use the minimal amount of white space required by the YAML spec. For example, the same YAML from the Ansible documentation can be represented with fewer indents without losing any of its validity or meaning:

---
- martin:
   name: Martin D'vloper
   job: Developer
   skills:
   - python
   - perl
   - pascal
- tabitha:
   name: Tabitha Bitumen
   job: Developer
   skills:
   - lisp
   - fortran
   - erlang
8. Make a recipe

I'm a big fan of repetition breeding familiarity, but sometimes repetition just breeds repeated stupid mistakes. Luckily, a clever peasant woman experienced this very phenomenon back in 396 AD (don't fact-check me), and invented the concept of the recipe .

If you find yourself making YAML document mistakes over and over, you can embed a recipe or template in the YAML file as a commented section. When you're adding a section, copy the commented recipe and overwrite the dummy data with your new real data. For example:

---
# - <common name>:
#   name: Given Surname
#   job: JOB
#   skills:
#   - LANG
- martin:
  name: Martin D'vloper
  job: Developer
  skills:
  - python
  - perl
  - pascal
- tabitha:
  name: Tabitha Bitumen
  job: Developer
  skills:
  - lisp
  - fortran
  - erlang
9. Use something else

I'm a fan of YAML, generally, but sometimes YAML isn't the answer. If you're not locked into YAML by the application you're using, then you might be better served by some other configuration format. Sometimes config files outgrow themselves and are better refactored into simple Lua or Python scripts.

YAML is a great tool and is popular among users for its minimalism and simplicity, but it's not the only tool in your kit. Sometimes it's best to part ways. One of the benefits of YAML is that parsing libraries are common, so as long as you provide migration options, your users should be able to adapt painlessly.

If YAML is a requirement, though, keep these tips in mind and conquer your YAML hatred once and for all!

[Aug 04, 2019] Ansible IT automation for everybody Enable SysAdmin

Aug 04, 2019 | www.redhat.com


Ansible: IT automation for everybody

Kick the tires with Ansible and start automating with these simple tasks.

Posted July 31, 2019 | by Jörg Kastning


Ansible is an open source tool for software provisioning, application deployment, orchestration, configuration, and administration. Its purpose is to help you automate your configuration processes and simplify the administration of multiple systems. Thus, Ansible essentially pursues the same goals as Puppet, Chef, or Saltstack.

What I like about Ansible is that it's flexible, lean, and easy to start with. In most use cases, it keeps the job simple.

I chose to use Ansible back in 2016 because no agent has to be installed on the managed nodes -- a node is what Ansible calls a managed remote system. All you need to start managing a remote system with Ansible is SSH access to the system, and Python installed on it. Python is preinstalled on most Linux systems, and I was already used to managing my hosts via SSH, so I was ready to start right away. And if the day comes where I decide not to use Ansible anymore, I just have to delete my Ansible controller machine (control node) and I'm good to go. There are no agents left on the managed nodes that have to be removed.

Ansible offers two ways to control your nodes. The first one uses playbooks . These are simple ASCII files written in Yet Another Markup Language (YAML) , which is easy to read and write. And second, there are the ad-hoc commands , which allow you to run a command or module without having to create a playbook first.

You organize the hosts you would like to manage and control in an inventory file, which offers flexible format options. For example, this could be an INI-like file that looks like:

mail.example.com

[webservers]
foo.example.com
bar.example.com

[dbservers]
one.example.com
two.example.com
three.example.com

[site1:children]
webservers
dbservers
Examples

I would like to give you two small examples of how to use Ansible. I started with these really simple tasks before I used Ansible to take control of more complex tasks in my infrastructure.

Ad-hoc: Check if Ansible can remote manage a system

As you might recall from the beginning of this article, all you need to manage a remote host is SSH access to it, and a working Python interpreter on it. To check if these requirements are fulfilled, run the following ad-hoc command against a host from your inventory:

[jkastning@ansible]$ ansible mail.example.com -m ping
mail.example.com | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
Playbook: Keep installed packages up to date

This example shows how to use a playbook to keep installed packages up to date. The playbook is an ASCII text file which looks like this:

---
# Make sure all packages are up to date
- name: Update your system
  hosts: mail.example.com
  tasks:
  - name: Make sure all packages are up to date
    yum:
      name: "*"
      state: latest

Now, we are ready to run the playbook:

[jkastning@ansible]$ ansible-playbook yum_update.yml 

PLAY [Update your system] **************************************************************************

TASK [Gathering Facts] *****************************************************************************
ok: [mail.example.com]

TASK [Make sure all packages are up to date] *******************************************************
ok: [mail.example.com]

PLAY RECAP *****************************************************************************************
mail.example.com : ok=2    changed=0    unreachable=0    failed=0

Here everything is ok and there is nothing else to do. All installed packages are already the latest version.
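
For a one-off, the same update can be done ad hoc, without writing a playbook (a sketch using the stock yum module; --become escalates privileges on the managed node):

[jkastning@ansible]$ ansible mail.example.com -m yum -a "name=* state=latest" --become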

It's simple: Try and use it

The examples above are quite simple and should only give you a first impression. But, from the start, it did not take me long to use Ansible for more complex tasks like the Poor Man's RHEL Mirror or the Ansible Role for RHEL Patchmanagment .

Today, Ansible saves me a lot of time and supports my day-to-day work tasks quite well. So what are you waiting for? Try it, use it, and feel a bit more comfortable at work.

Jörg Kastning has been a sysadmin for over ten years. He is a member of the Red Hat Accelerators and runs his own blog at https://www.my-it-brain.de.


[Aug 03, 2019] Creating Bootable Linux USB Drive with Etcher

Aug 03, 2019 | linuxize.com

There are several different applications available for free use which will allow you to flash ISO images to USB drives. In this example, we will use Etcher. It is a free and open-source utility for flashing images to SD cards & USB drives and supports Windows, macOS, and Linux.

Head over to the Etcher downloads page , and download the most recent Etcher version for your operating system. Once the file is downloaded, double-click on it and follow the installation wizard.

Creating Bootable Linux USB Drive using Etcher is a relatively straightforward process, just follow the steps outlined below:

  1. Connect the USB flash drive to your system and Launch Etcher.
  2. Click on the Select image button and locate the distribution .iso file.
  3. If only one USB drive is attached to your machine, Etcher will automatically select it. Otherwise, if more than one SD card or USB drive is connected, make sure you have selected the correct USB drive before flashing the image.
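
If you prefer the command line to a GUI, the classic alternative is dd (a sketch; /dev/sdX is a placeholder -- verify it with lsblk first, since dd will happily overwrite the wrong disk):

$ lsblk                                                       # identify the USB drive
$ sudo dd if=linux.iso of=/dev/sdX bs=4M status=progress conv=fsync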

[Jul 30, 2019] FreeDOS turns 25 years old by Jim Hall

Jul 28, 2019 | opensource.com

FreeDOS turns 25 years old: An origin story

The operating system's history is a great example of the open source software model: developers working together to create something.


June 29 marked 25 years of FreeDOS. That's a major milestone for any open source software project, and I'm proud of the work that we've done on it over the past quarter century. I'm also proud of how we built FreeDOS, because it is a great example of how the open source software model works.

For its time, MS-DOS was a powerful operating system. I'd used DOS for years, ever since my parents replaced our aging Apple II computer with a newer IBM machine. MS-DOS provided a flexible command line, which I quite liked and that came in handy to manipulate my files. Over the years, I learned how to write my own utilities in C to expand its command-line capabilities even further.

Around 1994, Microsoft announced that its next planned version of Windows would do away with MS-DOS. But I liked DOS. Even though I had started migrating to Linux, I still booted into MS-DOS to run applications that Linux didn't have yet.

I figured that if we wanted to keep DOS, we would need to write our own. And that's how FreeDOS was born.

On June 29, 1994, I made a small announcement about my idea to the comp.os.msdos.apps newsgroup on Usenet.

ANNOUNCEMENT OF PD-DOS PROJECT:
A few months ago, I posted articles relating to starting a public domain version of DOS. The general support for this at the time was strong, and many people agreed with the statement, "start writing!" So, I have

Announcing the first effort to produce a PD-DOS. I have written up a "manifest" describing the goals of such a project and an outline of the work, as well as a "task list" that shows exactly what needs to be written. I'll post those here, and let discussion follow.

While I announced the project as PD-DOS (for "public domain," although the abbreviation was meant to mimic IBM's "PC-DOS"), we soon changed the name to Free-DOS and later FreeDOS.

I started working on it right away. First, I shared the utilities I had written to expand the DOS command line. Many of them reproduced MS-DOS features, including CLS, DATE, DEL, FIND, HELP, and MORE. Some added new features to DOS that I borrowed from Unix, such as TEE and TRCH (a simple implementation of Unix's tr). I contributed over a dozen FreeDOS utilities.

By sharing my utilities, I gave other developers a starting point. And by sharing my source code under the GNU General Public License (GNU GPL), I implicitly allowed others to add new features and fix bugs.

Other developers who saw FreeDOS taking shape contacted me and wanted to help. Tim Norman was one of the first; Tim volunteered to write a command shell (COMMAND.COM, later named FreeCOM). Others contributed utilities that replicated or expanded the DOS command line.

We released our first alpha version as soon as possible. Less than three months after announcing FreeDOS, we had an Alpha 1 distribution that collected our utilities. By the time we released Alpha 5, FreeDOS boasted over 60 utilities. And FreeDOS included features never imagined in MS-DOS, including internet connectivity via a PPP dial-up driver and dual-monitor support using a primary VGA monitor and a secondary Hercules Mono monitor.

New developers joined the project, and we welcomed them. By October 1998, FreeDOS had a working kernel, thanks to Pat Villani. FreeDOS also sported a host of new features that brought not just parity with MS-DOS but surpassed MS-DOS, including ANSI support and a print spooler that resembled Unix lpr.

You may be familiar with other milestones. We crept our way towards the 1.0 label, finally releasing FreeDOS 1.0 in September 2006, FreeDOS 1.1 in January 2012, and FreeDOS 1.2 in December 2016. MS-DOS stopped being a moving target long ago, so we didn't need to update as frequently after the 1.0 release.

Today, FreeDOS is a very modern DOS. We've moved beyond "classic DOS," and now FreeDOS features lots of development tools such as compilers, assemblers, and debuggers. We have lots of editors beyond the plain DOS Edit editor, including Fed, Pico, TDE, and versions of Emacs and Vi. FreeDOS supports networking and even provides a simple graphical web browser (Dillo). And we have tons of new utilities, including many that will make Linux users feel at home.

FreeDOS got where it is because developers worked together to create something. In the spirit of open source software, we contributed to each other's work by fixing bugs and adding new features. We treated our users as co-developers; we always found ways to include people, whether they were writing code or writing documentation. And we made decisions through consensus based on merit. If that sounds familiar, it's because those are the core values of open source software: transparency, collaboration, release early and often, meritocracy, and community. That's the open source way !

I encourage you to download FreeDOS 1.2 and give it a try.

[Jul 26, 2019] How To Check Swap Usage Size and Utilization in Linux by Vivek Gite

Jul 26, 2019 | www.cyberciti.biz

The procedure to check swap space usage and size in Linux is as follows:

  1. Open a terminal application.
  2. To see swap size in Linux, type the command: swapon -s .
  3. You can also refer to the /proc/swaps file to see swap areas in use on Linux.
  4. Type free -m to see both your ram and your swap space usage in Linux.
  5. Finally, one can use the top or htop command to look for swap space Utilization on Linux too.
How to Check Swap Space in Linux using /proc/swaps file

Type the following cat command to see total and used swap size:
# cat /proc/swaps
Sample outputs:

Filename                           Type            Size    Used    Priority
/dev/sda3                               partition       6291448 65680   0

Another option is to type the grep command as follows:
grep Swap /proc/meminfo

SwapCached:            0 kB
SwapTotal:        524284 kB
SwapFree:         524284 kB
Look for swap space in Linux using swapon command

Type the following command to show swap usage summary by device
# swapon -s
Sample outputs:

Filename                           Type            Size    Used    Priority
/dev/sda3                               partition       6291448 65680   0
Use free command to monitor swap space usage

Use the free command as follows:
# free -g
# free -k
# free -m

Sample outputs:

             total       used       free     shared    buffers     cached
Mem:         11909      11645        264          0        324       8980
-/+ buffers/cache:       2341       9568
Swap:         6143         64       6079
See swap size in Linux using vmstat command

Type the following vmstat command:
# vmstat
# vmstat 1 5

... ... ...
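
One more angle worth knowing: finding which processes are actually sitting in swap (a sketch; it relies on the VmSwap field that modern kernels expose in /proc/<pid>/status):

# Print swap usage in kB and process name, biggest consumers last
for f in /proc/[0-9]*/status; do
    awk '/^Name:/{n=$2} /^VmSwap:/{print $2, n}' "$f"
done | sort -n | tail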

Vivek Gite is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting. Get the latest tutorials on SysAdmin, Linux/Unix and open source topics via RSS/XML feed or weekly email newsletter.

[Jul 26, 2019] The day the virtual machine manager died by Nathan Lager

"Dangerous" commands like dd should probably be always typed first in the editor and only when you verity that you did not make a blunder , executed...
A good decision was to go home and think the situation over, not to aggravate it with impulsive attempts to correct the situation, which typically only make it worse.
Lack of checking of the health of backups suggest that this guy is an arrogant sucker, despite his 20 years of sysadmin experience.
Notable quotes:
"... I started dd as root , over the top of an EXISTING DISK ON A RUNNING VM. What kind of idiot does that?! ..."
"... Since my VMs were still running, and I'd already done enough damage for one night, I stopped touching things and went home. ..."
Jul 26, 2019 | www.redhat.com

... ... ...

See, my RHEV manager was a VM running on a stand-alone Kernel-based Virtual Machine (KVM) host, separate from the cluster it manages. I had been running RHEV since version 3.0, before hosted engines were a thing, and I hadn't gone through the effort of migrating. I was already in the process of building a new set of clusters with a new manager, but this older manager was still controlling most of our production VMs. It had filled its disk again, and the underlying database had stopped itself to avoid corruption.

See, for whatever reason, we had never set up disk space monitoring on this system. It's not like it was an important box, right?

So, I logged into the KVM host that ran the VM, and started the well-known procedure of creating a new empty disk file, and then attaching it via virsh . The procedure goes something like this: Become root , use dd to write a stream of zeros to a new file, of the proper size, in the proper location, then use virsh to attach the new disk to the already running VM. Then, of course, log into the VM and do your disk expansion.

I logged in, ran sudo -i , and started my work. I ran cd /var/lib/libvirt/images , ran ls -l to find the existing disk images, and then started carefully crafting my dd command:

dd ... bs=1k count=40000000 if=/dev/zero ... of=./vmname-disk ...

Which was the next disk again? <Tab> of=vmname-disk2.img <Back arrow, Back arrow, Back arrow, Back arrow, Backspace> Don't want to dd over the existing disk, that'd be bad. Let's change that 2 to a 3 , and Enter . OH CRAP, I CHANGED THE 2 TO A 2 NOT A 3 ! <Ctrl+C><Ctrl+C><Ctrl+C><Ctrl+C><Ctrl+C><Ctrl+C>

I still get sick thinking about this. I'd done the stupidest thing I possibly could have done, I started dd as root , over the top of an EXISTING DISK ON A RUNNING VM. What kind of idiot does that?! (The kind that's at work late, trying to get this one little thing done before he heads off to see his friend. The kind that thinks he knows better, and thought he was careful enough to not make such a newbie mistake. Gah.)

So, how fast does dd start writing zeros? Faster than I can move my fingers from the Enter key to the Ctrl+C keys. I tried a number of things to recover the running disk from memory, but all I did was make things worse, I think. The system was still up, but still broken like it was before I touched it, so it was useless.

Since my VMs were still running, and I'd already done enough damage for one night, I stopped touching things and went home. The next day I owned up to the boss and co-workers pretty much the moment I walked in the door. We started taking an inventory of what we had, and what was lost. I had taken the precaution of setting up backups ages ago. So, we thought we had that to fall back on.

I opened a ticket with Red Hat support and filled them in on how dumb I'd been. I can only imagine the reaction of the support person when they read my ticket. I worked a help desk for years, I know how this usually goes. They probably gathered their closest coworkers to mourn for my loss, or get some entertainment out of the guy who'd been so foolish. (I say this in jest. Red Hat's support was awesome through this whole ordeal, and I'll tell you how soon. )

So, I figured the next thing I would need from my broken server, which was still running, was the backups I'd diligently been collecting. They were on the VM but on a separate virtual disk, so I figured they were safe. The disk I'd overwritten was the last disk I'd made to expand the volume the database was on, so that logical volume was toast, but I've always set up my servers such that the main mounts -- / , /var , /home , /tmp , and /root -- were all separate logical volumes.

In this case, /backup was an entirely separate virtual disk. So, I scp -r 'd the entire /backup mount to my laptop. It copied, and I felt a little sigh of relief. All of my production systems were still running, and I had my backup. My hope was that these factors would mean a relatively simple recovery: Build a new VM, install RHEV-M, and restore my backup. Simple right?

By now, my boss had involved the rest of the directors, and let them know that we were looking down the barrel of a possibly bad time. We started organizing a team meeting to discuss how we were going to get through this. I returned to my desk and looked through the backups I had copied from the broken server. All the files were there, but they were tiny. Like, a couple hundred kilobytes each, instead of the hundreds of megabytes or even gigabytes that they should have been.

Happy feeling, gone.

Turns out, my backups were running, but at some point after an RHEV upgrade, the database backup utility had changed. Remember how I said this system had existed since version 3.0? Well, 3.0 didn't have an engine-backup utility, so in my RHEV training, we'd learned how to make our own. Mine broke when the tools changed, and for who knows how long, it had been getting an incomplete backup -- just some files from /etc .

No database. Ohhhh ... Fudge. (I didn't say "Fudge.")

I updated my support case with the bad news and started wondering what it would take to break through one of these 4th-floor windows right next to my desk. (Ok, not really.)

At this point, we basically had three RHEV clusters with no manager. One of those was for development work, but the other two were all production. We started using these team meetings to discuss how to recover from this mess. I don't know what the rest of my team was thinking about me, but I can say that everyone was surprisingly supportive and un-accusatory. I mean, with one typo I'd thrown off the entire department. Projects were put on hold and workflows were disrupted, but at least we had time: We couldn't reboot machines, we couldn't change configurations, and couldn't get to VM consoles, but at least everything was still up and operating.

Red Hat support had escalated my SNAFU to an RHEV engineer, a guy I'd worked with in the past. I don't know if he remembered me, but I remembered him, and he came through yet again. About a week in, for some unknown reason (we never figured out why), our Windows VMs started dropping offline. They were still running as far as we could tell, but they dropped off the network, Just boom. Offline. In the course of a workday, we lost about a dozen windows systems. All of our RHEL machines were working fine, so it was just some Windows machines, and not even every Windows machine -- about a dozen of them.

Well great, how could this get worse? Oh right, add a ticking time bomb. Why were the Windows servers dropping off? Would they all eventually drop off? Would the RHEL systems eventually drop off? I made a panicked call back to support, emailed my account rep, and called in every favor I'd ever collected from contacts I had within Red Hat to get help as quickly as possible.

I ended up on a conference call with two support engineers, and we got to work. After about 30 minutes on the phone, we'd worked out the most insane recovery method. We had the newer RHEV manager I mentioned earlier, that was still up and running, and had two new clusters attached to it. Our recovery goal was to get all of our workloads moved from the broken clusters to these two new clusters.

Want to know how we ended up doing it? Well, as our Windows VMs were dropping like flies, the engineers and I came up with this plan. My clusters used a Fibre Channel Storage Area Network (SAN) as their storage domains. We took a machine that was not in use, but had a Fibre Channel host bus adapter (HBA) in it, and attached the logical unit numbers (LUNs) for both the old cluster's storage domains and the new cluster's storage domains to it. The plan there was to make a new VM on the new clusters, attach blank disks of the proper size to the new VM, and then use dd (the irony is not lost on me) to block-for-block copy the old broken VM disk over to the newly created empty VM disk.

I don't know if you've ever delved deeply into an RHEV storage domain, but under the covers it's all Logical Volume Manager (LVM). The problem is, the LV's aren't human-readable. They're just universally-unique identifiers (UUIDs) that the RHEV manager's database links from VM to disk. These VMs are running, but we don't have the database to reference. So how do you get this data?

virsh ...

Luckily, I managed KVM and Xen clusters long before RHEV was a thing that was viable. I was no stranger to libvirt 's virsh utility. With the proper authentication -- which the engineers gave to me -- I was able to virsh dumpxml on a source VM while it was running, get all the info I needed about its memory, disk, CPUs, and even MAC address, and then create an empty clone of it on the new clusters.
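
In practice that step looks something like this (standard libvirt commands; vmname is a placeholder):

   # Capture the full definition of the running VM while it is still in memory
   virsh dumpxml vmname > vmname.xml
   # Pull out what the clone needs: disks, MAC address, memory, CPUs
   grep -E '<source file|<mac address' vmname.xml
   virsh dominfo vmname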

Once I felt everything was perfect, I would shut down the VM on the broken cluster with either virsh shutdown , or by logging into the VM and shutting it down. The catch here is that if I missed something and shut down that VM, there was no way I'd be able to power it back on. Once the data was no longer in memory, the config would be completely lost, since that information is all in the database -- and I'd hosed that. Once I had everything, I'd log into my migration host (the one that was connected to both storage domains) and use dd to copy, bit-for-bit, the source storage domain disk over to the destination storage domain disk. Talk about nerve-wracking, but it worked! We picked one of the broken windows VMs and followed this process, and within about half an hour we'd completed all of the steps and brought it back online.

We did hit one snag, though. See, we'd used snapshots here and there. RHEV snapshots are lvm snapshots. Consolidating them without the RHEV manager was a bit of a chore, and took even more leg work and research before we could dd the disks. I had to mimic the snapshot tree by creating symbolic links in the right places, and then start the dd process. I worked that one out late that evening after the engineers were off, probably enjoying time with their families. They asked me to write the process up in detail later. I suspect that it turned into some internal Red Hat documentation, never to be given to a customer because of the chance of royally hosing your storage domain.

Somehow, over the course of 3 months and probably a dozen scheduled maintenance windows, I managed to migrate every single VM (of about 100 VMs) from the old zombie clusters to the working clusters. This migration included our Zimbra collaboration system (10 VMs in itself), our file servers (another dozen VMs), our Enterprise Resource Planning (ERP) platform, and even Oracle databases.

We didn't lose a single VM and had no more unplanned outages. The Red Hat Enterprise Linux (RHEL) systems, and even some Windows systems, never fell to the mysterious drop-off that those dozen or so Windows servers did early on. During this ordeal, though, I had trouble sleeping. I was stressed out and felt so guilty for creating all this work for my co-workers, I even had trouble eating. No exaggeration, I lost 10lbs.

So, don't be like Nate. Monitor your important systems, check your backups, and for all that's holy, double-check your dd output file. That way, you won't have drama, and can truly enjoy Sysadmin Appreciation Day!
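
The moral, in command form (a sketch; the guard refuses to clobber an existing image, and qemu-img sidesteps dd for this job entirely):

   # Guarded dd: bail out if the target already exists
   target=/var/lib/libvirt/images/vmname-disk3.img
   [ -e "$target" ] && { echo "refusing to overwrite $target"; exit 1; }
   dd if=/dev/zero of="$target" bs=1M count=40960 status=progress

   # Or skip dd altogether when all you need is a new empty disk
   qemu-img create -f raw "$target" 40G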

Nathan Lager is an experienced sysadmin, with 20 years in the industry. He runs his own blog at undrground.org, and hosts the Iron Sysadmin Podcast. More about me

[Jul 13, 2019] Find lost files with Scalpel - Enable SysAdmin

Jul 13, 2019 | www.redhat.com

As a system administrator, part of your responsibility is to help users manage their data. One of the vital aspects of doing that is to ensure your organization has a good backup plan, and that your users either make their backups regularly, or else don't have to because you've automated the process.

However, sometimes the worst happens. A file gets deleted by mistake, a filesystem becomes corrupt, or a partition gets lost, and for whatever reason, the backups don't contain what you need.

As we discussed in How to prevent and recover from accidental file deletion in Linux , before trying to recover lost data, you must find out why the data is missing in the first place. It's possible that a user has simply misplaced the file, or that there is a backup that the user isn't aware of. But if a user has indeed removed a file with no backups, then you know you need to recover a deleted file. If a partition table has become scrambled, though, then the files aren't really lost at all, and you might want to consider using TestDisk to recover the partition table, or the partition itself.

What happens if your file or partition recovery isn't successful, or is only in part? Then it's time for Scalpel . Scalpel performs file carving operations based on patterns describing unique file types. It looks for these patterns based on binary strings and regular expressions, and then extracts the file accordingly.

This tool isn't currently being maintained, but it's ever-reliable, compiling and running exactly as expected. If you're running Red Hat Enterprise Linux (RHEL) 7, RHEL 8, or Fedora , you can download Scalpel's RPM installers, along with its dependency, libtre , from klaatu.fedorapeople.org .

Starting with Scalpel

Scalpel comes bundled with a comprehensive list of file types and their most unique identifying features. Sometimes, a file can be identified by predictable text at its head and tail:

htm    n    50000   <html         </html>

While at other times, cryptic-looking hex codes are necessary:

jpg    y   200000000    \xff\xd8\xff\xe0\x00\x10    \xff\xd9

Scalpel expects you to duplicate /etc/scalpel.conf and edit your copy to include the file types you hope to recover, and to exclude the file types you know you don't need. For instance, if you know you don't have or care about .fws files, comment that line out of the file. Doing this can speed up the recovery process and reduce false positives.

In the configuration file, the format of a file definition is, from left to right: the file extension, a case-sensitivity flag (y or n), the maximum file size in bytes, a header pattern, and an optional footer pattern.

The footer field is optional. If no footer is provided, then Scalpel extracts the number of bytes you set as the file type's maximum value.

You might find that a recovery effort only rescues part of a file, such as this mostly-recovered JPG:

This result means that you probably need to increase the file's bounds maximum value, and then re-scan, so that the end of the file can be recovered, too.

Defining new file types

First, make a copy of the Scalpel configuration file. If all your users generate similar data, then you may only need one config file for your entire organization. Or, you might find it better to have one config file per department.

To add your own file types to a Scalpel config, start with some investigative forensics.

For text files, you ideally have some predictable structure you can anticipate. For instance, an XML file probably starts with <xml and ends with </xml . Binary files are similarly predictable. Using the hexdump command, you can view a typical header from the file type you want to define. Here's the results for an XCF, the default layered graphic file from GIMP:

$ head --bytes 8 example.xcf | hexdump --canonical
00000000  67 69 6d 70 20 78 63 66         |gimp xcf|
00000008

This output is from a Red Hat Enterprise Linux 8 system. On older systems, an older syntax may be necessary:

$ head --bytes 8 example.xcf | hexdump -C
00000000  67 69 6d 70 20 78 63 66         |gimp xcf|
00000008

The canonical output of hexdump displays the address in the far left column, and the decoded values on the far right. The center columns show the hexadecimal values of the first 8 bytes of the XCF file.

Most binary files in /etc/scalpel.conf look pretty similar to that output, except that these values are prefaced with the \x escape sequence to denote that the numbers are actually hexadecimal digits. For instance, a JPG file looks like this in the configuration file:

jpg     y     200000000     \xff\xd8\xff\xe0\x00\x10     \xff\xd9

Compare that value with a test hexdump of the first 6 bytes (because that's how many bytes scalpel.conf contains in its JPG definition) of any JPG file on your system:

$ head --bytes 6 example.jpg | hexdump --canonical
00000000  ff d8 ff e0 00 10                    |......|
00000006

Compare the footer with the last 2 bytes to match what the config file shows:

$ tail --bytes -2 example.jpg | hexdump --canonical
00000000  ff d9                        |..|
00000002

These values match up, so you can be confident that valid JPG files probably all start and end in a predictable sequence.

Note: The Ogg entry in the scalpel.conf file is misleading, as it lacks the \x escape sequence. If you need to recover an Ogg file, fix this, or replace its definition.

Getting to work

Now, obtain the same level of confidence for all file types you need to recover (such as XCF, in the previous example). To reiterate, this is your workflow for defining the binary file types common to the victim drive:

  1. Get the hexadecimal values of the first few bytes of a file type using the head --bytes n command.
  2. Get the last few bytes using the tail --bytes -n command.
  3. Repeat this process on several different files of the same type to confirm consistency of this pattern, adjusting the length of your header and footer patterns as required.
  4. Enter the header and footer values into your custom Scalpel config, using the \x notation to identify each byte as a hexadecimal character.

Follow this sequence for each important binary file type you need to recover.

If a file is plaintext, provide a common header and footer, such as #!/bin/sh for shell scripts, # (the space after the # is important) for markdown files with an h1 level title, <xml for XML files, and so on.
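For illustration, here are hedged sample entries combining the rules above (the maximum sizes are arbitrary assumptions, and \x20 encodes the space inside the XCF header, since the fields are whitespace-delimited):

xcf    y    200000000    gimp\x20xcf
xml    y    100000       <xml          </xml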

When you're ready to run Scalpel, create a directory where it can place your rescued files:

$ mkdir /run/media/seth/rescuer/scalped

Note: Do not create this directory on the same volume that contains the lost data.

If the victim drive is not yet mounted, mount it, and then run Scalpel:

$ scalpel -c my-scalpel.conf \
  -o /run/media/seth/rescuer/scalped \
  /run/media/seth/victim

You can also run Scalpel on a disk image:

$ scalpel -c my-scalpel.conf \
  -o ~/scalped ~/victim.img

When Scalpel is done, review the files in your designated rescue directory.

All in all, it's best to make backups so you can avoid doing file recovery at all. But, should the worst happen, try Scalpel and carve carefully.

[Jun 23, 2019] Utilizing multi core for tar+gzip-bzip compression-decompression

Highly recommended!
Notable quotes:
"... There is effectively no CPU time spent tarring, so it wouldn't help much. The tar format is just a copy of the input file with header blocks in between files. ..."
"... You can also use the tar flag "--use-compress-program=" to tell tar what compression program to use. ..."
Jun 23, 2019 | stackoverflow.com

user1118764 , Sep 7, 2012 at 6:58

I normally compress using tar zcvf and decompress using tar zxvf (using gzip due to habit).

I've recently gotten a quad core CPU with hyperthreading, so I have 8 logical cores, and I notice that many of the cores are unused during compression/decompression.

Is there any way I can utilize the unused cores to make it faster?

Warren Severin , Nov 13, 2017 at 4:37

The solution proposed by Xiong Chiamiov above works beautifully. I had just backed up my laptop with .tar.bz2 and it took 132 minutes using only one CPU thread. Then I compiled and installed tar from source: gnu.org/software/tar. I included the options mentioned in the configure step: ./configure --with-gzip=pigz --with-bzip2=lbzip2 --with-lzip=plzip. I ran the backup again and it took only 32 minutes. That's better than a 4X improvement! I watched the system monitor and it kept all 4 CPUs (8 threads) flatlined at 100% the whole time. THAT is the best solution. – Warren Severin Nov 13 '17 at 4:37

Mark Adler , Sep 7, 2012 at 14:48

You can use pigz instead of gzip, which does gzip compression on multiple cores. Instead of using the -z option, you would pipe it through pigz:
tar cf - paths-to-archive | pigz > archive.tar.gz

By default, pigz uses the number of available cores, or eight if it could not query that. You can ask for more with -p n, e.g. -p 32. pigz has the same options as gzip, so you can request better compression with -9. E.g.

tar cf - paths-to-archive | pigz -9 -p 32 > archive.tar.gz

user788171 , Feb 20, 2013 at 12:43

How do you use pigz to decompress in the same fashion? Or does it only work for compression?

Mark Adler , Feb 20, 2013 at 16:18

pigz does use multiple cores for decompression, but only with limited improvement over a single core. The deflate format does not lend itself to parallel decompression.

The decompression portion must be done serially. The other cores for pigz decompression are used for reading, writing, and calculating the CRC. When compressing on the other hand, pigz gets close to a factor of n improvement with n cores.
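For completeness, a sketch of the matching decompression pipeline (unpigz is equivalent to pigz -d; pigz is assumed to be installed):

pigz -dc archive.tar.gz | tar xf -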

Garrett , Mar 1, 2014 at 7:26

The hyphen here is stdout (see this page).

Mark Adler , Jul 2, 2014 at 21:29

Yes. 100% compatible in both directions.

Mark Adler , Apr 23, 2015 at 5:23

There is effectively no CPU time spent tarring, so it wouldn't help much. The tar format is just a copy of the input file with header blocks in between files.

Jen , Jun 14, 2013 at 14:34

You can also use the tar flag "--use-compress-program=" to tell tar what compression program to use.

For example use:

tar -c --use-compress-program=pigz -f tar.file dir_to_zip
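A hedged counterpart for extraction: when extracting, tar invokes the compression program with -d, which pigz accepts, so the same flag works in reverse:

tar -x --use-compress-program=pigz -f tar.file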

Valerio Schiavoni , Aug 5, 2014 at 22:38

Unfortunately by doing so the concurrent feature of pigz is lost. You can see for yourself by executing that command and monitoring the load on each of the cores. – Valerio Schiavoni Aug 5 '14 at 22:38

bovender , Sep 18, 2015 at 10:14

@ValerioSchiavoni: Not here, I get full load on all 4 cores (Ubuntu 15.04 'Vivid'). – bovender Sep 18 '15 at 10:14

Valerio Schiavoni , Sep 28, 2015 at 23:41

On compress or on decompress ? – Valerio Schiavoni Sep 28 '15 at 23:41

Offenso , Jan 11, 2017 at 17:26

I prefer tar -c dir_to_zip | pv | pigz > tar.file (pv helps me estimate progress; you can skip it). But still, it's easier to write and remember. – Offenso Jan 11 '17 at 17:26

Maxim Suslov , Dec 18, 2014 at 7:31

Common approach

There is option for tar program:

-I, --use-compress-program PROG
      filter through PROG (must accept -d)

You can use multithread version of archiver or compressor utility.

Most popular multithread archivers are pigz (instead of gzip) and pbzip2 (instead of bzip2). For instance:

$ tar -I pbzip2 -cf OUTPUT_FILE.tar.bz2 paths_to_archive
$ tar --use-compress-program=pigz -cf OUTPUT_FILE.tar.gz paths_to_archive

The archiver must accept -d. If your replacement utility doesn't have this parameter and/or you need to specify additional parameters, then use pipes (add parameters if necessary):

$ tar cf - paths_to_archive | pbzip2 > OUTPUT_FILE.tar.bz2
$ tar cf - paths_to_archive | pigz > OUTPUT_FILE.tar.gz

The input and output of the single-threaded and multi-threaded versions are compatible. You can compress using the multi-threaded version and decompress using the single-threaded version, and vice versa.

p7zip

For p7zip for compression you need a small shell script like the following:

#!/bin/sh
# Wrapper so tar can drive 7za: tar calls this helper with -d to decompress.
case $1 in
  -d) 7za -txz -si -so e ;;   # extract: read the xz stream from stdin, write to stdout
   *) 7za -txz -si -so a . ;; # add: compress stdin to an xz stream on stdout
esac 2>/dev/null

Save it as 7zhelper.sh and make it executable (chmod +x 7zhelper.sh). Here is an example of usage:

$ tar -I 7zhelper.sh -cf OUTPUT_FILE.tar.7z paths_to_archive
$ tar -I 7zhelper.sh -xf OUTPUT_FILE.tar.7z
xz

Regarding multithreaded XZ support: if you are running version 5.2.0 or above of XZ Utils, you can utilize multiple cores for compression by setting -T or --threads to an appropriate value via the environment variable XZ_DEFAULTS (e.g. XZ_DEFAULTS="-T 0").
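As a hedged illustration, assuming XZ Utils 5.2.0+ and GNU tar's -J (xz) option, a threaded compression run could look like this (-T 0 tells xz to use one thread per core):

XZ_DEFAULTS="-T 0" tar cJf OUTPUT_FILE.tar.xz paths_to_archive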

This is a fragment of the man page for the 5.1.0alpha version:

Multithreaded compression and decompression are not implemented yet, so this option has no effect for now.

However, this will not work for decompression of files that haven't also been compressed with threading enabled. From the man page for version 5.2.2:

Threaded decompression hasn't been implemented yet. It will only work on files that contain multiple blocks with size information in block headers. All files compressed in multi-threaded mode meet this condition, but files compressed in single-threaded mode don't even if --block-size=size is used.

Recompiling with replacement

If you build tar from sources, then you can recompile with parameters

--with-gzip=pigz
--with-bzip2=lbzip2
--with-lzip=plzip

After recompiling tar with these options you can check the output of tar's help:

$ tar --help | grep "lbzip2\|plzip\|pigz"
  -j, --bzip2                filter the archive through lbzip2
      --lzip                 filter the archive through plzip
  -z, --gzip, --gunzip, --ungzip   filter the archive through pigz

mpibzip2 , Apr 28, 2015 at 20:57

I just found pbzip2 and mpibzip2 . mpibzip2 looks very promising for clusters or if you have a laptop and a multicore desktop computer for instance. – user1985657 Apr 28 '15 at 20:57

oᴉɹǝɥɔ , Jun 10, 2015 at 17:39

Processing STDIN may in fact be slower. – oᴉɹǝɥɔ Jun 10 '15 at 17:39

selurvedu , May 26, 2016 at 22:13

Plus 1 for the xz option. It's the simplest, yet effective approach. – selurvedu May 26 '16 at 22:13

panticz.de , Sep 1, 2014 at 15:02

You can use the shortcut -I for tar's --use-compress-program switch, and invoke pbzip2 for bzip2 compression on multiple cores:
tar -I pbzip2 -cf OUTPUT_FILE.tar.bz2 DIRECTORY_TO_COMPRESS/

einpoklum , Feb 11, 2017 at 15:59

A nice TL;DR for @MaximSuslov's answer. – einpoklum Feb 11 '17 at 15:59

If you want to have more flexibility with filenames and compression options, you can use:
find /my/path/ -type f -name "*.sql" -o -name "*.log" -exec \
tar -P --transform='s@/my/path/@@g' -cf - {} + | \
pigz -9 -p 4 > myarchive.tar.gz
Step 1: find

find /my/path/ -type f -name "*.sql" -o -name "*.log" -exec

This command will look for the files you want to archive, in this case /my/path/*.sql and /my/path/*.log . Add as many -o -name "pattern" as you want.

-exec will execute the next command using the results of find : tar

Step 2: tar

tar -P --transform='s@/my/path/@@g' -cf - {} +

--transform is a simple string replacement parameter. It will strip the path of the files from the archive so the tarball's root becomes the current directory when extracting. Note that you can't use the -C option to change directory, as you'd lose the benefit of find: all files in the directory would be included.

-P tells tar to use absolute paths, so it doesn't trigger the warning "Removing leading `/' from member names". The leading '/' will be removed by --transform anyway.

-cf - tells tar to create the archive and write it to standard output; the actual file name comes later, from the shell redirection.

{} + substitutes every file that find located.

Step 3: pigz

pigz -9 -p 4

Use as many parameters as you want. In this case -9 is the compression level and -p 4 is the number of cores dedicated to compression. If you run this on a heavy loaded webserver, you probably don't want to use all available cores.

Step 4: archive name

> myarchive.tar.gz

Finally, the shell redirection writes the compressed stream to the archive file.

[Jun 23, 2019] Relationship between Fedora version and CentOS version

Notable quotes:
"... F19 -> RHEL7 ..."
Jun 23, 2019 | ask.fedoraproject.org

CentOS is based on RHEL and the relationships between Fedora and RHEL can be found in the Wikipedia article about RHEL

In short, there is no "rule"

That link should work:

http://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux#Relationship_to_free_and_community_distributions

jmt ( Oct 25 '13 )

Updated link: https://en.wikipedia.org/wiki/Red_Hat... – GregMartyn ( Mar 9 '16 )

[Jun 22, 2019] Using SSH X session forwarding by Seth Kenlon

Jun 22, 2019 | www.redhat.com

Normally, you would forward a remote computer's X11 graphical display to your local computer with the -X option, but the OpenSSH application places additional security limits on such connections as a precaution. As long as you're starting a shell on a trusted machine, you can use the -Y option to opt out of the excess security:

$ ssh -Y 93.184.216.34

Now you can launch an instance of any one of the remote computer's applications, but have it appear on your screen. For instance, try launching the Nautilus file manager:

remote$ nautilus &

The result is a Nautilus file manager window on your screen, displaying files on the remote computer. Your user can't see the window you're seeing, but at least you have graphical access to what they are using. Through this, you can debug, modify settings, or perform actions that are otherwise unavailable through a normal text-based SSH session.

Keep in mind, though, that a forwarded X11 session does not bring the whole remote session to you. You don't have access to the target computer's audio playback, for example, though you can make the remote system play audio through its speakers. You also can't access any custom application themes on the target computer, and so on (at least, not without some skillful redirection of environment variables).

However, if you only need to view files or use an application that you don't have access to locally, forwarding X can be invaluable.

[Jun 22, 2019] Using SSH and Tmux for screen sharing by Seth Kenlon

Jun 22, 2019 | www.redhat.com

Tmux is a screen multiplexer, meaning that it provides your terminal with virtual terminals, allowing you to switch from one virtual session to another. Modern terminal emulators feature a tabbed UI, making the use of Tmux seem redundant, but Tmux has a few peculiar features that still prove difficult to match without it.

First of all, you can launch Tmux on a remote machine, start a process running, detach from Tmux, and then log out. In a normal terminal, logging out would end the processes you started. Since those processes were started in Tmux, they persist even after you leave.

Secondly, Tmux can "mirror" its session on multiple screens. If two users log into the same Tmux session, then they both see the same output on their screens in real time.

Tmux is a lightweight, simple, and effective solution in cases where you're training someone remotely, debugging a command that isn't working for them, reviewing text, monitoring services or processes, or just avoiding the ten minutes it sometimes takes to read commands aloud over a phone clearly enough that your user is able to accurately type them.

To try this option out, you must have two computers. Assume one computer is owned by Alice, and the other by Bob. Alice remotely logs into Bob's PC and launches a Tmux session:

alice$ ssh bob.local
alice$ tmux

On his PC, Bob starts Tmux, attaching to the same session:

bob$ tmux attach

When Alice types, Bob sees what she is typing, and when Bob types, Alice sees what he's typing.

It's a simple but effective trick that enables interactive live sessions between computer users, but it is entirely text-based.
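One hedged refinement: if the machine already hosts other Tmux sessions, naming the shared session avoids attaching to the wrong one (the session name support is purely illustrative):

alice$ tmux new-session -s support
bob$ tmux attach -t support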

Collaboration

With these two applications, you have access to some powerful methods of supporting users. You can use these tools to manage systems remotely, as training tools, or as support tools, and in every case, it sure beats wandering around the office looking for somebody's desk. Get familiar with SSH and Tmux, and start using them today.

[Jun 17, 2019] Accessing remote desktops by Seth Kenlon

Jun 17, 2019 | www.redhat.com

Need to see what's happening on someone else's screen? Here's what you need to know about accessing remote desktops.

Posted June 13, 2019 | by Seth Kenlon (Red Hat)

Anyone who's worked a support desk has had the experience: sometimes, no matter how descriptive your instructions, and no matter how concise your commands, it's just easier and quicker for everyone involved to share screens. Likewise, anyone who's ever maintained a server located in a loud and chilly data center -- or across town, or the world -- knows that often a remote viewer is the easiest method for viewing distant screens.

Linux is famously capable of being managed without seeing a GUI, but that doesn't mean you have to manage your box that way. If you need to see the desktop of a computer that you're not physically in front of, there are plenty of tools for the job.

Barriers

Half the battle of successfully screen sharing is getting into the target computer. That's by design, of course. It should be difficult to get into a computer without explicit consent.

Usually, there are up to three barriers to accessing a remote machine:

  1. The network firewall
  2. The target computer's firewall
  3. Screen share settings

Specific instruction on how to get past each barrier is impossible. Every network and every computer is configured uniquely, but here are some possible solutions.

Barrier 1: The network firewall

A network firewall is the target computer's LAN entry point, often a part of the router (whether an appliance from an Internet Service Provider or a dedicated server in a rack). In order to pass through the firewall and access a computer remotely, your network firewall must be configured so that the appropriate port for the remote desktop protocol you're using is accessible.

The most common, and most universal, protocol for screen sharing is VNC.

If the network firewall is on a Linux server you can access, you can broadly allow VNC traffic to pass through using firewall-cmd , first by getting your active zone, and then by allowing VNC traffic in that zone:

$ sudo firewall-cmd --get-active-zones
example-zone
  interfaces: enp0s31f6
$ sudo firewall-cmd --add-service=vnc-server --zone=example-zone

If you're not comfortable allowing all VNC traffic into the network, add a rich rule to firewalld in order to let in VNC traffic from only your IP address. For example, using an example IP address of 93.184.216.34, a rule to allow VNC traffic is:

$ sudo firewall-cmd \
--add-rich-rule='rule family="ipv4" source address="93.184.216.34" service name=vnc-server accept'

Note that these commands change only the runtime configuration. To make the changes persist across reloads and reboots, repeat each command with the --permanent flag, and then reload the rules to activate the permanent configuration:

$ sudo firewall-cmd --reload

If network reconfiguration isn't possible, see the section "Screen sharing through a browser."

Barrier 2: The computer's firewall

Most personal computers have built-in firewalls. Users who are mindful of security may actively manage their firewall. Others, though, blissfully trust their default settings. This means that when you're trying to access their computer for screen sharing, their firewall may block incoming remote connection requests without the user even realizing it. Your request to view their screen may successfully pass through the network firewall only to be silently dropped by the target computer's firewall.

Changing zones in Linux.

To remedy this problem, have the user either lower their firewall or, on Fedora and RHEL, place their computer into the trusted zone. Do this only for the duration of the screen sharing session. Alternatively, have them add either one of the rules you added to the network firewall (if your user is on Linux).
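A minimal sketch of the trusted-zone change, assuming the interface name enp0s31f6 from the earlier example (substitute the user's actual interface):

$ sudo firewall-cmd --zone=trusted --change-interface=enp0s31f6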

A reboot is a simple way to ensure the new firewall setting is applied, so that's probably the easiest next step for your user. Power users can instead reload the firewall rules manually:

$ sudo firewall-cmd --reload

If you have a user override their computer's default firewall, remember to close the session by instructing them to re-enable the default firewall zone. Don't leave the door open behind you!

Barrier 3: The computer's screen share settings

To share another computer's screen, the target computer must be running remote desktop software (technically, a remote desktop server , since this software listens to incoming requests). Otherwise, you have nothing to connect to.

Some desktops, like GNOME, provide screen sharing options, which means you don't have to launch a separate screen sharing application. To activate screen sharing in GNOME, open Settings and select Sharing from the left column. In the Sharing panel, click on Screen Sharing and toggle it on.

Remote desktop viewers

There are a number of remote desktop viewers out there. Here are some of the best options.

GNOME Remote Desktop Viewer

The GNOME Remote Desktop Viewer application is codenamed Vinagre . It's a simple application that supports multiple protocols, including VNC, Spice, RDP, and SSH. Vinagre's interface is intuitive, and yet this application offers many options, including whether you want to control the target computer or only view it.

If Vinagre's not already installed, use your distribution's package manager to add it. On Red Hat Enterprise Linux and Fedora , use:

$ sudo dnf install vinagre

In order to open Vinagre, go to the GNOME desktop's Activities menu and launch Remote Desktop Viewer . Once it opens, click the Connect button in the top left corner. In the Connect window that appears, select the VNC protocol. In the Host field, enter the IP address of the computer you're connecting to. If you want to use the computer's hostname instead, you must have a valid DNS service in place, or Avahi , or entries in /etc/hosts . Do not prepend your entry with a username.

Select any additional options you prefer, and then click Connect .

If you use the GNOME Remote Desktop Viewer as a full-screen application, move your mouse to the screen's top center to reveal additional controls. Most importantly, the exit fullscreen button.

If you're connecting to a Linux virtual machine, you can use the Spice protocol instead. Spice is robust, lightweight, and transmits both audio and video, usually with no noticeable lag.

TigerVNC and TightVNC

Sometimes you're not on a Linux machine, so the GNOME Remote Desktop Viewer isn't available. As usual, open source has an answer. In fact, open source has several answers, but two popular ones are TigerVNC and TightVNC , which are both cross-platform VNC viewers. TigerVNC offers separate downloads for each platform, while TightVNC has a universal Java client.

Both of these clients are simple, with additional options included in case you need them. The defaults are generally acceptable. In order for these particular clients to connect, turn off the encryption setting for GNOME's embedded VNC server (codenamed Vino) as follows:

$ gsettings set org.gnome.Vino require-encryption false

This modification must be done on the target computer before you attempt to connect, either in person or over SSH.

Red Hat Enterprise Linux 7 remoted to RHEL 8 with TightVNC

Use the option for an SSH tunnel to ensure that your VNC connection is fully encrypted.
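A minimal sketch of such a tunnel, assuming VNC display :1 on its conventional port 5901 (adjust the user, host, and port to your setup):

$ ssh -L 5901:localhost:5901 user@target-host

Then point the VNC viewer at localhost:5901, and the traffic travels inside the encrypted SSH connection.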

Screen sharing through a browser

If network re-configuration is out of the question, sharing over an online meeting or collaboration platform is yet another option. The best open source platform for this is Nextcloud , which offers screen sharing over plain old HTTPS. With no firewall exceptions and no additional encryption required, Nextcloud's Talk app provides video and audio chat, plus whole-screen sharing using WebRTC technology.

This option requires a Nextcloud installation, but given that it's the best open source groupware package out there, it's probably worth looking at if you're not already running an instance. You can install Nextcloud yourself, or you can purchase hosting from Nextcloud.

To install the Talk app, go to Nextcloud's app store. Choose the Social & Communication category and then select the Talk plugin.

Next, add a user for the target computer's owner. Have them log into Nextcloud, and then click on the Talk app in the top left of the browser window.

When you start a new chat with your user, they'll be prompted by their browser to allow notifications from Nextcloud. Whether they accept or decline, Nextcloud's interface alerts them of the incoming call in the notification area at the top right corner.

Once you're in the call with your remote user, have them click on the Share screen button at the bottom of their chat window.

Remote screens

Screen sharing can be an easy method of support as long as you plan ahead so your network and clients support it from trusted sources. Integrate VNC into your support plan early, and use screen sharing to help your users get better at what they do.

Seth Kenlon is a free culture advocate and UNIX geek.


[May 24, 2019] Deal with longstanding issues like government favoritism toward local companies

May 24, 2019 | theregister.co.uk

How is it that this can be a point of contention? Name me one country in this world that doesn't favor local companies.

The company representatives who are complaining about local favoritism would be howling like wolves if Huawei were given favor in the US over any one of them.

I'm not saying that there are no reasons to be unhappy about business with China, but that is not one of them.


A.P. Veening , 1 day

Re: "deal with longstanding issues like government favoritism toward local companies"

Name me one country in this world that doesn't favor local companies.

I'll give you two: Liechtenstein and Vatican City, though admittedly neither has a lot of local companies.

STOP_FORTH , 1 day
Re: "deal with longstanding issues like government favoritism toward local companies"

Doesn't Liechtenstein make most of the dentures in the EU? Try taking a bite out of that market.

Kabukiwookie , 1 day
Re: "deal with longstanding issues like government favoritism toward local companies"

How can you leave Andorra out of that list?

A.P. Veening , 14 hrs
Re: "deal with longstanding issues like government favoritism toward local companies"

While you are at it, how can you leave Monaco and San Marino out of that list?

[May 24, 2019] Huawei equipment can't be trusted? As distinct from Cisco which we already have backdoored :]

May 24, 2019 | theregister.co.uk

" The Trump administration, backed by US cyber defense experts, believes that Huawei equipment can't be trusted " .. as distinct from Cisco which we already have backdoored :]

Sir Runcible Spoon
Re: Huawei equipment can't be trusted?

Didn't someone once say "I don't trust anyone who can't be bribed"?

Not sure why that popped into my head.

[May 24, 2019] The USA isn't annoyed at Huawei spying, they are annoyed that Huawei isn't spying for them

May 24, 2019 | theregister.co.uk

Pick your poison

The USA isn't annoyed at Huawei spying, they are annoyed that Huawei isn't spying for them . If you don't use Huawei who would you use instead? Cisco? Yes, just open up and let the NSA ream your ports. Oooo, filthy.

If you don't know the chip design, can't verify the construction, don't know the code and can't verify the deployment to the hardware; you are already owned.

The only question is: which state actor? China, USA, Israel, UK...? – Anonymous Coward

[May 24, 2019] This is going to get ugly

May 24, 2019 | theregister.co.uk

..and we're all going to be poorer for it. Americans, Chinese and bystanders.

I was recently watching the WW1 channel on YouTube (awesome thing, go Indy and team!) - the delusion, lack of situational understanding, and short-sightedness underscoring the actions of the main actors that started the Great War can certainly be paralleled to the situation here.

The very idea that you can manage to send China 40 years back in time with no harm on your side is bonkers.

[May 24, 2019] Networks are usually highly segmented and protected via firewalls and proxies, so access to routers from the Internet is impossible

You can put a backdoor in the router, but the problem is that you may never be able to access it. Also, for important deployments, countries inspect the source code of the firmware. The USA is playing dirty games here, no matter whether the Chinese are right or wrong.
May 24, 2019 | theregister.co.uk
Re: Technological silos

They're not necessarily silos. If you design a network as a flat space with all interactions peer to peer, then you have set yourself the problem of ensuring all nodes on that network are secure and enforcing traffic rules equally on each node. This is impractical -- it's not that it couldn't be done, but it's a huge waste of resources. A more practical strategy is to layer the network, providing choke points where traffic can be monitored and managed. We currently do this with firewalls and demilitarized zones, the goal being normally to prevent unwanted traffic coming in (although it can be used to monitor and control traffic going out). This has nothing to do with incompatible standards.

I'm not sure about the rest of the FUD in this article. Yes, it's all very complicated. But just as we have to know how to layer our networks, we also know how to manage our information. For example, anyone who has a smartphone on which they co-mingle sensitive data and public access, relying on the integrity of its software to keep everything separate, is just plain asking for trouble. Quite apart from the risk of data leakage between applications, it's a portable device that can get lost, stolen or confiscated (and duplicated.....). Use common sense. Manage your data.

[May 24, 2019] Internet and phones aren't the issue. Its the chips

Notable quotes:
"... The real issue is the semiconductors - the actual silicon. ..."
"... China has some fabs now, but far too few to handle even just their internal demand - and tech export restrictions have long kept their leading edge capabilities significantly behind the cutting edge. ..."
"... On the flip side: Foxconn, Huawei et al are so ubiquitous in the electronics global supply chain that US retail tech companies - specifically Apple - are going to be severely affected, or at least extremely vulnerable to being pushed forward as a hostage. ..."
May 24, 2019 | theregister.co.uk

Duncan Macdonald

Internet, phones, Android aren't the issue - except if the US is able to push China out of GSM/ITU.

The real issue is the semiconductors - the actual silicon.

The majority of raw silicon wafers as well as the finished chips are created in the US or its most aligned allies: Japan, Taiwan. The dominant manufacturers of semiconductor equipment are also largely US with some Japanese and EU suppliers.

If fabs can't sell to China, regardless of who actually paid to manufacture the chips, because Applied Materials has been banned from any business related to China, this is pretty severe for 5-10 years until the Chinese can ramp up their capacity.

China has some fabs now, but far too few to handle even just their internal demand - and tech export restrictions have long kept their leading edge capabilities significantly behind the cutting edge.

On the flip side: Foxconn, Huawei et al are so ubiquitous in the electronics global supply chain that US retail tech companies - specifically Apple - are going to be severely affected, or at least extremely vulnerable to being pushed forward as a hostage.

Interesting times...

[May 24, 2019] We shared and the Americans shafted us. And now *they* are bleating about people not respecting Intellectual Property Rights?

Notable quotes:
"... The British aerospace sector (not to be confused with the company of a similar name but more Capital Letters) developed, amongst other things, the all-flying tailplane, successful jet-powered VTOL flight, noise-and drag-reducing rotor blades and the no-tailrotor systems and were promised all sorts of crunchy goodness if we shared it with our wonderful friends across the Atlantic. ..."
"... We shared and the Americans shafted us. Again. And again. And now *they* are bleating about people not respecting Intellectual Property Rights? ..."
May 24, 2019 | theregister.co.uk

Anonymous Coward

Sic semper tyrannis

"Without saying so publicly, they're glad there's finally some effort to deal with longstanding issues like government favoritism toward local companies, intellectual property theft, and forced technology transfers."

The British aerospace sector (not to be confused with the company of a similar name but more Capital Letters) developed, amongst other things, the all-flying tailplane, successful jet-powered VTOL flight, noise-and drag-reducing rotor blades and the no-tailrotor systems and were promised all sorts of crunchy goodness if we shared it with our wonderful friends across the Atlantic.

We shared and the Americans shafted us. Again. And again. And now *they* are bleating about people not respecting Intellectual Property Rights?

And as for moaning about backdoors in Chinese kit, who do Cisco et al report to again? Oh yeah, those nice Three Letter Acronym people loitering in Washington and Langley...

[May 24, 2019] Oh dear. Secret Huawei enterprise router snoop 'backdoor' was Telnet service, sighs Vodafone The Register

May 24, 2019 | theregister.co.uk

A claimed deliberate spying "backdoor" in Huawei routers used in the core of Vodafone Italy's 3G network was, in fact, a Telnet -based remote debug interface.

The Bloomberg financial newswire reported this morning that Vodafone had found "vulnerabilities going back years with equipment supplied by Shenzhen-based Huawei for the carrier's Italian business".

"Europe's biggest phone company identified hidden backdoors in the software that could have given Huawei unauthorized access to the carrier's fixed-line network in Italy," wailed the newswire.

Unfortunately for Bloomberg, Vodafone had a far less alarming explanation for the deliberate secret "backdoor" – a run-of-the-mill LAN-facing diagnostic service, albeit a hardcoded undocumented one.

"The 'backdoor' that Bloomberg refers to is Telnet, which is a protocol that is commonly used by many vendors in the industry for performing diagnostic functions. It would not have been accessible from the internet," said the telco in a statement to The Register , adding: "Bloomberg is incorrect in saying that this 'could have given Huawei unauthorized access to the carrier's fixed-line network in Italy'.

"This was nothing more than a failure to remove a diagnostic function after development."

It added the Telnet service was found during an audit, which means it can't have been that secret or hidden: "The issues were identified by independent security testing, initiated by Vodafone as part of our routine security measures, and fixed at the time by Huawei."

Huawei itself told us: "We were made aware of historical vulnerabilities in 2011 and 2012 and they were addressed at the time. Software vulnerabilities are an industry-wide challenge. Like every ICT vendor we have a well-established public notification and patching process, and when a vulnerability is identified we work closely with our partners to take the appropriate corrective action."

Prior to removing the Telnet server, Huawei was said to have insisted in 2011 on using the diagnostic service to configure and test the network devices. Bloomberg reported, citing a leaked internal memo from then-Vodafone CISO Bryan Littlefair, that the Chinese manufacturer thus refused to completely disable the service at first:

Vodafone said Huawei then refused to fully remove the backdoor, citing a manufacturing requirement. Huawei said it needed the Telnet service to configure device information and conduct tests including on Wi-Fi, and offered to disable the service after taking those steps, according to the document.

El Reg understands that while Huawei indeed resisted removing the Telnet functionality from the affected items – broadband network gateways in the core of Vodafone Italy's 3G network – this was done to the satisfaction of all involved parties by the end of 2011, with another network-level product de-Telnet-ised in 2012.

Broadband network gateways in 3G UMTS mobile networks are described in technical detail in this Cisco (sorry) PDF . The devices are also known as Broadband Remote Access Servers and sit at the edge of a network operator's core.

The issue is separate from Huawei's failure to fully patch consumer-grade routers , as exclusively revealed by The Register in March.

Plenty of other things (cough, cough, Cisco) to panic about

Characterising this sort of Telnet service as a covert backdoor for government spies is a bit like describing your catflap as an access portal that allows multiple species to pass unhindered through a critical home security layer. In other words, massively over-egging the pudding.

Many Reg readers won't need it explaining, but Telnet is a routinely used method of connecting to remote devices for management purposes. When deployed with appropriate security and authentication controls in place, it can be very useful. In Huawei's case, the Telnet service wasn't facing the public internet, and was used to set up and test devices.

Look, it's not great that this was hardcoded into the equipment and undocumented – it was, after all, declared a security risk – and had to be removed after some pressure. However, it's not quite the hidden deliberate espionage backdoor for Beijing that some fear.

Twitter-enabled infoseccer Kevin Beaumont also shared his thoughts on the story, highlighting the number of vulns in equipment from Huawei competitor Cisco, a US firm:


For example, a pretty bad remote access hole was discovered in some Cisco gear , which the mainstream press didn't seem too fussed about. Ditto hardcoded root logins in Cisco video surveillance boxes. Lots of things unfortunately ship with insecure remote access that ought to be removed; it's not evidence of a secret backdoor for state spies.

Given Bloomberg's previous history of trying to break tech news, when it claimed that tiny spy chips were being secretly planted on Supermicro server motherboards – something that left the rest of the tech world scratching its collective head once the initial dust had settled – it may be best to take this latest revelation with a pinch of salt. Telnet wasn't even mentioned in the latest report from the UK's Huawei Cyber Security Evaluation Centre, which savaged Huawei's pisspoor software development practices.

While there is ample evidence in the public domain that Huawei is doing badly on the basics of secure software development, so far there has been little that tends to show it deliberately implements hidden espionage backdoors. Rhetoric from the US alleging Huawei is a threat to national security seems to be having the opposite effect around the world.

With Bloomberg, an American company, characterising Vodafone's use of Huawei equipment as "defiance" showing "that countries across Europe are willing to risk rankling the US in the name of 5G preparedness," it appears that the US-Euro-China divide on 5G technology suppliers isn't closing up any time soon. ®

Bootnote

This isn't shaping up to be a good week for Bloomberg. Only yesterday High Court judge Mr Justice Nicklin ordered the company to pay up £25k for the way it reported a live and ongoing criminal investigation.

[May 17, 2019] Shareholder Capitalism, the Military, and the Beginning of the End for Boeing

Highly recommended!
Notable quotes:
"... Like many of its Wall Street counterparts, Boeing also used complexity as a mechanism to obfuscate and conceal activity that is incompetent, nefarious and/or harmful to not only the corporation itself but to society as a whole (instead of complexity being a benign byproduct of a move up the technology curve). ..."
"... The economists who built on Friedman's work, along with increasingly aggressive institutional investors, devised solutions to ensure the primacy of enhancing shareholder value, via the advocacy of hostile takeovers, the promotion of massive stock buybacks or repurchases (which increased the stock value), higher dividend payouts and, most importantly, the introduction of stock-based pay for top executives in order to align their interests to those of the shareholders. These ideas were influenced by the idea that corporate efficiency and profitability were impinged upon by archaic regulation and unionization, which, according to the theory, precluded the ability to compete globally. ..."
"... "Return on Net Assets" (RONA) forms a key part of the shareholder capitalism doctrine. ..."
"... If the choice is between putting a million bucks into new factory machinery or returning it to shareholders, say, via dividend payments, the latter is the optimal way to go because in theory it means higher net returns accruing to the shareholders (as the "owners" of the company), implicitly assuming that they can make better use of that money than the company itself can. ..."
"... It is an absurd conceit to believe that a dilettante portfolio manager is in a better position than an aviation engineer to gauge whether corporate investment in fixed assets will generate productivity gains well north of the expected return for the cash distributed to the shareholders. But such is the perverse fantasy embedded in the myth of shareholder capitalism ..."
"... When real engineering clashes with financial engineering, the damage takes the form of a geographically disparate and demoralized workforce: The factory-floor denominator goes down. Workers' wages are depressed, testing and quality assurance are curtailed. ..."
May 17, 2019 | www.nakedcapitalism.com

The fall of the Berlin Wall and the corresponding end of the Soviet Empire gave the fullest impetus imaginable to the forces of globalized capitalism, and correspondingly unfettered access to the world's cheapest labor. What was not to like about that? It afforded multinational corporations vastly expanded opportunities to fatten their profit margins and increase the bottom line with seemingly no risk posed to their business model.

Or so it appeared. In 2000, aerospace engineer L.J. Hart-Smith's remarkable paper, sardonically titled "Out-Sourced Profits – The Cornerstone of Successful Subcontracting," laid out the case against several business practices of Hart-Smith's previous employer, McDonnell Douglas, which had incautiously ridden the wave of outsourcing when it merged with the author's new employer, Boeing. Hart-Smith's intention in telling his story was a cautionary one for the newly combined Boeing, lest it follow its then recent acquisition down the same disastrous path.

Of the manifold points and issues identified by Hart-Smith, there is one that stands out as the most compelling in terms of understanding the current crisis enveloping Boeing: The embrace of the metric "Return on Net Assets" (RONA). When combined with the relentless pursuit of cost reduction (via offshoring), RONA taken to the extreme can undermine overall safety standards.

Related to this problem is the intentional and unnecessary use of complexity as an instrument of propaganda. Like many of its Wall Street counterparts, Boeing also used complexity as a mechanism to obfuscate and conceal activity that is incompetent, nefarious and/or harmful to not only the corporation itself but to society as a whole (instead of complexity being a benign byproduct of a move up the technology curve).

All of these pernicious concepts are branches of the same poisoned tree: " shareholder capitalism ":

[A] notion best epitomized by Milton Friedman that the only social responsibility of a corporation is to increase its profits, laying the groundwork for the idea that shareholders, being the owners and the main risk-bearing participants, ought therefore to receive the biggest rewards. Profits therefore should be generated first and foremost with a view toward maximizing the interests of shareholders, not the executives or managers who (according to the theory) were spending too much of their time, and the shareholders' money, worrying about employees, customers, and the community at large. The economists who built on Friedman's work, along with increasingly aggressive institutional investors, devised solutions to ensure the primacy of enhancing shareholder value, via the advocacy of hostile takeovers, the promotion of massive stock buybacks or repurchases (which increased the stock value), higher dividend payouts and, most importantly, the introduction of stock-based pay for top executives in order to align their interests to those of the shareholders. These ideas were influenced by the idea that corporate efficiency and profitability were impinged upon by archaic regulation and unionization, which, according to the theory, precluded the ability to compete globally.

"Return on Net Assets" (RONA) forms a key part of the shareholder capitalism doctrine. In essence, it means maximizing the returns of those dollars deployed in the operation of the business. Applied to a corporation, it comes down to this: If the choice is between putting a million bucks into new factory machinery or returning it to shareholders, say, via dividend payments, the latter is the optimal way to go because in theory it means higher net returns accruing to the shareholders (as the "owners" of the company), implicitly assuming that they can make better use of that money than the company itself can.

It is an absurd conceit to believe that a dilettante portfolio manager is in a better position than an aviation engineer to gauge whether corporate investment in fixed assets will generate productivity gains well north of the expected return for the cash distributed to the shareholders. But such is the perverse fantasy embedded in the myth of shareholder capitalism.

Engineering reality, however, is far more complicated than what is outlined in university MBA textbooks. For corporations like McDonnell Douglas, for example, RONA was used not as a way to prioritize new investment in the corporation but rather to justify disinvestment in the corporation. This disinvestment ultimately degraded the company's underlying profitability and the quality of its planes (which is one of the reasons the Pentagon helped to broker the merger with Boeing; in another perverse echo of the 2008 financial disaster, it was a politically engineered bailout).

RONA in Practice

When real engineering clashes with financial engineering, the damage takes the form of a geographically disparate and demoralized workforce: The factory-floor denominator goes down. Workers' wages are depressed, testing and quality assurance are curtailed. Productivity is diminished, even as labor-saving technologies are introduced. Precision machinery is sold off and replaced by inferior, but cheaper, machines. Engineering quality deteriorates. And the upshot is that a reliable plane like Boeing's 737, which had been a tried and true money-spinner with an impressive safety record since 1967, becomes a high-tech death trap.

The drive toward efficiency is translated into a drive to do more with less. Get more out of workers while paying them less. Make more parts with fewer machines. Outsourcing is viewed as a way to release capital by transferring investment from skilled domestic human capital to offshore entities not imbued with the same talents, corporate culture and dedication to quality. The benefits to the bottom line are temporary; the long-term pathologies become embedded as the company's market share begins to shrink, as the airlines search for less shoddy alternatives.

You must do one more thing if you are a Boeing director: you must erect barriers to bad news, because there is nothing that bursts a magic bubble faster than reality, particularly if it's bad reality.

The illusion that Boeing sought to perpetuate was that it continued to produce the same thing it had produced for decades: namely, a safe, reliable, quality airplane. But it was doing so with a production apparatus that was stripped, for cost reasons, of many of the means necessary to make good aircraft. So while the wine still came in a bottle signifying Premier Cru quality, and still carried the same price, someone had poured out the contents and replaced them with cheap plonk.

And that has become remarkably easy to do in aviation. Because Boeing is no longer subject to proper independent regulatory scrutiny. This is what happens when you're allowed to " self-certify" your own airplane , as the Washington Post described: "One Boeing engineer would conduct a test of a particular system on the Max 8, while another Boeing engineer would act as the FAA's representative, signing on behalf of the U.S. government that the technology complied with federal safety regulations."

This is a recipe for disaster. Boeing relentlessly cut costs, it outsourced across the globe to workforces that knew nothing about aviation or aviation's safety culture. It sent things everywhere on one criteria and one criteria only: lower the denominator. Make it the same, but cheaper. And then self-certify the plane, so that nobody, including the FAA, was ever the wiser.

Boeing also greased the wheels in Washington to ensure the continuation of this convenient state of regulatory affairs for the company. According to OpenSecrets.org , Boeing and its affiliates spent $15,120,000 in lobbying expenses in 2018, after spending, $16,740,000 in 2017 (along with a further $4,551,078 in 2018 political contributions, which placed the company 82nd out of a total of 19,087 contributors). Looking back at these figures over the past four elections (congressional and presidential) since 2012, these numbers represent fairly typical spending sums for the company.

But clever financial engineering, extensive political lobbying and self-certification can't perpetually hold back the effects of shoddy engineering. One of the sad byproducts of the FAA's acquiescence to "self-certification" is how many things fall through the cracks so easily.

[May 03, 2019] How can we regularly update a disconnected system (A system without internet connection)

Aug 10, 2017 | access.redhat.com
Resolution

Depending on the environment and circumstances, there are different approaches for updating an offline system.

Approach 1: Red Hat Satellite

For this approach a Red Hat Satellite server is deployed. The Satellite receives the latest packages from Red Hat repositories. Client systems connect to the Satellite and install updates. More details on Red Hat Satellite are available here: https://www.redhat.com/red_hat_network/ . Please also refer to the document Update a Disconnected Red Hat Network Satellite .


Approach 2: Download the updates on a connected system

If a second, similar system exists that is registered and can reach the Red Hat repositories, then that second system can download the applicable errata packages. After downloading, the errata packages can be applied to the offline systems. More documentation: How to update offline RHEL server without network connection to Red Hat Network/Proxy/Satellite? .


Approach 3: Update with new minor release media

DVD media of new RHEL minor releases (e.g. RHEL 6.1) are available from RHN. These media images can be used directly on the system for updating, or served (e.g. via HTTP) to other systems as a yum repository for updating. For more details refer to:


Approach 4: Manually downloading and installing or updating packages

It is possible to download and install errata packages. For details refer to this document: How do I download security RPMs using the Red Hat Errata Website? .


Approach 5: Create a Local Repository

This approach is applicable to RHEL 5/6/7. It requires a registered server that is connected to Red Hat repositories and is the same major version. The connected system can use reposync to download all the RPMs from a specified repository into a local directory. Then, using HTTP, NFS, FTP, or a local directory (file://), this can be configured as a repository which yum can use to install packages and resolve dependencies.

How to create a local mirror of the latest update for Red Hat Enterprise Linux 5, 6, 7 without using Satellite server?
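A minimal sketch of this approach on the connected system, assuming RHEL 7 and the repository ID rhel-7-server-rpms (adjust the ID and paths to your environment):

# reposync --gpgcheck -l --repoid=rhel-7-server-rpms --download_path=/var/repo
# createrepo /var/repo/rhel-7-server-rpms

A matching yum repository definition for the offline system, assuming the directory is copied or exported there, might look like:

[local-rhel-7-server-rpms]
name=RHEL 7 local mirror
baseurl=file:///var/repo/rhel-7-server-rpms
enabled=1
gpgcheck=1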

Checking the security erratas:

To check the security erratas on a system that is not connected to the internet, download a copy of the updateinfo.xml.gz file from an identical registered system. The detailed steps can be found in the How to update security erratas on a system which is not connected to the internet? knowledgebase article.

Root Cause

Without a connection to RHN/RHSM, the updates have to be transferred over other paths. These paths can be hard to implement and automate.

This solution is part of Red Hat's fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.


1 February 2012 3:56 PM Randy Zagar

"Approach 3: update with new minor release media" will not work with RHEL 6. Many packages (over 1500) in the "optional" channels simply are not present on any iso images. There is an open case , but the issue will not be addressed before RHEL 6 Update 4 (and possibly never).

22 May 2014 9:27 AM Umesh Susvirkar

I agree with "Randy Zager" "optional" packages should be available offline along with other channels which are not available in ISO's.

12 July 2016 7:20 PM Adrian Kennedy

Can Approach 5 "additional server, reposync fetching" be applied with RHEL 7 servers?

4 August 2016 8:55 AM Dejan Cugalj

Yes. You need to:
- subscribe the server to Red Hat
- synchronize the repositories with the reposync utility
- allow up to 40GB of storage per major release of RHEL
16 August 2016 10:54 PM Michael White

However, won't I need to stand up another RHEL 7 server in additional to the RHEL 6 server?

8 August 2017 7:01 PM John Castranio

Correct. When using an external server to reposync updates, you will need one system for each Major Version of RHEL that you want to sync packages from.

RHEL 7 does not have access to RHEL 6 repositories just as RHEL 6 can't access RHEL 7 repositories

15 January 2019 10:14 PM BRIAN KEYES

what I am looking for is the instructions on the reposync install AND how to update off line clients

do I have to manually install apache?

16 January 2019 10:50 PM Randy Zagar

You will need: a RH Satellite or RH Proxy server, an internal yum server, and a RHN client for each OS variant (and architecture) you intend to support "offline". E.g. supporting 6Server, 6Client, and 6Workstation for i686 and x86_64 would normally require 6 RHN clients, but only three RHN clients would be necessary for RHEL7, as there's no support for i686 architecture

Yum clients can (according to the docs) use nfs resource paths in the baseurl statement, so apache is not strictly necessary on your yum server, but most people do it that way...

Each RHN client will need: local storage to store packages downloaded via reposync (e.g. "reposync -d -g -l -n -t -p /my/storage --repoid=rhel-i686-workstation-optional-6"). You'll need to run "createrepo" on each repository that gets updated, and you'll need to create an rsync service that provides access to each clients' /my/storage volume

Your internal yum server will need a cron script to run rsync against your RHN clients so you can collect all these software channels in one spot.

You'll also need to create custom yum repo files for your client systems (e.g. redhat-6Workstation.repo) that will point to the correct repositories on your yum server.

I'd recommend you NOT run these cron scripts during normal business hours... your sys-admins will want a stable copy so they can clone things for other offline networks.

If you're clever, you can convince one RHN client system to impersonate the different OS variants, reducing the number of systems you need to deploy.

You'll also most likely want to run "hardlink" on your yum server pretty regularly as there's lots of redundant packages across each OS variant.

[May 03, 2019] How to enable/disable repository using Subscription Manager or Yum-Utils by Roshan V Sharma

Sep 29, 2017 | developers.redhat.com

This blog post answers the following questions:

  1. How to enable a repository using the Red Hat Subscription Manager/yum?
  2. How to access a repository using the Red Hat Subscription Manager/yum?
  3. How to disable a repository using the Red Hat Subscription Manager/yum?
  4. How to subscribe to a child channel using the Red Hat Subscription Manager/yum?

To enable/disable a repository using Subscription-Manager or Yum-Utils you'll need:

  1. Red Hat Enterprise Linux 6 or higher.
  2. Red Hat Subscription Management (RHSM).
  3. Red Hat Subscription Manager.
  4. A system registered to RHN Classic (see how to register a system with RHN Classic).

If you are using the latest version of Red Hat Enterprise Linux, this will be very useful (I suggest updating your system at regular intervals).


Solution

First, we need to know what a repository is.

A repository is content, based on the products the system is subscribed to and the contents of a delivery network, as defined in the rhsm.conf file on your system.

To enable a repository, you have to be the root user.

First check the repository list, then enable or disable repositories by name, changing ReposName to the repository name you want, as in the sketch below.
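The command listings did not survive this copy of the post; the following is a reconstruction using the standard subscription-manager syntax of the RHEL 6/7 era:

# list all repositories available to the system (run as root)
subscription-manager repos --list

# enable a repository (replace ReposName with the repository id)
subscription-manager repos --enable=ReposName

# disable a repository
subscription-manager repos --disable=ReposName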

In some universities and enterprises the subscription manager is blocked; in that case yum can be used to enable or disable any repository.

To use yum to enable/disable repos you need the config-manager plugin, which is provided by the yum-utils package.

Before enabling a repository, make sure all repositories are in a stable state. Then check the enabled repositories, and enable or disable them by name, changing ReposName to your repository name, as in the sketch below.
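Again, the original command blocks were stripped; this is a minimal sketch using the standard yum-utils commands:

# install yum-config-manager (part of yum-utils)
yum install yum-utils

# check which repositories are enabled or disabled
yum repolist all

# enable a repository (replace ReposName with the repository id)
yum-config-manager --enable ReposName

# disable a repository
yum-config-manager --disable ReposName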

Some package installations use the --enablerepo option to enable a repository for that transaction only, as in the sketch below.
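The package name here is purely illustrative:

# enable a repository only for the duration of one transaction
yum install --enablerepo=ReposName some-package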

Disabling Subscription Manager's repository management

When a system is registered using Subscription Manager, a file named redhat.repo is created; it is a special yum repository. Maintaining redhat.repo in some environments may not be desirable, and it can create noise in content management operations if that repo is not the one actually used for subscriptions. You can disable it by setting the manage_repos option in rhsm.conf to zero (0).
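A minimal sketch of that setting, assuming the stock file location:

# /etc/rhsm/rhsm.conf
[rhsm]
# stop subscription-manager from generating and managing redhat.repo
manage_repos = 0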

If you have any doubts or questions, please comment below.

Take advantage of your Red Hat Developers membership and download RHEL today at no cost.

[May 02, 2019] How can we regularly update a disconnected system (A system without internet connection) - Red Hat Customer Portal

May 02, 2019 | access.redhat.com

How can we regularly update a disconnected system (a system without an internet connection)? Solution Verified - Updated August 10, 2017 at 12:12 PM

Resolution

Depending on the environment and circumstances, there are different approaches for updating an offline system.

Approach 1: Red Hat Satellite

For this approach a Red Hat Satellite server is deployed. The Satellite receives the latest packages from Red Hat repositories. Client systems connect to the Satellite and install updates. More details on Red Hat Satellite are available here: https://www.redhat.com/red_hat_network/. Please also refer to the document Update a Disconnected Red Hat Network Satellite.


Approach 2: Download the updates on a connected system

If a second, similar system exists, that second system can download the applicable errata packages. After downloading, the errata packages can be applied to other systems. More documentation: How to update offline RHEL server without network connection to Red Hat Network/Proxy/Satellite?


Approach 3: Update with new minor release media

DVD media of new RHEL minor releases (e.g. RHEL 6.1) are available from RHN. These media images can be used for updating directly on the system, or offered (e.g. via http) to other systems as a yum repository for updating. For more details refer to:


Approach 4: Manually downloading and installing or updating packages

It is possible to download and install errata packages. For details refer to this document: How do I download security RPMs using the Red Hat Errata Website?


Approach 5: Create a Local Repository

This approach is applicable to RHEL 5/6/7. It requires a registered server that is connected to the Red Hat repositories and runs the same major version. The connected system can use reposync to download all the rpms from a specified repository into a local directory. Then, using http, nfs, ftp, or a local directory (file://), this directory can be configured as a repository which yum can use to install packages and resolve dependencies.
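A minimal sketch of the workflow (the repository id and paths are illustrative; the reposync flags mirror those used in the comments below):

# on the connected, registered server: mirror a channel into a local directory
reposync -g -l -n -p /var/www/html/repos --repoid=rhel-7-server-rpms

# generate yum metadata for the mirrored directory
createrepo /var/www/html/repos/rhel-7-server-rpms

The resulting directory can then be exported over http, nfs, ftp, or file:// and referenced from a .repo file on the disconnected clients (an example repo file appears in the comments below).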

How to create a local mirror of the latest update for Red Hat Enterprise Linux 5, 6, 7 without using Satellite server?

Checking the security errata:

To check the security errata on a system that is not connected to the internet, download a copy of the updateinfo.xml.gz file from an identical registered system. The detailed steps are in the knowledgebase article How to update security Erratas on system which is not connected to internet?
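The knowledgebase steps boil down to attaching the errata metadata to the local mirror; a sketch with illustrative paths (on a registered system the file sits under the yum cache, e.g. /var/cache/yum/x86_64/7Server/<repoid>/):

# decompress the metadata copied from the registered system
gunzip updateinfo.xml.gz

# inject it into the local repository's metadata (modifyrepo ships with createrepo)
modifyrepo updateinfo.xml /var/www/html/repos/rhel-7-server-rpms/repodata

# the disconnected client can then query security errata
yum updateinfo list security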

Root Cause

Without a connection to RHN/RHSM the updates have to be transferred over other paths. These are often hard to implement and automate.

This solution is part of Red Hat's fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.

Comments


1 February 2012 3:56 PM Randy Zagar

"Approach 3: update with new minor release media" will not work with RHEL 6. Many packages (over 1500) in the "optional" channels simply are not present on any iso images. There is an open case , but the issue will not be addressed before RHEL 6 Update 4 (and possibly never).

22 May 2014 9:27 AM Umesh Susvirkar

I agree with "Randy Zager" "optional" packages should be available offline along with other channels which are not available in ISO's.

12 July 2016 7:20 PM Adrian Kennedy

Can Approach 5 "additional server, reposync fetching" be applied with RHEL 7 servers?

4 August 2016 8:55 AM Dejan Cugalj


Yes. You need to:
- subscribe the server to Red Hat
- synchronize repositories with the reposync utility
- allow up to 40GB of storage per major release of RHEL.

16 August 2016 10:54 PM Michael White

However, won't I need to stand up another RHEL 7 server in addition to the RHEL 6 server?

8 August 2017 7:01 PM John Castranio

Correct. When using an external server to reposync updates, you will need one system for each Major Version of RHEL that you want to sync packages from.

RHEL 7 does not have access to RHEL 6 repositories, just as RHEL 6 can't access RHEL 7 repositories.

15 January 2019 10:14 PM BRIAN KEYES

What I am looking for is instructions on the reposync install AND how to update offline clients.

Do I have to manually install Apache?

16 January 2019 10:50 PM Randy Zagar

You will need: a RH Satellite or RH Proxy server, an internal yum server, and a RHN client for each OS variant (and architecture) you intend to support "offline". E.g. supporting 6Server, 6Client, and 6Workstation for i686 and x86_64 would normally require 6 RHN clients, but only three RHN clients would be necessary for RHEL7, as there's no support for i686 architecture

Yum clients can (according to the docs) use nfs resource paths in the baseurl statement, so apache is not strictly necessary on your yum server, but most people do it that way...

Each RHN client will need: local storage to store packages downloaded via reposync (e.g. "reposync -d -g -l -n -t -p /my/storage --repoid=rhel-i686-workstation-optional-6"). You'll need to run "createrepo" on each repository that gets updated, and you'll need to create an rsync service that provides access to each clients' /my/storage volume

Your internal yum server will need a cron script to run rsync against your RHN clients so you can collect all these software channels in one spot.
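A sketch of such a cron entry (the host name, rsync module, and paths are hypothetical):

# /etc/cron.d/repo-collect -- pull mirrored channels from one RHN client's rsync service
0 2 * * * root rsync -a --delete rhnclient1.example.com::repostore/ /var/www/html/repos/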

You'll also need to create custom yum repo files for your client systems (e.g. redhat-6Workstation.repo) that will point to the correct repositories on your yum server.
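Such a repo file might look like this (the server name and paths are hypothetical):

# /etc/yum.repos.d/redhat-6Workstation.repo
[internal-6Workstation-x86_64]
name=Internal mirror of RHEL 6 Workstation (x86_64)
baseurl=http://yum.example.com/repos/6Workstation/x86_64
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release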

I'd recommend you NOT run these cron scripts during normal business hours... your sys-admins will want a stable copy so they can clone things for other offline networks.

If you're clever, you can convince one RHN client system to impersonate the different OS variants, reducing the number of systems you need to deploy.

You'll also most likely want to run "hardlink" on your yum server pretty regularly as there's lots of redundant packages across each OS variant.

[Apr 30, 2019] The end of Scientific Linux [LWN.net]

Apr 25, 2019 | lwn.net

The letter from James Amundson, Head, Fermilab Scientific Computing Division

Scientific Linux is driven by Fermilab's scientific mission and focused
on the changing needs of experimental facilities.

Fermilab is looking ahead to DUNE[1] and other future international
collaborations. One part of this is unifying our computing platform with
collaborating labs and institutions.

Toward that end, we will deploy CentOS 8 in our scientific computing
environments rather than develop Scientific Linux 8. We will
collaborate with CERN and other labs to help make CentOS an even better
platform for high-energy physics computing.

Fermilab will continue to support Scientific Linux 6 and 7 through the
remainder of their respective lifecycles. Thank you to all who have
contributed to Scientific Linux and who continue to do so.

[1] For more information on DUNE please visit https://www.dunescience.org/

James Amundson
Head, Scientific Computing Division

Office of the CIO

[Apr 01, 2019] The Seven Computational Cluster Truths

Inspired by "The seven networking truth by R. Callon, April 1, 1996
Feb 26, 2019 | www.softpanorama.org

Adapted for HPC clusters by Nikolai Bezroukov on Feb 25, 2019

Status of this Memo
This memo provides information for the HPC community. This memo does not specify a standard of any kind, except in the sense that all standards must implicitly follow the fundamental truths. Distribution of this memo is unlimited.
Abstract
This memo documents seven fundamental truths about computational clusters.
Acknowledgements
The truths described in this memo result from extensive study over an extended period of time by many people, some of whom did not intend to contribute to this work. The editor would like to thank the HPC community for helping to refine these truths.
1. Introduction
These truths apply to HPC clusters, and are not limited to TCP/IP, GPFS, the scheduler, or any other particular component of an HPC cluster.
2. The Fundamental Truths
(1) Some things in life can never be fully appreciated nor understood unless experienced firsthand. Most problems in a large computational cluster can never be fully understood by someone who has never run a cluster with more than 16, 32 or 64 nodes.

(2) Every problem or upgrade on a large cluster always takes at least twice as long to solve as it seems it should.

(3) One size never fits all, but complexity increases non-linearly with the size of the cluster. In some areas (storage, networking) the problems grow exponentially with the size of the cluster.
(3a) A supercluster is an attempt to solve multiple separate problems via a single complex solution. But its size creates another set of problems, which might outweigh the set of problems it intends to solve.

(3b) With sufficient thrust, pigs fly just fine. However, this is not necessarily a good idea.

(3c) Large, Fast, Cheap: you can't have all three.

(4) On a large cluster, issues are more interconnected with each other, and a typical failure often affects a larger number of nodes or components and takes more effort to resolve.
(4a) Superclusters prove that it is always possible to add another level of complexity to each cluster layer, especially the networking layer, until only applications that use a single node run well.

(4b) On a supercluster it is easier to move a networking problem around than it is to solve it.

(4c) You never understand how bad and buggy your favorite scheduler is until you deploy it on a supercluster.

(4d) If the solution that was put in place for a particular cluster does not work, it will always be proposed later for a new cluster under a different name...

(5) The functioning of a large computational cluster is indistinguishable from magic.
(5a) The user superstition that "the more cores, the better" is incurable, but the users' desire to run their mostly useless models on as many cores as possible can and should be resisted.

(5b) If you do not know what to do with a problem on the supercluster, you can always "wave a dead chicken", i.e. perform a ritual operation on the crashed software or hardware that most probably will be futile but is nevertheless useful to show "important others" and frustrated users that an appropriate degree of effort has been expended.

(5c) Downtime of a large computational cluster has some mysterious religious-ritual quality to it: in modest doses it increases the respect of the users toward the HPC support team. But only up to a certain limit.

(6) "The more cores the better" is a religious truth similar to the belief in Flat Earth during Middle Ages and any attempt to challenge it might lead to burning of the heretic at the stake.

(6a) The number of cores in the cluster has a religious quality and in the eyes of users and management has power almost equal to Divine Spirit. In the stage of acquisition of the hardware it outweighs all other considerations, driving towards the cluster with maximum possible number of cores within the allocated budget Attempt to resist buying for computational nodes faster CPUs with less cores are futile.

(6b) The best way to change your preferred hardware supplier is to buy a large computational cluster.

(6c) Users will always routinely abuse the facility by specifying more cores than they actually need for their runs.

(7) For all resources, whatever the size of your cluster, you always need more.

(7a) Overhead increases exponentially with the size of the cluster, until all resources of the support team are consumed by maintaining the cluster and none can be spent on helping the users.

(7b) Users will always try to run more applications and use more languages than the cluster team can meaningfully support.

(7c) The most pressure on the support team is exerted by the users whose applications are the least useful for the company and/or the most questionable from a scientific standpoint.

(7d) The level of ignorance of computer architecture among 99% of users of large computational clusters can't be overestimated.

Security Considerations

This memo raises no security issues. However, security protocols used in the HPC cluster are subject to those truths.

References

The references have been deleted in order to protect the guilty and avoid enriching the lawyers.

[Mar 06, 2019] Can not install RHEL 7 on disk with existing partitions

Notable quotes:
"... In the case of the partition tool, it is too complicated for a common user to use so don't assume that the person using it is an idiot. ..."
May 12, 2014 | access.redhat.com
Can not install RHEL 7 on disk with existing partitions. Latest response May 23 2014 at 8:23 AM. I cannot install RHEL 7 on a disk with existing partitions. Menus for old partitions are grayed out. I cannot select mount points for existing partitions. I cannot delete them and create new ones. The menus say I can, but the installer says there is a partition error. It shows me a complicated menu that warns me about what is about to happen and asks me to accept the changes. It does not seem to accept "accept" as an answer. Booting into the recovery mode on the USB drive and manually deleting all partitions is not a practical solution.

You have to assume that someone who is doing manual partitioning knows what he is doing. Provide a simple tool during install so that the drive can be fully partitioned and formatted. Do not try so hard to make tools that protect use from our own mistakes. In the case of the partition tool, it is too complicated for a common user to use so don't assume that the person using it is an idiot.

The common user does not know hardware and doesn't want to know it. He expects security questions during the install and everything else should be defaults. A competent operator will backup anything before doing a new OS install. He needs the details and doesn't need an installer that tells him he can't do what he has done for 20 years.

13 May 2014 8:40 PM PixelDrift.NET Support Community Leader

Can you give more details?

Are the partitions on the disk Linux partitions you want to re-use? eg. /home? or are they from another OS?

Is the goal to delete the partitions are re-use the space, or to mount the partitions in the new install?

Are you running RHEL 7 Beta or RHEL 7 RC?

22 May 2014 5:24 PM RogerOdle

I did manage to get it to work. It needed a BIOS partition. It seemed that I had to free one and create a new one. I do not know; I tried many things, so I am not clear what ultimately made it work.

My history with installers goes back to the 90s:

  1. text based, tedious, and you needed to be an expert
  2. GUIs made it easier; on reinstall, partition labels showed where they were mounted before (improvement)
  3. partitions got UUIDs; the GUI shows drive numbers instead of mount labels (worse)
  4. (since Fedora 18?) shows the previous Fedora install. Wants to do the new install on other partitions. Unclear how to tell the new install to use the old partitions. The new requirement for a BIOS partition is not clearly presented. When the failure to proceed is because there is no BIOS partition, no explanation is given, just a whine about there being some error but no help on how to resolve it. (much worse)

Wish list:
1. The default configuration should put /var and /home on separate partitions. If the installer sees these on separate partitions then it should provide an option to use them.
2. It should not be necessary to manually move partitions from an old installation to a new one. I do not know if I am typical, but I never put more than one OS on a hard drive (except for VMs, and those don't count). Hard drives are cheap, so I put one and only one OS per drive. When I upgrade, I reformat and replace the root partition, which cleans /etc and /usr. I also reformat and clean /var. I never keep anything that I want long-term in /var; that includes web sites (/var/www) and databases, so that I can always discard /var on an update. This prevents propagating problems from one release to the next. On my systems, each release stands on its own. I would like the partition tool to recognize the existing partition scheme and provide a simple choice to reuse it. The only questions I want to answer are whether a particular partition should be reformatted.
3. I do not have much experience with the live ISOs, so maybe they already do this. The installer is way too intimidating for newbies. I am an engineer, so I am used to complexity. The average user, and the users that Fedora needs to connect to long term, are overwhelmed by the install process. They need an installer that does not ask any technical questions at all. They just want to plug it in and turn it on, and it should just work like their TV just works. Maybe it should be an Entertainment release, since these people typically do email, web surfing, write letters, play games and not much else.

[Feb 04, 2019] Red Hat Enterprise Linux 7 8.14. Installation Destination

Notable quotes:
"... Base Environments ..."
"... Kickstart Installations ..."
Jan 30, 2019 | access.redhat.com

8.13. SOFTWARE SELECTION

To specify which packages will be installed, select Software Selection at the Installation Summary screen. The package groups are organized into Base Environments. These environments are pre-defined sets of packages with a specific purpose; for example, the Virtualization Host environment contains a set of software packages needed for running virtual machines on the system. Only one software environment can be selected at installation time. For each environment, there are additional packages available in the form of Add-ons. Add-ons are presented in the right part of the screen and the list of them is refreshed when a new environment is selected. You can select multiple add-ons for your installation environment. A horizontal line separates the list of add-ons into two areas:

Figure 8.16. Example of a Software Selection for a Server Installation

The availability of base environments and add-ons depends on the variant of the installation ISO image which you are using as the installation source. For example, the server variant provides environments designed for servers, while the workstation variant has several choices for deployment as a developer workstation, and so on. The installation program does not show which packages are contained in the available environments. To see which packages are contained in a specific environment or add-on, see the repodata/*-comps-variant.architecture.xml file on the Red Hat Enterprise Linux Installation DVD which you are using as the installation source. This file contains a structure describing available environments (marked by the <environment> tag) and add-ons (the <group> tag).

Important: The pre-defined environments and add-ons allow you to customize your system, but in a manual installation, there is no way to select individual packages to install. If you are not sure what package should be installed, Red Hat recommends you to select the Minimal Install environment. Minimal install only installs a basic version of Red Hat Enterprise Linux with only a minimal amount of additional software. This will substantially reduce the chance of the system being affected by a vulnerability. After the system finishes installing and you log in for the first time, you can use the Yum package manager to install any additional software you need. For more details on Minimal install, see the Installing the Minimum Amount of Packages Required section of the Red Hat Enterprise Linux 7 Security Guide.

Alternatively, automating the installation with a Kickstart file allows for a much higher degree of control over installed packages. You can specify environments, groups and individual packages in the %packages section of the Kickstart file. See Section 26.3.2, "Package Selection" for instructions on selecting packages to install in a Kickstart file, and Chapter 26, Kickstart Installations for general information about automating the installation with Kickstart.

Once you have selected an environment and add-ons to be installed, click Done to return to the Installation Summary screen.

8.13.1. Core Network Services

All Red Hat Enterprise Linux installations include the following network services:

Some automated processes on your Red Hat Enterprise Linux system use the email service to send reports and messages to the system administrator. By default, the email, logging, and printing services do not accept connections from other systems. You can configure your Red Hat Enterprise Linux system after installation to offer email, file sharing, logging, printing, and remote desktop access services. The SSH service is enabled by default. You can also use NFS to access files on other systems without enabling the NFS sharing service.
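As an illustration of the %packages Kickstart section mentioned above, a minimal fragment might look like this (the environment, group, and package names are only examples):

# environment (note the @^ prefix), one group, one extra package, one exclusion
%packages
@^minimal
@core
vim-enhanced
-firewalld
%end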

[Jan 30, 2019] TCP - IP communication of RHEL 7 is extremely slow compared with RHEL 6 and earlier. - Red Hat Customer Portal

Jan 30, 2019 | access.redhat.com

TCP/IP communication on RHEL 7 is extremely slow compared with RHEL 6 and earlier.
My application was moved from RHEL 5 to RHEL 7.
That application communicates over TCP/IP with another application on the same server.
That communication is about twice as slow.

Apart from that, we compared it with "ping localhost". RHEL 7 averaged 0.04 ms; RHEL 6 and RHEL 5 averaged 0.02 ms. RHEL 7 is twice as slow as RHEL 6 or earlier.

The environment is the minimum installation; we stop firewalld and postfix, then run "ping localhost".

Why is communication delayed like this? What is causing the slowdown? Is this expected behaviour?



24 January 2019 11:09 PM Jamie Bainbridge

Guesses: differences in process scheduling, memory fragmentation, other CPU workload, timing inaccuracy, incorrect test method, firewall behaviour, system performance differences, code difference like the security vulnerability mentioned above, probably much more that I have not thought of.

A good troubleshooting path forward is to identify:

  • the specific behaviour in your application which is different
  • what you expect performance to be
  • what performance measurement you are currently getting

And then look into possible causes. I would start with perf collection during an application run, and possibly strace of the application, although that can negatively affect performance too.
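Something along these lines would capture a system-wide profile during a test run (the duration and PID are placeholders):

# record call stacks system-wide for 30 seconds during the test, then inspect
perf record -a -g -- sleep 30
perf report

# trace syscalls of the application; note this slows the target down noticeably
strace -f -tt -o /tmp/app.trace -p <PID>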

There are some more questions to give ideas at Initial investigation for any performance issue .

I see you have "L3 support" through your hardware vendor, possibly you bought RHEL pre-installed on your system, so the hardware vendor's tech support would be the first place to ask. The vendor will contact us if they identify a bug in RHEL.

25 January 2019 12:18 AM R. Hinton. Community Leader

One side note: make sure you really have your DNS resolver /etc/resolv.conf set properly. The suggestions above are of course good, but if your DNS is not set properly, you'll have another round of slowness. Remember that /etc/resolv.conf is generally populated from the "DNSx" and "DOMAIN" directives found in the active /etc/sysconfig/network-scripts/ifcfg-XYZ file. You can find which interfaces are actually active by using ip -o -4 a s, which will reveal all active IPv4 interfaces with the interface name at the far left of the results.

There are instances where if you have a system that is doing a lot of actions that rely on dns, you could make a dns caching server at your location that would assist with lookups and cache relevant things for your system.

Again, the other answers above are very useful and on the spot, but if your /etc/resolv.conf is off, or not optimal, it could cause issues.

Another thing to review (and yes, it is exhaustive): the Red Hat tuning guide would be a good reference to double-check.

One method to test network bandwidth and latency performance is here.

I have not fully vetted this article where someone did some additional tuning; it would be good to validate what is in that article for legitimacy, and to make backups of any configurations before making changes.

One last thing: using the rpm iftop can give you an idea of what systems are hitting your server, or vice versa.

Regards,

RJ

25 January 2019 2:05 AM Jamie Bainbridge

and yes, it is exhaustive, the Red Hat tuning guide

For reference, the Network Performance Tuning Guide PDF is only the original publication. We have updated the knowledgebase article a couple of times since then:

I have not fully vetted this article

Using tuned is a good idea... if the tuning profile matches your use case. Users are encouraged to think of the shipped profiles as just a starting point and develop their own more customised tuning profile.

An overview of the default profiles is at: https://access.redhat.com/solutions/369093
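For reference, inspecting and switching profiles is a one-liner (which profile fits depends on your workload):

# show the active profile and the available ones
tuned-adm active
tuned-adm list

# network-latency is a shipped profile often used as a starting point for latency-sensitive TCP work
tuned-adm profile network-latency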

[Jan 30, 2019] RHEL 8.0 - Beta release available

Jan 30, 2019 | access.redhat.com

PixelDrift.NET Support Community Leader

Latest videos for RHEL 8 Beta on YouTube are nice polished quick summaries of some of the less noticed/celebrated new features: https://www.youtube.com/user/RedHatVideos/search?query=rhel+8+beta

Seems cockpit is becoming a pretty serious point of focus.

The image builder capability is pretty tidy.

Christian Labisch Community Leader, 23 January 2019 10:03 AM

Hi ! :)

I agree, Cockpit is a great tool to manage servers from a GUI. I've been using Cockpit for quite a long time and everything works smoothly without issues. The only thing that needs enhancement is the virtual machines plugin, precisely because it's intended to be the replacement for the deprecated Virtual Machine Manager. VMM provides a bunch of useful configuration options; it is the best (IMHO) GUI tool to manage virtual machines.

Regards,
Christian

Ernest Beinrohr

Is it possible to re-enable iptables on RHEL 8? We use shorewall on many machines and it won't support nftables.

Also, no GNU screen? At the least, RH could provide a tmux config and an alias to screen with the same keyboard shortcuts.

ir. Jan Gerrit Kootstra, Community Leader, 30 January 2019 10:00 AM

Regarding iptables versus nftables: those who need iptables, please open a Request for Enhancement support case.

[Jan 30, 2019] RHEL 7 RHCSA Notes (Cheat Sheets) The Geek Diary

Jan 30, 2019 | www.thegeekdiary.com

Red Hat Certified System Administrator, better known as RHCSA, is one of the well-known certification exams in the Linux world. I've tried to bring together the notes that I used in my preparation for the RHEL 7 RHCSA. Remember that these are not explanatory notes, but a quick cheat sheet. The post includes links to all exam objectives for the RHCSA exam:

Understand and use essential tools
Operate running systems
Configure local storage
Create and configure file systems
Deploy, configure, and maintain systems
Manage users and groups
Manage security

Filed Under: CentOS/RHEL 7, Linux, RHCSA notes

Some more articles you might also be interested in
  1. CentOS / RHEL : How to remove used Physical Volume(PV) from Volume Group (VG) in LVM
  2. How systemd-tmpfiles cleans up /tmp/ or /var/tmp (replacement of tmpwatch) in CentOS / RHEL 7
  3. How to Configure a script to execute during system shutdown and startup in CentOS/RHEL/Fedora
  4. Downloading a Specific Version of Package and Its Dependencies from Repository for Offline Installation Using YUM
  5. CentOS / RHEL 7 : Beginners guide to systemd
  6. CentOS / RHEL 7 : systemctl replacements of legacy commands service and chkconfig
  7. Linux OS Service 'microcode_ctl'
  8. CentOS / RHEL 7 : How to disable Transparent Huge pages (THP)
  9. Linux OS Service 'rpcgssd'
  10. How to monitor the Mounting/Umounting of Mount Points Using Auditd on CentOS/RHEL 6,7

[Jan 29, 2019] RHEL7 is a fine OS, the only thing it's missing is a really good init system.

Highly recommended!
Or in other words, a simple, reliable and clear solution (which has some faults due to its age) was replaced with a gigantic KISS violation. No engineer worth the name will ever do that. And if it needs doing, any good engineer will make damned sure to achieve maximum compatibility and a clean way back. The systemd people seem to be hell-bent on making it as hard as possible to not use their monster. That alone is a good reason to stay away from it.
Notable quotes:
"... We are systemd. Lower your memory locks and surrender your processes. We will add your calls and code distinctiveness to our own. Your functions will adapt to service us. Resistance is futile. ..."
"... I think we should call systemd the Master Control Program since it seems to like making other programs functions its own. ..."
"... RHEL7 is a fine OS, the only thing it's missing is a really good init system. ..."
Oct 14, 2018 | linux.slashdot.org

Reverend Green ( 4973045 ) , Monday December 11, 2017 @04:48AM ( #55714431 )

Re: Does systemd make ... ( Score: 5 , Funny)

Systemd is nothing but a thinly-veiled plot by Vladimir Putin and Beyonce to import illegal German Nazi immigrants over the border from Mexico who will then corner the market in kimchi and implement Sharia law!!!

Anonymous Coward , Monday December 11, 2017 @01:38AM ( #55714015 )

Re:It violates fundamental Unix principles ( Score: 4 , Funny)

The Emacs of the 2010s.

DontBeAMoran ( 4843879 ) , Monday December 11, 2017 @01:57AM ( #55714059 )
Re:It violates fundamental Unix principles ( Score: 5 , Funny)

We are systemd. Lower your memory locks and surrender your processes. We will add your calls and code distinctiveness to our own. Your functions will adapt to service us. Resistance is futile.

serviscope_minor ( 664417 ) , Monday December 11, 2017 @04:47AM ( #55714427 ) Journal
Re:It violates fundamental Unix principles ( Score: 4 , Insightful)

I think we should call systemd the Master Control Program, since it seems to like making other programs' functions its own.

Anonymous Coward , Monday December 11, 2017 @01:47AM ( #55714035 )
Don't go hating on systemd ( Score: 5 , Funny)

RHEL7 is a fine OS, the only thing it's missing is a really good init system.

[Jan 26, 2019] Systemd developers don't want to replace the kernel; they are more than happy to leverage Linus's good work on what they see as a collection of device drivers

Jan 26, 2019 | blog.erratasec.com

John Morris said...

They don't want to replace the kernel; they are more than happy to leverage Linus's good work on what they see as a collection of device drivers. No, they want to replace the GNU/X in the traditional Linux/GNU/X arrangement. All of the command line tools, up to and including bash, are to go, replaced with the more Windows-like tools most of the systemd developers grew up on, while X and the desktop environments all get rubbished for Wayland and GNOME 3.

And I would wish them luck, the world could use more diversity in operating systems. So long as they stayed the hell over at RedHat and did their grand experiment and I could still find a Linux/GNU/X distribution to run. But they had to be borg and insist that all must bend the knee and to that I say HELL NO!

[Jan 26, 2019] The coming enhancement to systemd

Jan 26, 2019 | blog.erratasec.com

Siegfried Kiermayer said...

I'm waiting for PulseAudio to be included in systemd so we can have a proper boot sound :D

[Jan 25, 2019] Some systemd problems that arise in a reasonably complex datacenter environment

May 10, 2018 | theregister.co.uk
Thursday 10th May 2018 16:34 GMT Nate Amsden

as a Linux user for 22 years

(20 of which on Debian, before that was Slackware)

I am new to systemd, maybe 3 or 4 months now tops on Ubuntu, and a tiny bit on Debian before that.

I was confident I was going to hate systemd before I used it just based on the comments I had read over the years, I postponed using it as long as I could. Took just a few minutes of using it to confirm my thoughts. Now to be clear, if I didn't have to mess with the systemd to do stuff then I really wouldn't care since I don't interact with it (which is the case on my laptop at least though laptop doesn't have systemd anyway). I manage about 1,000 systems running Ubuntu for work, so I have to mess with systemd, and init etc there.

If systemd would just do ONE thing I think it would remove all of the pain that it has inflicted on me over the past several months and I could learn to accept it.

That one thing is, if there is an init script, RUN IT. Not run it like systemd does now. But turn off ALL intelligence systemd has when it finds that script and run it. Don't put it on any special timers, don't try to detect if it is running already, or stopped already or whatever, fire the script up in blocking mode and wait till it exits.

My first experience with systemd was on one of my home servers; I re-installed Debian on it last year, rebuilt the hardware, etc., and with it came systemd. I believe there is a way to turn systemd off but I haven't tried that yet. The first experience was with bind. I have a slightly custom init script (from a previous Debian) that I have been using for many years. I copied it to the new system and tried to start bind. Nothing. I looked in the logs and it seems that it was trying to interface with rndc (an internal bind thing) for some reason, and because rndc was not working (I never used it, so I never bothered to configure it) systemd wouldn't launch bind. So I fixed rndc and systemd would now launch bind, only to stop it within 1 second of launching. My first workaround was just to launch bind by hand at the CLI (no init script) and leave it running for a few months. Then I had a discussion with a co-worker who likes systemd, and he explained that making a custom unit file with the Type=forking option might fix it. That did fix the issue.
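A unit file along the lines the co-worker suggested might look like this (the paths are guesses for a Debian system, not the poster's actual file):

# /etc/systemd/system/bind-legacy.service
[Unit]
Description=BIND started via the legacy init script
After=network.target

[Service]
Type=forking
ExecStart=/etc/init.d/bind9 start
ExecStop=/etc/init.d/bind9 stop

[Install]
WantedBy=multi-user.target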

Next issue came up when dealing with MySQL clusters. I had to initialize the cluster with the "service mysql bootstrap-pxc" command (using the start command on the first cluster member is a bad thing). Run that with systemd, and systemd runs it fine. But go to STOP the service, and systemd thinks the service is not running so doesn't even TRY to stop the service(the service is running). My workaround for my automation for mysql clusters at this point is to just use mysqladmin to shut the mysql instances down. Maybe newer mysql versions have better systemd support though a co-worker who is our DBA and has used mysql for many years says even the new Maria DB builds don't work well with systemd. I am working with Mysql 5.6 which is of course much much older.

Next issue came up with running init scripts that have the same words in them, in the case of most recently I upgraded systems to systemd that run OSSEC. OSSEC has two init scripts for us on the server side (ossec and ossec-auth). Systemd refuses to run ossec-auth because it thinks there is a conflict with the ossec service. I had the same problem with multiple varnish instances running on the same system (varnish instances were named varnish-XXX and varnish-YYY). In the varnish case using custom unit files I got systemd to the point where it would start the service but it still refuses to "enable" the service because of the name conflict (I even changed the name but then systemd was looking at the name of the binary being called in the unit file and said there is a conflict there).

fucking a. Systemd shut up, just run the damn script. It's not hard.

Later a co-worker explained the "systemd way" for handling something like multiple varnish instances on the system but I'm not doing that, in the meantime I just let chef start the services when it runs after the system boots(which means they start maybe 1 or 2 mins after bootup).

Another thing bit us with systemd recently as well again going back to bind. Someone on the team upgraded our DNS systems to systemd and the startup parameters for bind were not preserved because systemd ignores the /etc/default/bind file. As a result we had tons of DNS failures when bind was trying to reach out to IPv6 name servers(ugh), when there is no IPv6 connectivity in the network (the solution is to start bind with a -4 option).

I believe I have also caught systemd trying to mess with file systems(iscsi mount points). I have lots of automation around moving data volumes on the SAN between servers and attaching them via software iSCSI directly to the VMs themselves(before vsphere 4.0 I attached them via fibre channel to the hypervisor but a feature in 4.0 broke that for me). I noticed on at least one occasion when I removed the file systems from a system that SOMETHING (I assume systemd) mounted them again, and it was very confusing to see file systems mounted again for block devices that DID NOT EXIST on the server at the time. I worked around THAT one I believe with the "noauto" option in fstab again. I had to put a lot of extra logic in my automation scripts to work around systemd stuff.

I'm sure I've only scratched the surface of systemd pain. I'm sure it provides good value to some people, I hear it's good with containers (I have been running LXC containers for years now, I see nothing with systemd that changes that experience so far).

But if systemd would just do this one thing and go into dumb mode with init scripts I would be quite happy.

[Jan 25, 2019] SystemD vs Solaris 10 SMF

"Shadow files" approach of Solaris 10, where additional functions of init are controlled by XML script that exist in a separate directory with the same names as init scripts can be improved but architecturally it is much cleaner then systemd approach.
Notable quotes:
"... Solaris has a similar parallellised startup system, with some similar problems, but it didn't need pid 1. ..."
"... Agreed, Solaris svcadm and svcs etc are an example of how it should be done. A layered approach maintaining what was already there, while adding functionality for management purposes. Keeps all the old text based log files and uses xml scripts (human readable and editable) for higher level functions. ..."
"... AFAICT everyone followed RedHat because they also dominate Gnome, and chose to make Gnome depend on systemd. Thus if one had any aspirations for your distro supporting Gnome in any way, you have to have systemd underneath it all. ..."
Jan 25, 2019 | theregister.co.uk

Re: Poettering still doesn't get it... Pid 1 is for people wearing big boy pants.

SystemD is corporate money (Redhat support dollars) triumphing over the long hairs sadly. Enough money can buy a shitload of code and you can overwhelm the hippies with hairball dependencies (the key moment was udev being dependent on systemd) and soon get as much FOSS as possible dependent on the Linux kernel.

This has always been the end game as Red Hat makes its bones on Linux specifically not on FOSS in general (that say runs on Solaris or HP-UX).

The tighter they can glue the FOSS ecosystem and the Linux kernel together, a la Windows-lite style, the better for their bottom line. Poettering is just being a good employee, asshat extraordinaire that he is.

Re: Ahhh SystemD

I honestly would love someone to lay out the problems it solves.

Solaris has a similar parallellised startup system, with some similar problems, but it didn't need pid 1.

Re: Ahhh SystemD

Agreed, Solaris svcadm and svcs etc are an example of how it should be done. A layered approach maintaining what was already there, while adding functionality for management purposes. Keeps all the old text based log files and uses xml scripts (human readable and editable) for higher level functions.

Afaics, systemd is a power grab by Red Hat and an ego trip for its primary developer.

Dumped bloatware Linux in favour of FreeBSD and others after Suse 11.4, though that was bad enough with Gnome 3...

starbase7, Thursday 10th May 2018 04:36 GMT

SMF?

As an older timer (on my way but not there yet), I never cared for the init.d startup and I dislike the systemd monolithic architecture.

What I do like is Solaris SMF, and I wish Linux had adopted a method such as that or similar to it. I still think SMF was/is a great compromise between the init.d method and the systemd manner.

I used SMF professionally, but now I have moved on to Linux professionally, as Solaris is, well, dead. I only get to enjoy SMF on my home systems, and I savor it. I have been trying to like Linux all these years, but this systemd thing is a real big roadblock for me to get enthusiastic.

I have a hard time understanding why all the other Linux distros joined hands with Redhat and implemented that thing, systemd. Sigh.

Anonymous Coward, Thursday 10th May 2018 04:53 GMT

Re: SMF?

You're not alone in liking SMF and Solaris.

AFAICT everyone followed RedHat because they also dominate Gnome, and chose to make Gnome depend on systemd. Thus if one had any aspirations for your distro supporting Gnome in any way, you have to have systemd underneath it all.

RedHat seem to call the shots these days as to what a Linux distro has. I personally have mixed opinions on this; I think the vast anarchy of Linux is a bad thing for Linux adoption ("this is the year of the Linux desktop" don't make me laugh), and Linux would benefit from a significant culling of the vast number of distros out there. However if that did happen and all that was left was something controlled by RedHat, that would be a bad situation.

Steve Davies 3, Thursday 10th May 2018 07:30 GMT

Re: SMF?
Remember who 'owns' SMF... namely Oracle. They may well have made it impossible for anyone to adopt. That stance is not unknown now, is it...?

As for systemd, I have gritted my teeth and learned to tolerate it. I'll never be as comfortable with it as I was with the old init system, but I did start running into issues with it, especially with shutdown syncing on some complex systems.

Still not sure if systemd is the right way forward even after four years.

Daggerchild, Thursday 10th May 2018 14:30 GMT

Re: SMF?
SMF should be good, and yet they released it before they'd documented it. Strange priorities...

And XML is *not* a config file format you should let humans at. Finding out the correct order to put the XML elements in to avoid unexplained "parse error", was *not* a fun game.

And someone correct me, but it looks like there are SMF properties of a running service that can only be modified/added by editing the file, reloading *and* restarting the service. A metadata and state/dependency tracking system shouldn't require you to shut down the priority service it's meant to be ensuring... Again, strange priorities...

Friday 11th May 2018 07:55 GMT onefang

Re: SMF?

"XML is *not* a config file format you should let humans at"

XML is a format you shouldn't let computers at; it was designed to be human readable and writable. It fails totally.

Friday 6th July 2018 12:27 GMT Hans 1

Re: SMF?

"Finding out the correct order to put the XML elements in to avoid unexplained "parse error", was *not* a fun game."

Hm, you do know the grammar is in a DTD? Yes, XML takes time to learn, but it is very powerful once mastered.

Thursday 10th May 2018 13:24 GMT CrazyOldCatMan

Re: SMF?

"I have a hard time understanding why all the other Linux distros joined hands with Redhat and implemented that thing, systemd"

Several reasons:

A lot of other distros use Redhat (or Fedora) as their base and then customise it.

A lot of other distros include things dependent on systemd (Gnome being the one with the biggest dependencies - you can just about get it to run without systemd, but it's a pain and every update will break your fixes).

Redhat has a lot of clout.


[Jan 21, 2019] Low Cost Academic Solutions Now Available From Red Hat

Dec 03, 2003 | www.redhat.com

Students, faculty, and staff members of qualified institutions can access Red Hat Academic solutions as either individual subscriptions or through a site program. Two types of subscriptions are available:

Academic institutions may wish to enable an entire student body, group of departments, or entire system of schools with a Site Subscription. A Basic Package, priced at $2500 per year, includes unlimited service subscriptions to Red Hat Enterprise Linux WS Academic Edition for all systems personally owned by students, faculty and staff. This package also includes a Red Hat Network Proxy Server Academic Edition and Red Hat Network management entitlements to enable institutions to simplify and standardize their support of all systems. Both of the academic editions are based on Red Hat Enterprise Linux technology, offering the same enterprise-class stability, maintenance, and performance as Red Hat Enterprise Linux. These academic solutions are available immediately in the United States and will roll out in other regional markets shortly.

For educational infrastructures requiring support, Red Hat recommends Red Hat Enterprise Linux Standard and Premium editions which include bundled support to meet the needs of the most demanding production environments. For advanced technical curriculum, the Red Hat-sponsored Fedora Project offers a flexible base of technology that can be accessed without any cost for courses that focus on open source development methodology and Linux technology evolution.

For more information, please call 866-2-REDHAT ext. 45787 or visit /solutions/industry/education/ .

[Jan 08, 2019] Bind DNS threw a (network unreachable) error CentOS

Jan 08, 2019 | www.reddit.com

submitted 11 days ago by mr-bope

Bind 9 on my CentOS 7.6 machine threw this error:
error (network unreachable) resolving './DNSKEY/IN': 2001:7fe::53#53
error (network unreachable) resolving './NS/IN': 2001:7fe::53#53
error (network unreachable) resolving './DNSKEY/IN': 2001:500:a8::e#53
error (network unreachable) resolving './NS/IN': 2001:500:a8::e#53
error (FORMERR) resolving './NS/IN': 198.97.190.53#53
error (network unreachable) resolving './DNSKEY/IN': 2001:dc3::35#53
error (network unreachable) resolving './NS/IN': 2001:dc3::35#53
error (network unreachable) resolving './DNSKEY/IN': 2001:500:2d::d#53
error (network unreachable) resolving './NS/IN': 2001:500:2d::d#53
managed-keys-zone: Unable to fetch DNSKEY set '.': failure

What does it mean? Can it be fixed?

And is it at all related to DNSSEC? Because I cannot seem to get that working whatsoever.

cryan7755, 11 days ago

Looks like failure to reach IPv6-addressed NS servers. If you don't utilize IPv6 on your network then this should be expected.
knobbysideup, 11 days ago

Can be dealt with by adding
#/etc/sysconfig/named
OPTIONS="-4"

[Dec 16, 2018] Red Hat Enterprise Linux 7.6 Released

Dec 16, 2018 | linux.slashdot.org

ArchieBunker ( 132337 ) , Tuesday October 30, 2018 @07:00PM ( #57565233 ) Homepage

New features include ( Score: 5 , Funny)

All of /etc has been moved to a flat binary database now called REGISTRY.DAT

A new configuration tool known as regeditor authored by Poettering himself (accidental deletion of /home only happens in rare occurrences)

In kernel naughty words filter

systemd now includes a virtual userland previously known as busybox

[Dec 11, 2018] John Taylor Gatto's book, The Underground History of American Education, lays out the sad fact of western "education", which has nothing to do with education but rather is an indoctrination for inclusion in society as a passive participant. Docility is paramount in members of U.S. society so as to maintain the status quo

Highly recommended!
Creation of docility is what neoliberal education is about. It creates too-specialized slots, as if people can't learn something new. Look at the requirements for jobs at Monster or elsewhere: they are so specific that only people with exactly the same previous job experience can apply. Especially outrageous are the requirements posted by recruiting firms. There is something really Orwellian in them. They put people into medieval "slots" from which it is difficult to escape.
I saw recently the following requirements for a sysadmin job: "Working knowledge of: Perl, JavaScript, PowerShell, BASH Script, XML, NodeJS, Python, Git, Cloud Technologies: (AWS, Azure, GCP), Microsoft Active Directory, LDAP, SQL Server, Structured Query Language (SQL), HTML, Windows OS, RedHat(Linux), SaltStack, Some experience in Application Quality Testing."
When I see such a job posting I think that it is just a cover for an H1B hire: there is no person on the planet who has "working knowledge" of all those (mostly pretty complex) technologies. It is clearly designed to block potential candidates from applying.
Neoliberalism looks like a cancer on society, unable to provide meaningful employment for people. Or at least it looks surprisingly close to one: a malignant growth.
Dec 11, 2018 | www.ianwelsh.net

[Dec 11, 2018] Software "upgrades" require workers to constantly relearn the same task because some young "genius" observed that a carefully thought out interface "looked tired" and glitzed it up.

Dec 11, 2018 | www.ianwelsh.net

S Brennan permalink April 24, 2016

My grandfather, in the early 60's, could board a 707 in New York and arrive in LA in far less time than I can today. And no, I am not counting 4-hour layovers with the long waits to be "screened"; the jets were 50-70 knots faster. Back then your time was worth more, today less.

Not counting longer hours AT WORK, we spend far more time commuting, making for much longer work days. Back then your time was worth more, today less!

Software "upgrades" require workers to constantly relearn the same task because some young "genius" observed that a carefully thought out interface "looked tired" and glitzed it up. Think about the almost perfect Google Maps driver interface being redesigned by people who take private buses to work. Way back in the '90s your time was worth more than today!

Life is all the "time" YOU will ever have and if we let the elite do so, they will suck every bit of it out of you.

[Nov 22, 2018] Sorry, Linux. Kubernetes is now the OS that matters InfoWorld

That's very primitive thinking. If RHEL is royally screwed, as is the case with RHEL 7, that affects Kubernetes -- it does not exist outside the OS
Nov 22, 2018 | www.infoworld.com
We now live in a Kubernetes world

Perhaps Redmonk analyst Stephen O'Grady said it best : "If there was any question in the wake of IBM's $34 billion acquisition of Red Hat and its Kubernetes-based OpenShift offering that it's Kubernetes's world and we're all just living in it, those [questions] should be over." There has been nearly $60 billion in open source M&A in 2018, but most of it revolves around Kubernetes.

Red Hat, for its part, has long been (rightly) labeled the enterprise Linux standard, but IBM didn't pay for Red Hat Enterprise Linux. Not really.

[Nov 21, 2018] Red Hat Enterprise Linux 8 Hits Beta With Integrated Container Features

Nov 21, 2018 | www.eweek.com

Among the biggest changes in the last four years across the compute landscape has been the emergence of containers and microservices as being a primary paradigm for application deployment. In RHEL 8, Red Hat is including multiple container tools that it has been developing and proving out in the open-source community, including Buildah (container building), Podman (running containers) and Skopeo (sharing/finding containers).

Systems management is also getting a boost in RHEL 8 with the Composer features that enable organizations to build and deploy custom RHEL images. Management of RHEL is further enhanced via the new Red Hat Enterprise Linux Web Console, which enables administrators to manage bare metal, virtual, local and remote Linux servers.

[Nov 18, 2018] Systemd killing screen and tmux

Nov 18, 2018 | theregister.co.uk

fobobob , Thursday 10th May 2018 18:00 GMT

Might just be a Debian thing as I haven't looked into it, but I have enough suspicion towards systemd that I find it worth mentioning. Until fairly recently (in terms of Debian releases), the default configuration was to murder a user's processes when they log out. This includes things such as screen and tmux, and I seem to recall it also murdering disowned and NOHUPed processes as well.
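For reference, that behaviour is controlled by logind; a sketch of the opt-out (stock file location; the user names in the exemption line are illustrative):

# /etc/systemd/logind.conf
[Login]
KillUserProcesses=no
# or keep the kill behaviour but exempt specific accounts:
# KillExcludeUsers=root alice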
Tim99 , Thursday 10th May 2018 06:26 GMT
How can we make money?

A dilemma for a Really Enterprise Dependant Huge Applications Technology company - The technology they provide is open, so almost anyone could supply and support it. To continue growing, and maintain a healthy profit they could consider locking their existing customer base in; but they need to stop other suppliers moving in, who might offer a better and cheaper alternative, so they would like more control of the whole ecosystem. The scene: An imaginary high-level meeting somewhere - The agenda: Let's turn Linux into Windows - That makes a lot of money:-

Q: Windows is a monopoly, so how are we going to monopolise something that is free and open, because we will have to supply source code for anything that will do that? A: We make it convoluted and obtuse, then we will be the only people with the resources to offer it commercially; and to make certain, we keep changing it with dependencies to "our" stuff everywhere - Like Microsoft did with the Registry.

Q: How are we going to sell that idea? A: Well, we could create a problem and solve it - The script kiddies who like this stuff, keep fiddling with things and rebooting all of the time. They don't appear to understand the existing systems - Sell the idea they do not need to know why *NIX actually works.

Q: *NIX is designed to be dependable and to go for long periods without rebooting. How do we get around that? A: That is not the point, the kids don't know that; we can sell them the idea that a minute or two saved every time that they reboot is worth it, because they reboot lots of times in every session - They are mostly running single user laptops, and not big multi-user systems, so they might think that that is important - If there is somebody who realises that this is trivial, we sell them the idea of creating and destroying containers or stopping and starting VMs.

Q: OK, you have sold the concept, how are we going to make it happen? A: Well, you know that we contribute quite a lot to "open" stuff. Let's employ someone with a reputation for producing fragile, barely functioning stuff for desktop systems, and tell them that we need a "fast and agile" approach to create "more advanced" desktop style systems - They would lead a team that will spread this everywhere. I think I know someone who can do it - We can have almost all of the enterprise market.

Q: What about the other large players, surely they can foil our plan? A: No, they won't want to, they are all big companies and can see the benefit of keeping newer, efficient competitors out of the market. Some of them sell equipment and system-wide consulting, so they might just use our stuff with a suitable discount/mark-up structure anyway.

ds6 , 6 months
Re: How can we make money?

This is scarily possible and undeserving of the troll icon.

Harkens easily to non-critical software developers intentionally putting undocumented, buggy code into production systems, forcing the company to keep the guy on payroll to keep the wreck chugging along.

DougS , Thursday 10th May 2018 07:30 GMT
Init did need fixing

But replacing it with systemd is akin to "fixing" the restrictions of travel by bicycle (limited speed and range, ending up sweaty at your destination, dangerous in heavy traffic) by replacing it with an Apache helicopter gunship that has a whole new set of restrictions (need for expensive fuel, noisy and pisses off the neighbors, need a crew of trained mechanics to keep it running, local army base might see you as a threat and shoot missiles at you)

Too bad we didn't get the equivalent of a bicycle with an electric motor, or perhaps a moped.

-tim , Thursday 10th May 2018 07:33 GMT
Those who do not understand Unix are condemned to reinvent it, poorly.

"It sounds super basic, but actually it is much more complex than people think," Poettering said. "Because Systemd knows which service a process belongs to, it can shut down that process."

Poettering and Red Hat,

Please learn about "Process Groups"

Init has had the groundwork for most of the missing features since the early 1980s. For example, the "id" field in /etc/inittab was intended for a "makefile"-like syntax to fix most of these problems, but it was dropped in the early days of System V because it wasn't needed.
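
For readers who never touched it, classic /etc/inittab entries look like this, with the "id" field -tim mentions as the first colon-separated field (illustrative lines only; "mydaemon" is a placeholder):

    # format: id:runlevels:action:process
    id:3:initdefault:
    # run the system initialisation script once at boot
    si::sysinit:/etc/rc.d/rc.sysinit
    # respawn: restart the placeholder daemon whenever it dies
    d1:35:respawn:/usr/sbin/mydaemon -f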

Herby , Thursday 10th May 2018 07:42 GMT
Process 1 IS complicated.

That is the main problem. With different processes you get different results. For all its faults, SysV init and RC scripts were understandable to some extent. My (cursory) understanding of systemd is that it appears more complicated to UNDERSTAND than the init stuff.

The init scripts are nice text scripts which are executed by a nice, well documented shell (bash mostly). Systemd has all sorts of blobs that somehow do things and are totally confusing to me. It suffers from "anti-KISS".

Perhaps a nice book could be written WITH example to show what is going on.

Now let's see does audio come before or after networking (or at the same time)?

Chronos , Thursday 10th May 2018 09:12 GMT
Logging

If they removed logging from the systemd core and went back to good ol' plaintext syslog[-ng], I'd have very little bad to say about Lennart's monolithic pet project. Indeed, I much prefer writing unit files to buggering about getting rcorder right in the old SysV init.

Now, if someone wanted to nuke pulseaudio from orbit and do multiplexing in the kernel a la FreeBSD, I'll chip in with a contribution to the warhead fund. Needing a userland daemon just to pipe audio to a device is most certainly a solution in search of a problem.
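
To Chronos's point about unit files being easier to write than getting rcorder right: a minimal service unit is genuinely short. A sketch ("mydaemon" is a placeholder, not a real service):

    # /etc/systemd/system/mydaemon.service
    [Unit]
    Description=Example daemon
    After=network.target

    [Service]
    ExecStart=/usr/sbin/mydaemon --foreground
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Whether that brevity is worth the rest of the systemd payload is, of course, the whole argument of this thread.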

Tinslave_the_Barelegged , Thursday 10th May 2018 11:29 GMT
Re: Logging

> If they removed logging from the systemd core

And time syncing

And name resolution

And disk mounting

And logging in

...and...

[Nov 18, 2018] From now on, I will call Systemd-based Linux distros "SNU Linux". Because Systemd's Not Unix-like.

Nov 18, 2018 | theregister.co.uk

tekHedd , Thursday 10th May 2018 15:28 GMT

Not UNIX-like? SNU!

From now on, I will call Systemd-based Linux distros "SNU Linux". Because Systemd's Not Unix-like.

It's not clever, but it's the future. From now on, all major distributions will be called SNU Linux. You can still freely choose to use a non-SNU linux distro, but if you want to use any of the "normal" ones, you will have to call it "SNU" whether you like it or not. It's for your own good. You'll thank me later.

[Nov 18, 2018] So in all reality, systemd is an answer to a problem that nobody who is administering servers ever had.

Nov 18, 2018 | theregister.co.uk

jake , Thursday 10th May 2018 20:23 GMT

Re: Bah!

Nice rant. Kinda.

However, I don't recall any major agreement that init needed fixing. Between BSD and SysV inits, probably 99.999% of all use cases were covered. In the 1 in 100,000 use case, a little bit of C (stand alone code, or patching init itself) covered the special case. In the case of Slackware's SysV/BSD amalgam, I suspect it was more like one in ten million.

So in all reality, systemd is an answer to a problem that nobody had. There was no reason for it in the first place. There still isn't a reason for it ... especially not in the 999,999 places out of 1,000,000 where it is being used. Throw in the fact that it's sticking its tentacles[0] into places where nobody in their right mind would expect an init as a dependency (disk partitioning software? WTF??), and can you understand why us "old guard" might question the sanity of people singing its praises?

[0] My spall chucker insists that the word should be "testicles". Tempting ...

[Nov 18, 2018] You love Systemd -- you just don't know it yet, wink Red Hat bods

Nov 18, 2018 | theregister.co.uk

sisk , Thursday 10th May 2018 21:17 GMT

It's a pretty polarizing debate: either you see Systemd as a modern, clean, and coherent management toolkit

Very, very few Linux users see it that way.

or an unnecessary burden running roughshod over the engineering maxim: if it ain't broke, don't fix it.

Seen as such by 90% of Linux users because it demonstrably is.

Truthfully Systemd is flawed at a deeply fundamental level. While there are a very few things it can do that init couldn't - the killing off of processes owned by a service, mentioned as an example in this article, is handled just fine by a well-written init script - the tradeoffs just aren't worth it. For example: fscking BINARY LOGS. Even if all of Systemd's numerous other problems were fixed, that one would keep it forever on my list of things to avoid if at all possible, and the fact that the Systemd team thought it a good idea to make the logs binary shows some very troubling flaws in their thinking at a very fundamental level.

Dazed and Confused , Thursday 10th May 2018 21:43 GMT
Re: fscking BINARY LOGS.

And config too

When it comes to logs and config files, if you can't grep it then it doesn't belong on Linux/Unix

Nate Amsden , Thursday 10th May 2018 23:51 GMT
Re: fscking BINARY LOGS.

WRT grep and logs I'm the same way, which is why I hate json so much. My saying has been along the lines of "if it's not friends with grep/sed then it's not friends with me". I have whipped up some wacky sed stuff to generate a tiny bit of json to read into chef for provisioning systems though.

XML is similar, though I like XML a lot more; at least the closing tags are a lot easier to follow than trying to count the nested braces in json.

I haven't had the displeasure much of dealing with the systemd binary logs yet myself.

Tomato42 , Saturday 12th May 2018 08:26 GMT
Re: fscking BINARY LOGS.

> I haven't had the displeasure much of dealing with the systemd binary logs yet myself.

"I have no clue what I'm talking about or what's a robust solution but dear god, that won't stop me!" – why is it that all the people complaining about journald sound like that?

systemd works just fine with regular syslog-ng, without journald (that's the thing that has binary logs) in sight

HieronymusBloggs , Saturday 12th May 2018 18:17 GMT
Re: fscking BINARY LOGS.

"systemd works just fine with regular syslog-ng, without journald (that's the thing that has binary logs) in sight"

Journald can't be switched off, only redirected to /dev/null. It still generates binary log data (which has caused me at least one system hang due to the absurd amount of data it was generating on a system that was otherwise functioning correctly) and consumes system resources. That isn't my idea of "works just fine".

""I have no clue what I'm talking about or what's a robust solution but dear god, that won't stop me!" – why is it that all the people complaining about journald sound like that?"

Nice straw man. Most of the complaints I've seen have been from experienced people who do know what they're talking about.
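
For completeness: journald cannot be removed, but it can be told to keep nothing on disk and hand everything to a classic syslog daemon. A sketch of /etc/systemd/journald.conf under that approach (option behaviour varies somewhat across systemd versions):

    [Journal]
    Storage=volatile        # journal lives in RAM only; Storage=none discards it entirely
    RuntimeMaxUse=20M       # cap the in-RAM journal so it cannot balloon
    ForwardToSyslog=yes     # hand each message to rsyslog/syslog-ng for plain-text logs

Whether that counts as journald being "switched off" is exactly what the exchange above disputes.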

sisk , Tuesday 15th May 2018 20:22 GMT
Re: fscking BINARY LOGS.

"I have no clue what I'm talking about or what's a robust solution but dear god, that won't stop me!" – why is it that all the people complaining about journald sound like that?

I have had the displeasure of dealing with journald and it is every bit as bad as everyone says and worse.

systemd works just fine with regular syslog-ng, without journald (that's the thing that has binary logs) in sight

Yeah, I've tried that. It caused problems. It wasn't a viable option.

Anonymous Coward , Thursday 10th May 2018 22:30 GMT
Parking US$5bn in Red Hat for a few months will fix this...

So it's now been 4 years since they first tried to force that shoddy desktop init system into our servers? And yet they still feel compelled to tell everyone, look, it really isn't that terrible. That should tell you something. Unless you are tone deaf like Red Hat. Surprised people didn't start walking out when Poettering outlined his plans for the next round of systemD power grabs...

Anyway the only way this farce will end is with shareholder activism. Some hedge fund to buy 10-15 percent of Red Hat (about the amount you need to make life difficult for management) and force them to sack that "stable genius" Poettering. So market cap is 30bn today. Anyone with 5bn spare to park for a few months wanna step forward and do some good?

cjcox , Thursday 10th May 2018 22:33 GMT
He's a pain

Early on I warned that he was trying to solve a very large problem space. He insisted he could do it with his 10 or so "correct" ways of doing things, which quickly became 20, then 30, then 50, then 90, etc., etc. I asked for some of the features we had in init; he said "no valid use case". Then, much later (years?), he implemented it (no use case provided, btw).

Interesting fellow. Very bitter. And not a good listener. But you don't need to listen when you're always right.

Daggerchild , Friday 11th May 2018 08:27 GMT
Spherical wheel is superior.

@T42

Now, you see, you just summed up the whole problem. Like systemd's author, you think you know better than the admin how to run his machine, without knowing, or caring to ask, what he's trying to achieve. Nobody ever runs a computer just to achieve running systemd, do they?

Tomato42 , Saturday 12th May 2018 09:05 GMT
Re: Spherical wheel is superior.

I don't claim I know better, but I do know that I never saw a non-distribution provided init script that correctly handled the basic corner cases – service already running, run file left over but process dead, service restart – let alone the more obscure ones, like application double forking when it shouldn't (even when that was the failure mode of the application the script was provided with). So maybe, just maybe, you haven't experienced everything there is to experience, so your opinion is subjective?

Yes, the sides of the discussion should talk more, but this applies to both sides. "La, la, la, sysv is working fine on my machine, thankyouverymuch" is not what you can call "participating in discussion". Nor is quoting well-known and long-discussed (and disproven) points (and then downvoting people into oblivion for daring to point these things out).

now in the real world, people that have to deal with init systems on a daily basis, as distribution maintainers, have by and large chosen to switch their distributions to systemd, so the whole situation I can sum up one way:

"the dogs may bark, but the caravan moves on"

Kabukiwookie , Monday 14th May 2018 00:14 GMT
Re: Spherical wheel is superior.

I do know that I never saw a non-distribution provided init script that correctly handled the basic corner cases – service already running

This only shows that you don't have much real life experience managing lots of hosts.

like application double forking when it shouldn't

If this is a problem in the init script, this should be fixed in the init script. If this is a problem in the application itself, it should be fixed in the application, not worked around by the init mechanism. If you're suggesting the latter, you should not be touching any production box.

"La, la, la, sysv is working fine on my machine, thankyouverymuch" is not what you can call "participating in discussion".

Shoving systemd down people's throats as a solution to a non-existent problem is not a discussion either; it is the very definition of 'my way or the highway' thinking.

now in the real world, people that have to deal with init systems on a daily basis

Indeed, and having a bunch of sub-par developers focused on the 'year of the Linux desktop' decide what the best way is for admins to manage their enterprise environment is not helping.

"the dogs may bark, but the caravan moves on"

Indeed. It's your way or the highway; I thought you were just complaining about the people complaining about systemd not wanting to have a discussion, while all the while it's systemd proponents ignoring and dismissing very valid complaints.

Daggerchild , Monday 14th May 2018 14:10 GMT
Re: Spherical wheel is superior.

"I never saw ... run file left-over but process dead, service restart ..."

Seriously? I wrote one last week! You use an OS atomic lock on the pidfile and exec the service if the lock succeeded. The lock dies with the process. It's a very small shellscript.
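
A minimal sketch of the pattern Daggerchild describes, using flock(1) from util-linux (paths and the daemon name are illustrative):

    #!/bin/sh
    # start-foo.sh -- start foo only if no live process holds the lock.
    # -n: fail immediately instead of blocking if the lock is already held.
    # The kernel releases the lock when the process exits, so a stale lock
    # file left behind by a crash does not block the next start.
    exec flock -n /run/foo.lock /usr/sbin/foo --foreground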

I shot a systemd controlled service. Systemd put it into error state and wouldn't restart it unless I used the right runes. That is functionally identical to the thing you just complained about.

"application double forking when it shouldn't"

I'm going to have to guess what that means, and then point you at DJB's daemontools. You leave a FD open in the child. They can fork all they like. You'll still track when the last dies as the FD will cause an event on final close.

"So maybe, just maybe, you haven't experienced everything there is to experience"

You realise that's the conspiracy theorist argument "You don't know everything, therefore I am right". Doubt is never proof of anything.

"La, la, la, sysv is working fine" is not what you can call "participating in discussion".

Well, no.. it's called evidence. Evidence that things are already working fine, thanks. Evidence that the need for discussion has not been displayed. Would you like a discussion about the Earth being flat? Why not? Are you refusing to engage in a constructive discussion? How obstructive!

"now in the real world..."

In the *real* world people run Windows and Android, so you may want to rethink the "we outnumber you, so we must be right" angle. You're claiming an awful lot of highground you don't seem to actually know your way around, while trying to wield arguments you don't want to face yourself...

"(and then downvoting people into oblivion for daring to point this things out)"

It's not some denialist conspiracy to suppress your "daring" Truth - you genuinely deserve those downvotes.

Anonymous Coward , Friday 11th May 2018 17:27 GMT
I have no idea how or why systemd ended up on servers. Laptops I can see the appeal for "this is the year of the linux desktop" - for when you want your rebooted machine to just be there as fast as possible (or fail mysteriously as fast as possible). Servers, on the other hand, which take in the order of 10+ minutes to get through POST, initialising whatever LOM, disk controllers, and whatever exotica hardware you may also have connected, I don't see a benefit in Linux starting (or failing to start) a wee bit more quickly. You're only going to reboot those beasts when absolutely necessary. And it should boot the same as it booted last time. PID1 should be as simple as possible.

I only use CentOS these days for FreeIPA but now I'm questioning my life decisions even here. That Debian adopted systemd too is a real shame. It's actually put me off the whole game. Time spent learning systemd is time that could have been spent doing something useful that won't end up randomly breaking with a "will not fix" response.

Systemd should be taken out back and put out of our misery.

[Nov 18, 2018] Just let chef start the services when it runs after the system boots (which means they start maybe 1 or 2 mins after bootup).

Notable quotes:
"... Another thing bit us with systemd recently as well again going back to bind. Someone on the team upgraded our DNS systems to systemd and the startup parameters for bind were not preserved because systemd ignores the /etc/default/bind file. As a result we had tons of DNS failures when bind was trying to reach out to IPv6 name servers(ugh), when there is no IPv6 connectivity in the network (the solution is to start bind with a -4 option). ..."
"... I'm sure I've only scratched the surface of systemd pain. I'm sure it provides good value to some people, I hear it's good with containers (I have been running LXC containers for years now, I see nothing with systemd that changes that experience so far). ..."
"... If systemd is a solution to any set of problems, I'd love to have those problems back! ..."
Nov 18, 2018 | theregister.co.uk

Nate Amsden , Thursday 10th May 2018 16:34 GMT

as a linux user for 22 years

(20 of which on Debian, before that was Slackware)

I am new to systemd, maybe 3 or 4 months now tops on Ubuntu, and a tiny bit on Debian before that.

I was confident I was going to hate systemd before I used it just based on the comments I had read over the years, I postponed using it as long as I could. Took just a few minutes of using it to confirm my thoughts. Now to be clear, if I didn't have to mess with the systemd to do stuff then I really wouldn't care since I don't interact with it (which is the case on my laptop at least though laptop doesn't have systemd anyway). I manage about 1,000 systems running Ubuntu for work, so I have to mess with systemd, and init etc there. If systemd would just do ONE thing I think it would remove all of the pain that it has inflicted on me over the past several months and I could learn to accept it.

That one thing is, if there is an init script, RUN IT. Not run it like systemd does now. But turn off ALL intelligence systemd has when it finds that script and run it. Don't put it on any special timers, don't try to detect if it is running already, or stopped already or whatever, fire the script up in blocking mode and wait till it exits.

My first experience with systemd was on one of my home servers. I re-installed Debian on it last year, rebuilt the hardware etc. and with it came systemd. I believe there is a way to turn systemd off but I haven't tried that yet. The first experience was with bind. I have a slightly custom init script (from a previous debian) that I have been using for many years. I copied it to the new system and tried to start bind. Nothing. I looked in the logs and it seems that it was trying to interface with rndc (internal bind thing) for some reason, and because rndc was not working (I never used it so I never bothered to configure it) systemd wouldn't launch bind. So I fixed rndc and systemd would now launch bind, only to stop it within 1 second of launching. My first workaround was just to launch bind by hand at the CLI (no init script), and I left it running for a few months. Had a discussion with a co-worker who likes systemd and he explained that making a custom unit file and using the type=forking option might fix it. That did fix the issue.
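
The workaround mentioned above looks roughly like this as a unit file (a sketch; the unit name and script path are assumptions, and PIDFile= may be needed for reliable tracking):

    # /etc/systemd/system/bind-legacy.service
    [Unit]
    Description=BIND via its legacy init script
    After=network.target

    [Service]
    Type=forking                      # the init script daemonizes; tell systemd to expect a fork
    ExecStart=/etc/init.d/bind start
    ExecStop=/etc/init.d/bind stop
    # PIDFile=/run/named/named.pid    # uncomment if systemd guesses the main PID wrongly

    [Install]
    WantedBy=multi-user.target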

Next issue came up when dealing with MySQL clusters. I had to initialize the cluster with the "service mysql bootstrap-pxc" command (using the start command on the first cluster member is a bad thing). Run that with systemd, and systemd runs it fine. But go to STOP the service, and systemd thinks the service is not running so doesn't even TRY to stop the service (the service is running). My workaround for my automation for mysql clusters at this point is to just use mysqladmin to shut the mysql instances down. Maybe newer mysql versions have better systemd support, though a co-worker who is our DBA and has used mysql for many years says even the new MariaDB builds don't work well with systemd. I am working with MySQL 5.6, which is of course much much older.

Next issue came up with running init scripts that have the same words in them; most recently I upgraded systems to systemd that run OSSEC. OSSEC has two init scripts for us on the server side (ossec and ossec-auth). Systemd refuses to run ossec-auth because it thinks there is a conflict with the ossec service. I had the same problem with multiple varnish instances running on the same system (varnish instances were named varnish-XXX and varnish-YYY). In the varnish case, using custom unit files I got systemd to the point where it would start the service, but it still refuses to "enable" the service because of the name conflict (I even changed the name, but then systemd was looking at the name of the binary being called in the unit file and said there is a conflict there).

fucking a. Systemd shut up, just run the damn script. It's not hard.

Later a co-worker explained the "systemd way" for handling something like multiple varnish instances on the system, but I'm not doing that; in the meantime I just let chef start the services when it runs after the system boots (which means they start maybe 1 or 2 mins after bootup).

Another thing bit us with systemd recently as well, again going back to bind. Someone on the team upgraded our DNS systems to systemd, and the startup parameters for bind were not preserved because systemd ignores the /etc/default/bind file. As a result we had tons of DNS failures when bind was trying to reach out to IPv6 name servers (ugh), when there is no IPv6 connectivity in the network (the solution is to start bind with a -4 option).
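
One hedged workaround for the ignored defaults file is a drop-in override that bakes the flag into the unit itself (unit and binary names are assumptions; on some distros the unit is "named" rather than "bind9"):

    # systemctl edit bind9
    # (writes /etc/systemd/system/bind9.service.d/override.conf)
    [Service]
    ExecStart=
    ExecStart=/usr/sbin/named -f -4 -u bind
    # the empty ExecStart= clears the packaged command line before replacing it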

I believe I have also caught systemd trying to mess with file systems (iSCSI mount points). I have lots of automation around moving data volumes on the SAN between servers and attaching them via software iSCSI directly to the VMs themselves (before vSphere 4.0 I attached them via fibre channel to the hypervisor, but a feature in 4.0 broke that for me). I noticed on at least one occasion that when I removed the file systems from a system, SOMETHING (I assume systemd) mounted them again, and it was very confusing to see file systems mounted again for block devices that DID NOT EXIST on the server at the time. I worked around THAT one, I believe, with the "noauto" option in fstab again. I had to put a lot of extra logic in my automation scripts to work around systemd stuff.

I'm sure I've only scratched the surface of systemd pain. I'm sure it provides good value to some people, I hear it's good with containers (I have been running LXC containers for years now, I see nothing with systemd that changes that experience so far).

But if systemd would just do this one thing and go into dumb mode with init scripts I would be quite happy.

GrumpenKraut , Thursday 10th May 2018 17:52 GMT
Re: as a linux user for 22 years

Now more seriously: it really strikes me that complaints about systemd come from people managing non-trivial setups like the one you describe. While it might have been a PITA to get this done with the old init mechanism, you could make it work reliably.

If systemd is a solution to any set of problems, I'd love to have those problems back!

[Nov 18, 2018] SystemD is just a symptom of this regression of Red Hat into a money-making machine

Nov 18, 2018 | theregister.co.uk

Will Godfrey , Thursday 10th May 2018 16:30 GMT

Business Model

Red Hat have definitely taken a lurch to the dark side in recent years. It seems to be the way businesses go.

They start off providing a service to customers.

As they grow the customers become users.

Once they reach a certain point the users become consumers, and at this point it is the 'consumers' that provide a service for the business.

SystemD is just a symptom of this regression.

[Nov 18, 2018] Fudging the start-up and restoring eth0

Truth be told, the biosdevname abomination is from Dell
Nov 18, 2018 | theregister.co.uk

The Electron , Thursday 10th May 2018 12:05 GMT

Fudging the start-up and restoring eth0

I knew systemd was coming thanks to playing with Fedora. The quicker start-up times were welcomed. That was about it! I have had to kickstart many of my CentOS 7 builds to disable IPv6 (NFS complains bitterly), kill the incredibly annoying 'biosdevname' that turns sensible eth0/eth1 into some daftly named nonsense, replace Gnome 3 (shudder) with MATE, and fudge start-up processes. In a previous job, I maintained 2 sets of CentOS 7 'infrastructure' servers that provided DNS, DHCP, NTP, and LDAP to a large number of historical vlans. Despite enabling the systemd network wait-online option, which is supposed to start all networks *before* listening services, systemd would run off flicking all the "on" switches having only set up a couple of vlans. Result: NTP would only be listening on one or two vlan interfaces. The only way I found to get around that was to enable rc.local and call systemd to restart the NTP daemon after 20 seconds. I never had the time to raise a bug with Red Hat, and I assume the issue still persists, as no-one designed systemd to handle 15-odd vlans!?
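
The textbook fix for that race, for what it is worth, is to order the service after network-online.target rather than patching rc.local; whether it copes with 15-odd vlans is another question. A sketch (unit name assumed; chronyd is the analogue on newer builds):

    # systemctl edit ntpd  -- creates a drop-in override
    [Unit]
    Wants=network-online.target
    After=network-online.target

    # network-online.target only means something if a waiter service is enabled:
    #   systemctl enable systemd-networkd-wait-online.service
    #   (or NetworkManager-wait-online.service on NetworkManager systems)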

Jay 2 , Thursday 10th May 2018 15:02 GMT
Re: Predictable names

I can't remember if it's HPE or Dell (or both) where you can set the kernel option biosdevname=0 during build/boot to turn all that renaming stuff off and revert to ethX.

However on (RHEL?)/CentOS 7 I've found that if you build a server like that, and then try to rename/swap the interfaces, it will refuse point blank to allow you to swap the interfaces round so that something else can be eth0. In the end we just gave up and renamed everything lanX instead, which it was quite happy with.
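
For the record, the usual incantation for pinning the old ethX names on RHEL/CentOS 7 is a pair of kernel arguments (a sketch; regenerate the grub config after editing, and note the path differs on UEFI systems):

    # /etc/default/grub -- append to the existing kernel command line
    GRUB_CMDLINE_LINUX="... net.ifnames=0 biosdevname=0"

    # rebuild the config (BIOS path shown)
    grub2-mkconfig -o /boot/grub2/grub.cfg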

HieronymusBloggs , Thursday 10th May 2018 16:23 GMT
Re: Predictable names

"I can't remember if it's HPE or Dell (or both) where you can use set the kernel option biosdevname=0 during build/boot to turn all that renaming stuff off and revert to ethX."

I'm using this on my Debian 9 systems. IIRC the option to do so will be removed in Debian 10.

Dazed and Confused , Thursday 10th May 2018 19:21 GMT
Re: Predictable names

I can't remember if it's HPE or Dell (or both)

It's Dell. I got the impression that much of this work had been done, at least, in conjunction with Dell.

[Nov 18, 2018] The beatings will continue until morale improves.

Nov 18, 2018 | theregister.co.uk

Doctor Syntax , Thursday 10th May 2018 10:26 GMT

"The more people learn about it, the more they like it."

Translation: We define those who don't like it as not having learned enough about it.

ROC , Friday 11th May 2018 17:32 GMT
Alternate translation:

The beatings will continue until morale improves.

[Nov 18, 2018] I am barely tolerating SystemD on some servers because RHEL/CentOS 7 is the dominant business distro with a decent support life

Nov 18, 2018 | theregister.co.uk

AJ MacLeod , Thursday 10th May 2018 13:51 GMT

@Sheepykins

I'm not really bothered about whether init was perfect from the beginning - for as long as I've been using Linux (20 years) until now, I have never known the init system to be the cause of major issues. Since in my experience it's not been seriously broken for two decades, why throw it out now for something that is orders of magnitude more complex and ridiculously overreaching?

Like many here I bet, I am barely tolerating SystemD on some servers because RHEL/CentOS 7 is the dominant business distro with a decent support life - but this is also the first time I can recall ever having serious unpredictable issues with startup and shutdown on Linux servers.


stiine, Thursday 10th May 2018 15:38 GMT

sysV init

I've been using Linux (RedHat, CentOS, Ubuntu), BSD (Solaris, SunOS, freeBSD) and Unix (aix, sysv all of the way back to AT&T 3B2 servers) in farms of up to 400 servers since 1988, and I never, ever had issues with eth1 becoming eth0 after a reboot. I also never needed to run ifconfig before configuring an interface just to determine what the interface was going to be named on a server at this time. Then they hired Poettering... now, if you replace a failed nic, 9 times out of 10 the interface is going to have a randomly different name.

/rant

[Nov 18, 2018] systemd helps with mounting NFS4 filesystems

Nov 18, 2018 | theregister.co.uk

Chronos , Thursday 10th May 2018 13:32 GMT

Re: Logging

And disk mounting

Well, I am compelled to agree with most everything you wrote except one niche area that systemd does better: Remember putzing about with the amd? One line in fstab:

nasbox:/srv/set0 /nas nfs4 _netdev,noauto,nolock,x-systemd.automount,x-systemd.idle-timeout=1min 0 0

Bloody thing only works and nobody's system comes grinding to a halt every time some essential maintenance is done on the NAS.

Candour compels me to admit surprise that it worked as advertised, though.

DCFusor , Thursday 10th May 2018 13:58 GMT

Re: Logging

No worries, as has happened with every workaround to make systemD simply mount cifs or NFS at boot, yours will fail as soon as the next change happens, yet it will remain on the 'net to be tried over and over as have all the other "fixes" for Poettering's arrogant breakages.

The last one I heard from him on this was "don't mount shares at boot, it's not reliable WONTFIX".

Which is why we're all bitching.

Break my stuff.

Web shows workaround.

Break workaround without fixing the original issue, really.

Never ensure one place for current dox on what works now.

Repeat above endlessly.

Fine if all you do is spin up endless identical instances in some cloud (EG a big chunk of RH customers - but not Debian for example). If like me you have 20+ machines customized to purpose...for which one workaround works on some but not others, and every new release of systemD seems to break something new that has to be tracked down and fixed, it's not acceptable - it's actually making proprietary solutions look more cost effective and less blood pressure raising.

The old init scripts worked once you got them right, and stayed working. A new distro release didn't break them, nor did a systemD update (because there wasn't one). This feels more like sabotage.

[Nov 18, 2018] Today I've kickstarted RHEL7 on a rack of 40 identical servers using the same script. On about 25 out of 40, a postinstall script added to rc.local failed to run with some obscure error

Nov 18, 2018 | theregister.co.uk

Dabbb , Thursday 10th May 2018 10:16 GMT

Quite understandable that people who don't know anything else would accept systemd. For everyone else it has nothing to do with old school but everything to do with unpredictability of systemd.

Today I've kickstarted RHEL7 on a rack of 40 identical servers using the same script. On about 25 out of 40, a postinstall script added to rc.local failed to run with some obscure error about the script being terminated because something unintelligible did not like it. It never ever happened on RHEL6; it happens all the time on RHEL7. And that's exactly the reason I absolutely hate both RHEL7 and systemd.
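
One plausible culprit, offered as an assumption rather than a diagnosis: on RHEL 7 rc.local is a systemd compatibility unit that ships non-executable and races with other units, so the commonly cited checklist is:

    chmod +x /etc/rc.d/rc.local
    systemctl status rc-local.service   # is the compatibility unit even active?
    journalctl -u rc-local.service      # the "obscure error" usually lands here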

[Nov 18, 2018] You love Systemd -- you just don't know it yet, wink Red Hat bods

Nov 18, 2018 | theregister.co.uk

Anonymous Coward , Thursday 10th May 2018 02:58 GMT

Poettering still doesn't get it... Pid 1 is for people wearing big boy pants.

"And perhaps, in the process, you may warm up a bit more to the tool"

Like from LNG to Dry Ice? and by tool does he mean Poettering or systemd?

I love the fact that they aren't trying to address the huge and legitimate issues with Systemd, while still plowing ahead adding more things we don't want Systemd to touch into its ever-expanding sprawl.

The root of the issue with Systemd is the problems it causes, not the lack of "enhancements" initd offered. Replacing init didn't require the breaking changes and incompatibility induced by Poettering's misguided handiwork. A clean init replacement would have made Big Linux more compatible with both its roots and the other parts of the broader Linux/BSD/Unix world. As a result of his belligerent incompetence, other people's projects have had to be re-engineered, resulting in incompatibility, extra porting work, and security problems. In short, we're stuck cleaning up his mess and the consequences of his security blunders

A worthy Init replacement should have moved to compiled code and given us asynchronous startup, threading, etc, without senselessly re-writing basic command syntax or compatibility. Considering the importance of PID 1, it should have used a formal development process like the BSD world.

Fedora needs to stop enabling his prima donna antics and stop letting him touch things until he admits his mistakes and attempts to fix them. The flame wars are not going away till he does.

asdf , Thursday 10th May 2018 23:38 GMT
Re: Poettering still doesn't get it... Pid 1 is for people wearing big boy pants.

SystemD is corporate money (Red Hat support dollars) triumphing over the longhairs, sadly. Enough money can buy a shitload of code, and you can overwhelm the hippies with hairball dependencies (the key moment was udev being dependent on systemd) and soon get as much FOSS as possible dependent on the Linux kernel. This has always been the end game, as Red Hat makes its bones on Linux specifically, not on FOSS in general (that, say, runs on Solaris or HP-UX). The tighter they can glue the FOSS ecosystem and the Linux kernel together, a la Windows Lite style, the better for their bottom line. Poettering is just being a good employee, asshat extraordinaire that he is.

whitepines , Thursday 10th May 2018 03:47 GMT
Raise your hand if you've been completely locked out of a server or laptop (as in, break out the recovery media and settle down, it'll be a while) because systemd:

1.) Couldn't raise a network interface

2.) Farted and forgot the UUID for a disk, then refused to give a recovery shell

3.) Decided an unimportant service (e.g. CUPS or avahi) was too critical to start before giving a login over SSH or locally, then that service stalls forever

4.) Decided that no, you will not be network booting your server today. No way to recover and no debug information, just an interminable hang as it raises wrong network interfaces and waits for DHCP addresses that will never come.

And lest the fun be restricted to startup, on shutdown systemd can quite happily hang forever doing things like stopping nonessential services, *with no timeout and no way to interrupt*. Then you have to Magic Sysreq the machine, except that sometimes secure servers don't have that ability, at least not remotely. Cue data loss and general excitement.
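
The shutdown hang, at least, has a knob. A hedged sketch for capping how long systemd waits on a wedged unit (system-wide default shown; a per-unit TimeoutStopSec= in the [Service] section overrides it):

    # /etc/systemd/system.conf
    [Manager]
    DefaultTimeoutStopSec=90s   # stop waiting on a stuck service after 90 seconds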

And that's not even going into the fact that you need to *reboot the machine* to patch the *network enabled* and highly privileged systemd, or that it seems to have the attack surface of Jupiter.

Upstart was better than this. SysV was better than this. Mac is better than this. Windows is better than this.

Uggh.

Daggerchild , Thursday 10th May 2018 11:39 GMT
Re: Ahhh SystemD

I honestly would love someone to lay out the problems it solves. Solaris has a similar parallelised startup system, with some similar problems, but it didn't need pid 1.

Tridac , Thursday 10th May 2018 11:53 GMT
Re: Ahhh SystemD

Agreed, Solaris svcadm and svcs etc. are an example of how it should be done. A layered approach maintaining what was already there, while adding functionality for management purposes. Keeps all the old text-based log files and uses xml scripts (human readable and editable) for higher level functions. Afaics, systemd is a power grab by Red Hat and an ego trip for its primary developer. Dumped bloatware Linux in favour of FreeBSD and others after Suse 11.4, though that was bad enough with Gnome 3...

[Nov 17, 2018] RHEL 8 Beta arrives with application streams and more Network World

Nov 17, 2018 | www.networkworld.com

What is changing in networking?

More efficient networking is provided in containers through IPVLAN, which connects containers nested in virtual machines to networking hosts with minimal impact on throughput and latency.

RHEL 8 Beta also provides a new TCP/IP stack with Bottleneck Bandwidth and Round-trip propagation time (BBR) congestion control. BBR is a fairly new delay-based TCP congestion control algorithm from Google. These changes should lead to higher-performance network connections, minimized latency, and less packet loss for internet services (e.g., streaming video and hosted storage).
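
On a kernel that ships the tcp_bbr module (4.9 or later is the usual assumption), enabling BBR is a two-line sysctl change:

    # /etc/sysctl.d/99-bbr.conf
    net.core.default_qdisc = fq               # BBR is designed to pair with the fq qdisc
    net.ipv4.tcp_congestion_control = bbr

    # apply without a reboot
    sysctl --system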

[Nov 09, 2018] OpenStack is overkill for Docker

Notable quotes:
"... OpenStack's core value is to gather a pool of hypervisor-enabled computers and enable the delivery of virtual machines (VMs) on demand to users. ..."
Nov 09, 2018 | www.techrepublic.com


Both OpenStack and Docker were conceived to make IT more agile. OpenStack has strived to do this by turning hitherto static IT resources into elastic infrastructure, whereas Docker has reached for this goal by harmonizing development, test, and production resources, as Red Hat's Neil Levine suggests.

But while Docker adoption has soared, OpenStack is still largely stuck in neutral. OpenStack is kept relevant by so many wanting to believe its promise, but it never hits its stride due to a host of factors, including complexity.

And yet Docker could be just the thing to turn OpenStack's popularity into productivity. Whether a Docker-plus-OpenStack pairing is right for your enterprise largely depends on the kind of capacity your enterprise hopes to deliver. If it's simply Docker, OpenStack is probably overkill.

An open source approach to delivering virtual machines

OpenStack is an operational model for delivering virtualized compute capacity.

Sure, some give it a more grandiose definition ("OpenStack is a set of software tools for building and managing cloud computing platforms for public and private clouds"), but if we ignore secondary services like Cinder, Heat, and Magnum, for example, OpenStack's core value is to gather a pool of hypervisor-enabled computers and enable the delivery of virtual machines (VMs) on demand to users.

That's it.

Not that this is a small thing. After all, without OpenStack, the hypervisor sits idle, lonesome on a single computer, with no way to expose that capacity programmatically (or otherwise) to users.

Before cloudy systems like OpenStack or Amazon's EC2, users would typically file a help ticket with IT. An IT admin, in turn, would use a GUI or command line to create a VM, and then share the credentials with the user.

Systems like OpenStack significantly streamline this process, enabling IT to programmatically deliver capacity to users. That's a big deal.

Docker peanut butter, meet OpenStack jelly

Docker, the darling of the containers world, is similar to the VM in the IaaS picture painted above.

A Docker host is really the unit of compute capacity that users need, and not the container itself. Docker addresses what you do with a host once you've got it, but it doesn't really help you get the host in the first place.

Docker Machine provides a client-side tool that lets you request Docker hosts from an IaaS provider (like EC2 or OpenStack or vSphere), but it's far from a complete solution. In part, this stems from the fact that Docker doesn't have a tenancy model.
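
As an illustration of that client-side tool, requesting a Docker host from an OpenStack cloud looks roughly like this (flag names are from the docker-machine OpenStack driver; the flavor, image, and network values are placeholders):

    docker-machine create --driver openstack \
        --openstack-flavor-name m1.medium \
        --openstack-image-name ubuntu-16.04 \
        --openstack-net-name private \
        docker-host-01

    # point the local docker client at the new host, then run containers there
    eval "$(docker-machine env docker-host-01)"
    docker run -d nginx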

With a hypervisor, each VM is a tenant. But in Docker, the Docker host is a tenant. You typically don't want multiple users sharing a Docker host because then they see each others' containers. So typically an enterprise will layer a cloud system underneath Docker to add tenancy. This yields a stack that looks like: hardware > hypervisor > Docker host > container.

A common approach today would be to take OpenStack and use it as the enterprise platform to deliver capacity on demand to users. In other words, users rely on OpenStack to request a Docker host, and then they use Docker to run containers in their Docker host.

So far, so good.

If all you need is Docker...

Things get more complicated when we start parsing what capacity needs delivering.

When an enterprise wants to use Docker, they need to get Docker hosts from a data center. OpenStack can do that, and it can do it alongside delivering all sorts of other capacity to the various teams within the enterprise.

But if all an enterprise IT team needs is Docker containers delivered, then OpenStack -- or a similar orchestration tool -- may be overkill, as VMware executive Jared Rosoff told me.

For this sort of use case, we really need a new platform. This platform could take the form of a piece of software that an enterprise installs on all of its computers in the data center. It would expose an interface to developers that lets them programmatically create Docker hosts when they need them, and then use Docker to create containers in those hosts.

Google has a vision for something like this with its Google Container Engine. Amazon has something similar in its EC2 Container Service. These are both APIs that developers can use to provision some Docker-compatible capacity from their data center.

As for Docker, the company behind Docker the technology, it seems to have punted on this problem, focusing instead on what happens on the host itself.

While we probably don't need to build up a big OpenStack cloud simply to manage Docker instances, it's worth asking what OpenStack should look like if what we wanted to deliver was only Docker hosts, and not VMs.

Again, we see Google and Amazon tackling the problem, but when will OpenStack, or one of its supporters, do the same? The obvious candidate would be VMware, given its longstanding dominance of tooling around virtualization. But the company that solves this problem first, and in a way that comforts traditional IT with familiar interfaces yet pulls them into a cloudy future, will win, and win big.

[Nov 03, 2018] Is Red Hat IBM's 'Hail Mary' pass

Notable quotes:
"... if those employees become unhappy, they can effectively go anywhere they want. ..."
"... IBM's partner/reseller ecosystem is nowhere near what it was since it owned the PC and Server businesses that Lenovo now owns. And IBM's Softlayer/BlueMix cloud is largely tied to its legacy software business, which, again, is slowing. ..."
"... I came to IBM from their SoftLayer acquisition. Their ability to stomp all over the things SoftLayer was almost doing right were astounding. I stood and listened to Ginni say things like, "We purchased SoftLayer because we need to learn from you," and, "We want you to teach us how to do Cloud the right way, since we spent all these years doing things the wrong way," and, "If you find yourself in a meeting with one of our old teams, you guys are gonna be the ones in charge. You are the ones who know how this is supposed to work - our culture has failed at it." Promises which were nothing more than hollow words. ..."
"... Next, it's a little worrisome that the author, now over the whole IBM thing is recommending firing "older people," you know, the ones who helped the company retain its performance in years' past. The smartest article I've read about IBM worried about its cheap style of "acquiring" non-best-of-breed companies and firing oodles of its qualified R&D guys. THAT author was right. ..."
"... Four years in GTS ... joined via being outsourced to IBM by my previous employer. Left GTS after 4 years. ..."
"... The IBM way of life was throughout the Oughts and the Teens an utter and complete failure from the perspective of getting work done right and using people to their appropriate and full potential. ..."
"... As a GTS employee, professional technical training was deemed unnecessary, hence I had no access to any unless I paid for it myself and used my personal time ... the only training available was cheesy presentations or other web based garbage from the intranet, or casual / OJT style meetings with other staff who were NOT professional or expert trainers. ..."
"... As a GTS employee, I had NO access to the expert and professional tools that IBM fricking made and sold to the same damn customers I was supposed to be supporting. Did we have expert and professional workflow / document management / ITIL aligned incident and problem management tools? NO, we had fricking Lotus Notes and email. Instead of upgrading to the newest and best software solutions for data center / IT management & support, we degraded everything down the simplest and least complex single function tools that no "best practices" organization on Earth would ever consider using. ..."
"... And the people management paradigm ... employees ranked annually not against a static or shared goal or metric, but in relation to each other, and there was ALWAYS a "top 10 percent" and a "bottom ten percent" required by upper management ... a system that was sociopathic in it's nature because it encourages employees to NOT work together ... by screwing over one's coworkers, perhaps by not giving necessary information, timely support, assistance as needed or requested, one could potentially hurt their performance and make oneself look relatively better. That's a self-defeating system and it was encouraged by the way IBM ran things. ..."
Nov 03, 2018 | www.zdnet.com
Brain drain is a real risk

IBM has not had a particularly great track record when it comes to integrating the cultures of other companies into its own, and brain drain with a company like Red Hat is a real risk because if those employees become unhappy, they can effectively go anywhere they want. They have the skills to command very high salaries at any of the top companies in the industry.

The other issue is that IBM hasn't figured out how to capture revenue from SMBs -- and that has always been elusive for them. Unless a deal is worth at least $1 million, and realistically $10 million, sales guys at IBM don't tend to get motivated.

Also: Red Hat changes its open-source licensing rules

The 5,000-seat and below market segment has traditionally been partner territory, and when it comes to reseller partners for its cloud, IBM is way, way behind AWS, Microsoft, Google, or even (gasp) Oracle, which is now offering serious margins to partners that land workloads on the Oracle cloud.

IBM's partner/reseller ecosystem is nowhere near what it was since it owned the PC and Server businesses that Lenovo now owns. And IBM's Softlayer/BlueMix cloud is largely tied to its legacy software business, which, again, is slowing.

... ... ...

But I think that it is very unlikely the IBM Cloud, even when juiced on Red Hat steroids, will become anything more ambitious than a boutique business for hybrid workloads when compared with AWS or Azure. Realistically, it has to be the kind of cloud platform that interoperates well with the others or nobody will want it.


geek49203_z , Wednesday, April 26, 2017 10:27 AM

Ex-IBM contractor here...

1. IBM used to value long-term employees. Now they "value" short-term contractors -- but they still pull them out of production for lots of training that, quite frankly, isn't exactly needed for what they are doing. Personally, I think that IBM would do well to return to valuing employees instead of looking at them as expendable commodities, but either way, they need to get past the legacies of when they had long-term employees all watching a single mainframe.

2. As IBM moved to an army of contractors, they killed off the informal (but important!) web of tribal knowledge. You know, a friend of a friend who knew the answer to some issue, or knew something about this customer? What has happened is that the transaction costs (as economists call them) have escalated until IBM can scarcely order IBM hardware for its own projects, or have SDMs work together.

M Wagner geek49203_z , Wednesday, April 26, 2017 10:35 AM
geek49203_z Number 2 is a problem everywhere. As long-time employees (mostly baby-boomers) retire, their replacements are usually straight out of college with various non-technical degrees. They come in with little history and few older employees to whom they can turn for "the tricks of the trade".
Shmeg , Wednesday, April 26, 2017 10:41 AM
I came to IBM from their SoftLayer acquisition. Their ability to stomp all over the things SoftLayer was almost doing right were astounding. I stood and listened to Ginni say things like, "We purchased SoftLayer because we need to learn from you," and, "We want you to teach us how to do Cloud the right way, since we spent all these years doing things the wrong way," and, "If you find yourself in a meeting with one of our old teams, you guys are gonna be the ones in charge. You are the ones who know how this is supposed to work - our culture has failed at it." Promises which were nothing more than hollow words.
cavman , Wednesday, April 26, 2017 3:58 PM
In the 1970's, 80's and 90's I was working in tech support for a company called ROLM. We were doing communications, voice and data, and did many systems for Fortune 500 companies along with 911 systems and the secure system at the White House. My job was to fly all over North America to solve problems with customers and integration of our equipment into their business model. I also did BETA trials and documented systems so others would understand what it took to make them run fine under all conditions.

In '84 IBM bought a percentage of the company and the next year they bought out the company. When someone said to me, "IBM just bought you out, you must think you died and went to heaven," my response was "Think of them as being like the Federal Government but making a profit." They were so heavily structured and hidebound that it was a constant battle working with them. Their response to any comments was "We are IBM."

I was working on an equipment project in Colorado Springs and IBM took control. I was immediately advised that I could only talk to the people in my assigned group, and if I had a question outside of my group I had to put it in writing and give it to my manager; if he thought it was relevant, it would be forwarded up the ladder of management until it reached a manager who had control of both groups, and at that time, if he thought it was relevant, it would be sent to that group, who would send the answer back up the ladder.

I'm a Vietnam Veteran and I used my military training to get things done just like I did out in the field. I went looking for the person I could get an answer from.

At first others were nervous about doing that, but within a month I had connections all over the facility and started introducing people at the cafeteria. Things moved quickly as people started working together as a unit. I finished my part of the work, which was figuring out all the spares technicians would need plus the costs for packaging and service contract estimates. I submitted it to all the people that needed it. I was then hauled into a meeting room by the IBM management and advised that I was a disruptive influence and would be removed. Just then the final contracts that vendors had to sign showed up, and they used all my info. The IBM people were livid that they were not involved.

By the way a couple months later the IBM THINK magazine came out with a new story about a radical concept they had tried. A cover would not fit on a component and under the old system both the component and the cover would be thrown out and they would start from scratch doing it over. They decided to have the two groups sit together and figure out why it would not fit and correct it on the spot.

Another great example of IBM people: we had a sales contract to install a multi-node voice mail system at Wang computers, but we lost it because the IBM people insisted on bundling AS/400 systems into the sale to Wang. Instead we lost a multi-million-dollar contract.

Eventually Siemens bought 50% of the company and eventually took full control. Now all we heard was "That is how we do it in Germany." Our response was "How did that WW II thing work out?"

Stockholder , Wednesday, April 26, 2017 7:20 PM
That the author may have more loyalty to Microsoft than he confides is the first thing noticeable about this article. The second thing is that in terms of getting rid of those aged IBM workers, I think he may have completely missed the mark; in fairness, that may be the product of his IBM experience. The sheer hubris of tech-talking from the middle of the story and missing the global misstep that is today's IBM is noticeable. As a stockholder, the first question is, "Where is the investigation into the breach of fiduciary duty by a board that owes its loyalty to stockholders who are scratching their heads at the 'positive' spin the likes of Ginni Rometty is putting on 20 quarters of dead losses?" Got that, 20 quarters of losses.

Next, it's a little worrisome that the author, now over the whole IBM thing is recommending firing "older people," you know, the ones who helped the company retain its performance in years' past. The smartest article I've read about IBM worried about its cheap style of "acquiring" non-best-of-breed companies and firing oodles of its qualified R&D guys. THAT author was right.

IBM's been run into the ground by Ginni (I'll use her first name, since apparently my money is now used to prop up this sham of a leader), who, from her uncomfortable public announcement with Tim Cook of Apple (which HAS gone up, by the way), has embraced every political trend, not cause but trend, from hiring more women to marginalizing all those old-time white males... you know, the ones who produced for the company based on merit, sweat, and expertise, all those non-feeling-based skills that ultimately are what a shareholder is interested in, and replaced them with young and apparently "social" experts who are pasting some phony "modernity" on a company that under Ginni's leadership has become more of a pet cause than a company.

Finally, regarding ageism and the author's advocacy for the same, IBM's been there, done that as they lost an age discrimination lawsuit decades ago. IBM gave up on doing what it had the ability to do as an enormous business and instead under Rometty's leadership has tried to compete with the scrappy startups where any halfwit knows IBM cannot compete.

The company has rendered itself ridiculous under Rometty: a board that collects paychecks and breaches any notion of fiduciary duty to shareholders; an attempt at partnering with a "mod" company like Apple that simply bolstered Apple and left IBM languishing; and a rejection of what has a track record of working: excellence, rewarding the effort of employees, and the steady plod of performance. Dump the board and dump Rometty.

jperlow Stockholder , Wednesday, April 26, 2017 8:36 PM
Stockholder: Your comments regarding any inclination towards age discrimination are duly noted, so I added a qualifier in the piece.
Gravyboat McGee , Wednesday, April 26, 2017 9:00 PM
Four years in GTS ... joined via being outsourced to IBM by my previous employer. Left GTS after 4 years.

The IBM way of life was, throughout the Oughts and the Teens, an utter and complete failure from the perspective of getting work done right and using people to their appropriate and full potential. I went from a multi-disciplinary team of engineers working across technologies to support corporate needs in the IT environment to being siloed into a single-function organization.

My first year of on-boarding with IBM was spent deconstructing application integration and cross-organizational structures of support and interwork that I had spent 6 years building and maintaining. Handing off different chunks of work (again, before the outsourcing, an Enterprise solution supported by one multi-disciplinary team) to different IBM GTS work silos that had no physical spatial relationship and no interworking history or habits. What we're talking about here is the notion of "the left hand not knowing what the right hand is doing" ...

THAT was the IBM way of doing things, and nothing I've read about them over the past decade or so tells me it has changed.

As a GTS employee, I found that professional technical training was deemed unnecessary, hence I had no access to any unless I paid for it myself and used my personal time ... the only training available was cheesy presentations or other web-based garbage from the intranet, or casual / OJT-style meetings with other staff who were NOT professional or expert trainers.

As a GTS employee, I had NO access to the expert and professional tools that IBM fricking made and sold to the same damn customers I was supposed to be supporting. Did we have expert and professional workflow / document management / ITIL-aligned incident and problem management tools? NO, we had fricking Lotus Notes and email. Instead of upgrading to the newest and best software solutions for data center / IT management & support, we degraded everything down to the simplest and least complex single-function tools that no "best practices" organization on Earth would ever consider using.

And the people management paradigm ... employees ranked annually not against a static or shared goal or metric, but in relation to each other, and there was ALWAYS a "top 10 percent" and a "bottom ten percent" required by upper management ... a system that was sociopathic in its nature because it encouraged employees to NOT work together ... by screwing over one's coworkers, perhaps by not giving necessary information, timely support, or assistance as needed or requested, one could potentially hurt their performance and make oneself look relatively better. That's a self-defeating system, and it was encouraged by the way IBM ran things.

The "not invented here" ideology was embedded deeply in the souls of all senior IBMers I ever met or worked with ... if you come on board with any outside knowledge or experience, you must not dare to say "this way works better" because you'd be shut down before you could blink. The phrase "best practices" to them means "the way we've always done it".

IBM gave up on innovation long ago. Since the 90's the vast majority of their software has been bought, not built. Buy a small company, strip out the innovation, slap an IBM label on it, sell it as the next coming of Jesus even though they refuse to expend any R&D to push the product to the next level ... damn near everything IBM sold was gentrified, never cutting edge.

And don't get me started on sales practices ... tell the customer how product XYZ is a guaranteed moonshot, they'll be living on lunar real estate in no time at all, and after all the contracts are signed hand the customer a box of nuts & bolts and a letter telling them where they can look up instructions on how to build their own moon rocket. Or for XX dollars more a year, hire a Professional Services IBMer to build it for them.

I have no sympathy for IBM. They need a clean sweep throughout upper management, especially any of the old True Blue hard-core IBMers.

billa201 , Thursday, April 27, 2017 11:24 AM
You obviously have been gone from IBM for a while, as they do not treat their employees well anymore and get rid of good talent rather than keeping it. A sad state.
ClearCreek , Tuesday, May 9, 2017 7:04 PM
We tried our best to be SMB partners with IBM & Arrow in the early 2000s ... but could never get any traction. I personally needed a mentor, but never found one. I still have/wear some of their swag, and I write this right now on a re-purposed IBM 1U server that is 10 years old, but ... I can't see any way our small company can make $ with them.

Watson is impressive, but you can't build a company on just Watson. This author has some great ideas, yet the phrase that keeps coming to me is internal politics. That corrosive reality has killed and will kill companies, and it will kill IBM unless it is dealt with.

Turn-arounds are possible (look at MS), but they are hard and dangerous. Hope IBM can figure it out...

[Nov 02, 2018] The D in Systemd stands for 'Dammmmit!' A nasty DHCPv6 packet can pwn a vulnerable Linux box by Shaun Nichols

Notable quotes:
"... Hole opens up remote-code execution to miscreants – or a crash, if you're lucky ..."
"... You can use NAT with IPv6. ..."
Oct 26, 2018 | theregister.co.uk

Hole opens up remote-code execution to miscreants – or a crash, if you're lucky

A security bug in Systemd can be exploited over the network to, at best, potentially crash a vulnerable Linux machine, or, at worst, execute malicious code on the box.

The flaw therefore puts Systemd-powered Linux computers – specifically those using systemd-networkd – at risk of remote hijacking: maliciously crafted DHCPv6 packets can try to exploit the programming cockup and arbitrarily change parts of memory in vulnerable systems, leading to potential code execution. This code could install malware, spyware, and other nasties, if successful.

The vulnerability – which was made public this week – sits within the written-from-scratch DHCPv6 client of the open-source Systemd management suite, which is built into various flavors of Linux.

This client is activated automatically if IPv6 support is enabled, and relevant packets arrive for processing. Thus, a rogue DHCPv6 server on a network, or in an ISP, could emit specially crafted router advertisement messages that wake up these clients, exploit the bug, and possibly hijack or crash vulnerable Systemd-powered Linux machines.

Here's the Red Hat summary:

systemd-networkd is vulnerable to an out-of-bounds heap write in the DHCPv6 client when handling options sent by network-adjacent DHCP servers. An attacker could exploit this via a malicious DHCP server to corrupt heap memory on client machines, resulting in a denial of service or potential code execution.

Felix Wilhelm, of the Google Security team, was credited with discovering the flaw, designated CVE-2018-15688. Wilhelm found that a specially crafted DHCPv6 network packet could trigger "a very powerful and largely controlled out-of-bounds heap write," which could be used by a remote hacker to inject and execute code.

"The overflow can be triggered relatively easy by advertising a DHCPv6 server with a server-id >= 493 characters long," Wilhelm noted.

In addition to Ubuntu and Red Hat Enterprise Linux, Systemd has been adopted as a service manager for Debian, Fedora, CoreOS, Mint, and SUSE Linux Enterprise Server. We're told RHEL 7, at least, does not use the vulnerable component by default.

Systemd creator Lennart Poettering has already published a security fix for the vulnerable component – this should be weaving its way into distros as we type.

If you run a Systemd-based Linux system, and rely on systemd-networkd, update your operating system as soon as you can to pick up the fix when available and as necessary.
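
For admins who want to gauge exposure before and after patching, a quick check is cheap. A hedged sketch (the package commands are for Debian/Ubuntu-style systems; unit and package names can differ elsewhere):

    systemctl is-active systemd-networkd   # anything but "active" means this particular vector does not apply
    systemctl --version                    # the advisory covers systemd versions up to and including 239
    sudo apt update && sudo apt install --only-upgrade systemd   # pull the fixed package once it lands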

The bug will come as another argument against Systemd as the Linux management tool continues to fight for the hearts and minds of admins and developers alike. Though a number of major distros have in recent years adopted and championed it as the replacement for the old init era, others within the Linux world seem to still be less than impressed with Systemd and Poettering's occasionally controversial management of the tool. ®

Oh Homer , 6 days

Meh

As anyone who bothers to read my comments (BTW "hi" to both of you) already knows, I despise systemd with a passion, but this one is more an IPv6 problem in general.

Yes, this is an actual bug in networkd, but IPv6 seems to be far more bug-prone than v4, and problems are rife in all implementations. Whether that's because the spec itself is flawed, or because nobody understands v6 well enough to implement it correctly, or possibly because there's just zero interest in making any real effort, I don't know, but it's a fact nonetheless, and it's my primary reason for disabling it wherever I find it. Which of course contributes to the "zero interest" problem that perpetuates v6's bug-prone condition, ad nauseam.

IPv6 is just one of those tech pariahs that everyone loves to hate, much like systemd, albeit fully deserved IMO.

Oh yeah, and here's the obligatory "systemd sucks". Personally I always assumed the "d" stood for "destroyer". I believe the "IP" in "IPv6" stands for "Idiot Protocol".

Anonymous Coward , 6 days
Re: Meh

"nonetheless, and my primary reason for disabling it wherever I find it. "

The very first guide I read to hardening a system recommended disabling services you didn't need and emphasized IPV6 for the reasons you just stated.

Wasn't there a bug in Xorg reported recently as well?

https://www.theregister.co.uk/2018/10/25/x_org_server_vulnerability/

"FreeDesktop.org Might Formally Join Forces With The X.Org Foundation"

https://www.phoronix.com/scan.php?page=news_item&px=FreeDesktop-org-Xorg-Forces

Also, does this mean that Facebook was vulnerable to attack, again?

"Simply put, you could say Facebook loves systemd."

https://www.phoronix.com/scan.php?page=news_item&px=Facebook-systemd-2018

Jay Lenovo , 6 days
Re: Meh

IPv6 and SystemD: forced industry-standard diseases that require most of us to bite our lips and bear them.

Fortunately, IPv6, by its lack of adoption, limits the scope of this bug.

vtcodger , 6 days
Re: Meh
Fortunately, IPv6, by its lack of adoption, limits the scope of this bug.

Yeah, fortunately IPv6 is only used by a few fringe organizations like Google and Microsoft.

Seriously, I personally want nothing to do with either systemd or IPv6. Both seem to me to fall into the bin labeled "If it ain't broke, let's break it." But still, it's troubling that things that some folks regard as major system components continue to ship with significant security flaws. How can one trust anything connected to the Internet that is more sophisticated and complex than a TV streaming box?

DougS , 6 days
Re: Meh

Was going to say the same thing, and I disable IPv6 for the exact same reason. IPv6 code isn't as well tested, as well audited, or as well targeted looking for exploits as IPv4. Stuff like this only proves that it was smart to wait, and I should wait some more.

Nate Amsden , 6 days
Re: Meh

Count me in the camp of those who hate systemd (hate it being "forced" on just about every distro; otherwise I wouldn't care about it). And yes, I am moving my personal servers to Devuan. I thought I could go Debian 7 -> Devuan, but it turns out that may not work, so I upgraded to Debian 8 a few weeks ago and will go to Devuan from there in a few weeks; one Debian 8 box is upgraded to Devuan already, 3 more to go (Debian user since 1998). When reading this article it reminded me of

https://www.theregister.co.uk/2017/06/29/systemd_pwned_by_dns_query/

bombastic bob , 6 days
The gift that keeps on giving (systemd) !!!

This makes me glad I'm using FreeBSD. The Xorg version in FreeBSD's ports is currently *slightly* older than the Xorg version that had that vulnerability in it. AND, FreeBSD will *NEVER* have systemd in it!

(and, for Linux, when I need it, I've been using Devuan)

That being said, the whole idea of "let's do a re-write and do a 'systemd' instead of 'system V init' because WE CAN and it's OUR TURN NOW, 'modern' 'change for the sake of change' etc." kinda reminds me of recent "update" problems with Win-10-nic...

Oh, and an obligatory Schadenfreude laugh: HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA!!!!!!!!!!!!!!!!!!!

Long John Brass , 6 days
Re: The gift that keeps on giving (systemd) !!!

Finally got all my machines cut over from Debian to Devuan.

Might spin a FreeBSD system up in a VM and have a play.

I suspect that the infestation of stupid into the Linux space won't stop with or be limited to SystemD. I will wait and watch to see what damage the re-education gulag has done to Sweary McSwearFace (Mr Torvalds)

Dan 55 , 6 days
Re: Meh

I despise systemd with a passion, but this one is more an IPv6 problem in general.

Not really, systemd has its tentacles everywhere and runs as root. Exploits which affect systemd therefore give you the keys to the kingdom.

Orv , 3 days
Re: Meh
Not really, systemd has its tentacles everywhere and runs as root.

Yes, but not really the problem in this case. Any DHCP client is going to have to run at least part of the time as root. There's not enough nuance in the Linux privilege model to allow it to manipulate network interfaces otherwise.

Long John Brass , 3 days
Re: Meh
Yes, but not really the problem in this case. Any DHCP client is going to have to run at least part of the time as root. There's not enough nuance in the Linux privilege model to allow it to manipulate network interfaces otherwise.

Sorry, but utter bullshit. If you are so inclined, you can use the Linux capabilities framework for this kind of thing. See https://wiki.archlinux.org/index.php/capabilities
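
For what it's worth, the file-capabilities route can be sketched in two commands (the binary path here is a placeholder, not a real client, and a real DHCP client may need further capabilities such as cap_net_bind_service):

    # Grant a binary only the network-related capabilities instead of full root.
    setcap 'cap_net_admin,cap_net_raw+ep' /usr/local/sbin/example-dhcp-client
    getcap /usr/local/sbin/example-dhcp-client    # verify what was granted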

JohnFen , 6 days
Yay for me

"If you run a Systemd-based Linux system"

I remain very happy that I don't use systemd on any of my machines anymore. :)

"others within the Linux world seem to still be less than impressed with Systemd"

Yep, I'm in that camp. I gave it a good, honest go, but it increased the amount of hassle and pain of system management without providing any noticeable benefit, so I ditched it.

ElReg!comments!Pierre , 2 days
Re: Time to troll

> Just like it's entirely possible to have a Linux system without any GNU in it

Just like it's possible to have a GNU system without Linux on it - oh well, as soon as GNU Mach is finally up to the task ;-)

On the systemd angle, I, too, am in the process of switching all my machines from Debian to Devuan, but on my personal(*) network a few systemd-infected machines remain, thanks to a combination of laziness on my part and a stubborn "systemd is quite OK" attitude from the raspy foundation. That vuln may be the last straw: one of the aforementioned machines sits on my DMZ, chatting freely with the outside world. Nothing really crucial on it, but I'd hate it if it became a foothold for nasties on my network.

(*) policy at work is RHEL, and that's negotiated far above my influence level, but I don't really care as all my important stuff runs on z/OS anyway ;-) OK, we have to reboot a few VMs occasionally when systemd throws a hissy fit - which is surprisingly often for an "enterprise" OS - but meh.

Destroy All Monsters , 5 days
Re: Not possible

This code is actually pretty bad and should raise all kinds of red flags in a code review.

Anonymous Coward , 5 days
Re: Not possible

ITYM Lennart

Christian Berger , 5 days
Re: Not possible

"This code is actually pretty bad and should raise all kinds of red flags in a code review."

Yeah, but for that you need people who can do code reviews, and also people who can accept criticism. That also means saying "no" to people who are bad at coding, and saying that repeatedly if they don't learn.

SystemD seems to be the area where people gather who want to get code in for their resumes, not people who actually want to make the world a better place.

jake , 6 days
There is a reason ...

... that an init, traditionally, is a small bit of code that does one thing very well. Like most of the rest of the *nix core utilities. All an init should do is start PID 1, set the run level, spawn a tty (or several), handle a graceful shutdown, and log all the above in plaintext to make troubleshooting as simple as possible. Anything else is a vanity project that is best placed elsewhere, in its own stand-alone code base.

Inventing a clusterfuck init variation that's so big and bulky that it needs to be called a "suite" is just asking for trouble.

IMO, systemd is a cancer that is growing out of control, and needs to be cut out of Linux before it infects enough of the system to kill it permanently.
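
The traditional model jake describes is in fact small enough to sketch in a page of C. A toy only, not a real init: run levels, multiple ttys, shutdown, and logging are all omitted, and the service path is hypothetical.

    /* toy_init.c -- a deliberately minimal PID-1 sketch, not a real init */
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static pid_t spawn(const char *path) {
        pid_t pid = fork();
        if (pid == 0) {                 /* child: become the service */
            execl(path, path, (char *)0);
            _exit(127);                 /* exec failed */
        }
        return pid;
    }

    int main(void) {
        pid_t getty = spawn("/sbin/getty-wrapper");   /* hypothetical service */
        for (;;) {                      /* PID 1's core duty: reap zombies */
            int status;
            pid_t dead = wait(&status);
            if (dead < 0) {             /* no children to wait for yet */
                sleep(1);
                continue;
            }
            if (dead == getty)          /* respawn the tty, as inittab's "respawn" would */
                getty = spawn("/sbin/getty-wrapper");
        }
    }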

AdamWill , 6 days
Re: There is a reason ...

That's why systemd-networkd is a separate, optional component, and not actually part of the init daemon at all. Most systemd distros do not use it by default and thus are not vulnerable to this unless the user actively disables the default network manager and chooses to use networkd instead.

Anonymous Coward , 4 days
Re: There is a reason ...

"Just go install a default Fedora or Ubuntu system and check for yourself: you'll have systemd, but you *won't* have systemd-networkd running."

Funny that I installed ubuntu 18.04 a few weeks ago and the fucking thing installed itself then! ( and was a fucking pain to remove).

LP is a fucking arsehole.

Orv , 3 days
Re: There is a reason ...
Pardon my ignorance (I don't use a distro with systemd), but why bother with networkd in the first place if you don't have to use it?

Mostly because the old-style init system doesn't cope all that well with systems that move from network to network. It works for systems with a static IP, or that do a DHCP request at boot, but it falls down on anything more dynamic.

In order to avoid restarting the whole network system every time they switch WiFi access points, people have kludged on solutions like NetworkManager. But it's hard to argue it's more stable or secure than networkd. And this is always going to be a point of vulnerability because anything that manipulates network interfaces will have to be running as root.

These days networking is essential to the basic functionality of most computers; I think there's a good argument that it doesn't make much sense to treat it as a second-class citizen.

AdamWill , 2 days
Re: There is a reason ...

"Funny that I installed ubuntu 18.04 a few weeks ago and the fucking thing installed itself then! ( and was a fucking pain to remove)."

So I looked into it a bit more, and from a few references at least, it seems like Ubuntu has a network configuration abstraction layer (netplan) that can use both NM and systemd-networkd as backends; on Ubuntu desktop flavors NM is usually the default, but apparently for recent Ubuntu Server, networkd might indeed be the default. I didn't notice that as, whenever I want to check what's going on in Ubuntu land, I tend to install the default desktop spin...

"LP is a fucking arsehole."

systemd's a lot bigger than Lennart, you know. If my grep fu is correct, out of 1543 commits to networkd, only 298 are from Lennart...
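
Those counts are reproducible to a first approximation against a checkout of the systemd tree (the exact numbers drift as development continues):

    git -C systemd log --oneline -- src/network | wc -l                                # all networkd commits
    git -C systemd log --oneline --author='Lennart Poettering' -- src/network | wc -l  # Lennart's share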

alain williams , 6 days
Old is good

in many respects when it comes to software because, over time, the bugs will have been found and squashed. Systemd brings in a lot of new code which will, naturally, have lots of bugs that will take time to find & remove. This is why we get problems like this DHCP one.

Much as I like the venerable init: it did need replacing. Systemd is one way to go - more flexible, etc., etc. Something event-driven is a good approach.

One of the main problems with systemd is that it has become too big; it has slurped up lots of functionality, which has removed choice and increased fragility. They should have concentrated on adding ways of talking to existing daemons, e.g. dhcpd, through an API/something. This would have reused old code (good) and allowed other implementations to use the API - thus letting people choose what they wanted to run.

But no: Poettering seems to want to build a Cathedral rather than a Bazaar.

He appears to want to make it his way or no way. This is bad; one reason that *nix is good is that different solutions to a problem can be chosen, one removed and another slotted in. This encourages competition, and the 'best of breed' comes out on top. Poettering is endangering that process.

Also: his refusal to accept patches to let it work on non-Linux Unix is just plain nasty.

oiseau , 4 days
Re: Old is good

Hello:

One of the main problems with systemd is that it has become too big; it has slurped up lots of functionality, which has removed choice and increased fragility.

IMO, there is a striking parallel between systemd and the registry in Windows OSs.

After many years of dealing with the registry (W98 to XPSP3) I ended up seeing the registry as a sort of developer sanctioned virus running inside the OS, constantly changing and going deeper and deeper into the OS with every iteration and as a result, progressively putting an end to the possibility of knowing/controlling what was going on inside your box/the OS.

Years later, when I learned about the existence of systemd (I was already running Ubuntu) and read up on what it did and how it did it, it dawned on me that systemd was nothing more than a registry class virus and it was infecting Linux_land at the behest of the developers involved.

So I moved from Ubuntu to PCLinuxOS and then on to Devuan.

Call me paranoid but I am convinced that there are people both inside and outside IT that actually want this and are quite willing to pay shitloads of money for it to happen.

I don't see this MS cozying up to Linux in various ways lately as a coincidence: these things do not happen just because or on a senior manager's whim.

What I do see (YMMV) is systemd being a sort of convergence of Linux with Windows, which will not be good for Linux and may well be its undoing.

Cheers,

O.

Rich 2 , 4 days
Re: Old is good

"Also: he refusal to accept patches to let it work on non-Linux Unix is just plain nasty"

Thank goodness this crap is unlikely to escape from Linux!

By the way, for a systemd-free Linux, try Void - it's rather good.

Michael Wojcik , 3 days
Re: Old is good

Much as I like the venerable init: it did need replacing.

For some use cases, perhaps. Not for any of mine. SysV init, or even BSD init, does everything I need a Linux or UNIX init system to do. And I don't need any of the other crap that's been built into or hung off systemd, either.

Orv , 3 days
Re: Old is good

BSD init and SysV init work pretty darn well for their original purpose -- servers with static IP addresses that are rebooted no more than once in a fortnight. Anything more dynamic starts to give it trouble.

Chairman of the Bored , 6 days
Too bad Linus swore off swearing

Situations like this go beyond a little "golly gee, I screwed up some C"...

jake , 6 days
Re: Too bad Linus swore off swearing

Linus doesn't care. systemd has nothing to do with the kernel ... other than the fact that the lead devs for systemd have been banned from working on the kernel because they don't play nice with others.

JLV , 6 days
how did it get to this?

I've been using runit, because I am too lazy and clueless to write init scripts reliably. It's very lightweight, runs on a bunch of systems and really does one thing - keep daemons up.

I am not saying it's the best - but it looks like it has a very small codebase, it doesn't do much and generally has not bugged me after I configured each service correctly. I believe other systems also exist to avoid using init scripts directly. Not Monit, as it relies on you configuring the daemon start/stop commands elsewhere.
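
For readers who have not met runit: a service is just a directory holding a short run script that runit supervises and restarts whenever it exits. A minimal sketch with a hypothetical daemon (service paths vary by distro; on Void, for instance, enabling it is a symlink into /var/service/):

    #!/bin/sh
    # /etc/sv/exampled/run -- runit re-runs this whenever the process exits
    exec 2>&1                     # fold stderr into the service log
    exec exampled --foreground    # hypothetical daemon; must NOT fork into the background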

On the other hand, systemd is a massive sprawl that does a lot of things - some of them useful, like dependencies - and generally has needed more looking after. Twice I've had errors on a Django server that, after a lot of looking around, turned out to be because something had changed in the Chef-related code that's exposed to systemd, and esoteric errors (not emitted by systemd) resulted when systemd could not make sense of the incorrect configuration.

I don't hate it - init scripts look a bit antiquated to me and they seem unforgiving to beginners - but I don't much like it. What I certainly do hate is how, in an OS that is supposed to be all about choice, sometimes excessively so, as in the window manager menagerie, we somehow ended up with one mandatory daemon scheduler on almost all distributions. Via, of all types of dependencies, the GUI layer. For a window manager that you may not even have installed.

Talk about the antithesis of the Unix philosophy of do one thing, do it well.

Oh, then there are also the security bugs and the project owner is an arrogant twat. That too.

Doctor Syntax , 6 days
Re: how did it get to this?

"init scripts look a bit antiquated to me and they seem unforgiving to beginners"

Init scripts are shell scripts. Shell scripts are as old as Unix. If you think that makes them antiquated, then maybe Unix-like systems are not for you. In practice any sub-system generally gets its own scripts installed with the rest of the S/W, so if being unforgiving puts beginners off tinkering with them, so much the better. If an experienced Unix user really needs to modify one of the system-provided scripts, their existing shell knowledge will let them do exactly what's needed. In the extreme, if you need to develop a new init script, you can do so in the same way as you'd develop any other script - edit and test from the command line.
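
To make that concrete, the classic skeleton is little more than a case statement. A minimal sketch with a hypothetical daemon (real distro scripts add LSB headers, status checks, and error handling):

    #!/bin/sh
    # /etc/init.d/exampled -- minimal SysV-style init script skeleton
    PIDFILE=/var/run/exampled.pid
    case "$1" in
      start)
        /usr/sbin/exampled &                          # hypothetical daemon
        echo $! > "$PIDFILE"
        ;;
      stop)
        kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
        ;;
      restart)
        "$0" stop
        "$0" start
        ;;
      *)
        echo "Usage: $0 {start|stop|restart}" >&2
        exit 1
        ;;
    esac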

onefang , 6 days
Re: how did it get to this?

"Init scripts are shell scripts."

While generally true, some sysv init style inits can handle init "scripts" written in any language.

sed gawk , 6 days
Re: how did it get to this?

I personally like openrc as an init system, but systemd is a symptom of the tooling problem.

For me it's a retrograde step, but again, it's Linux: one can, as you and I do, just remove systemd.

There are a lot of people in the industry now who don't seem able to cope with shell scripts, nor are they minded to research the arguments for or against shell as part of a Unix style of system design.

In conclusion: we are outnumbered, but it will eventually collapse under its own weight and a worthy successor shall rise, perhaps called SystemV; might have to shorten that name a bit.

AdamWill , 6 days
Just about nothing actually uses networkd

"In addition to Ubuntu and Red Hat Enterprise Linux, Systemd has been adopted as a service manager for Debian, Fedora, CoreOS, Mint, and SUSE Linux Enterprise Server. We're told RHEL 7, at least, does not use the vulnerable component by default."

I can tell you for sure that no version of Fedora does, either, and I'm fairly sure that neither does Debian, SLES or Mint. I don't know anything much about CoreOS, but https://coreos.com/os/docs/latest/network-config-with-networkd.html suggests it actually *might* use systemd-networkd.

systemd-networkd is not part of the core systemd init daemon. It's an optional component, and most distros use some other network manager (like NetworkManager or wicd) by default.

Christian Berger , 5 days
The important word here is "still"

I mean, commercial distributions seem to be particularly interested in trying out new things that can increase their number of support calls. It's probably just that networkd is either too new and therefore not yet in the release, or still works so badly that even the most rudimentary tests fail.

There is no reason to use systemd's NTP daemon, yet more and more distros ship with it enabled, instead of some sane NTP server.

NLCSGRV , 6 days
The Curse of Poettering strikes again.
_LC_ , 6 days
Now hang on, please!

Ser iss no neet to worry, systemd will becum stable soon after PulseAudio does.

Ken Hagan , 6 days
Re: Now hang on, please!

I won't hold my breath, then. I have a laptop at the moment that refuses to boot because (as I've discovered from looking at the journal offline) pulseaudio is in an infinite loop waiting for the successful detection of some hardware that, presumably, I don't have.

I imagine I can fix it by hacking the file-system (offline) so that fuckingpulse is no longer part of the boot configuration, but I shouldn't have to. A decent init system would be able to kick off everything else in parallel, and if one particular service doesn't come up properly then it just logs the error. I *thought* that was one of the claimed advantages of systemd, but apparently that's just a load of horseshit.

Obesrver1 , 5 days
Reason for disabling IPv6

It punches through NAT routers, making all your little goodies behind them directly accessible.

MS even supplies tunneling (IPv4 to IPv6), so if you're using Linux in a VM on an MS system you may still have it anyway.

NAT was always recommended in hardening your system; I prefer to keep all my idIoT devices behind one.

As they are just idiot devices.

In future I will need a NAT box that acts as a DNS server and offers some sort of solution for keeping IPv4.

Orv , 3 days
Re: Reason for disabling IPv6

My NAT router statefully firewalls incoming IPv6 by default, which I consider equivalently secure. NAT adds security mostly by accident, because it de-facto adds a firewall that blocks incoming packets. It's not the address translation itself that makes things more secure, it's the inability to route in from the outside.
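
The stateful protection Orv describes amounts to only a few rules. A hedged sketch in ip6tables terms for a router (the interface name is a placeholder; nftables syntax differs):

    # Drop unsolicited inbound IPv6, allow replies to outbound traffic --
    # roughly the protection NAT gets credit for, minus the address translation.
    ip6tables -P FORWARD DROP
    ip6tables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    ip6tables -A FORWARD -i lan0 -j ACCEPT    # "lan0" is a placeholder LAN interface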

dajames , 3 days
Re: Reason for disabling IPv6

You can use NAT with IPv6.

You can, but why would you want to?

NAT is a trick for connecting a whole LAN to a WAN using a single IPv4 address (useful with IPv4 because most ISPs don't give you a /24 when you sign up). If you have a native IPv6 allocation you'll have something like 2^64 addresses, so machines on your LAN can have an actual WAN-visible address of their own without needing a trick like NAT.

Using NAT with IPv6 is just missing the point.

JohnFen , 3 days
Re: Reason for disabling IPv6

"so machines on your LAN can have an actual WAN-visible address of their own without needing a trick like NAT."

Avoiding that configuration is exactly the use case for using NAT with IPv6. As others have pointed out, you can accomplish the same thing with IPv6 router configuration, but NAT is easier in terms of configuration and maintenance. Given that, and assuming that you don't want arbitrary machines opening ports that are visible to the internet, why not use NAT?

Also, if your goal is to make people more likely to move to IPv6, pointing out IPv4 methods that will work with IPv6 (even if you don't consider them optimal) seems like a really, really good idea. It eases the transition.

Destroy All Monsters , 5 days
Please, El Reg, these stories make me rage at breakfast. What's this?

The bug will come as another argument against Systemd as the Linux management tool continues to fight for the hearts and minds of admins and developers alike.

Less against systemd (which should get attacked on the design & implementation level) or against IPv6 than against the use of buffer-overflowable languages in 2018 in code that processes input from the Internet (it's not the Middle Ages anymore), or at least against the lack of very hard linting of the same.

But in the end, what did it was a violation of the Don't Repeat Yourself principle and a lack of sufficiently high-level data structures. The pointer into the buffer and the remaining buffer length are two discrete variables that need to be updated simultaneously to keep the invariant, and this happens in several places. This is just a catastrophe waiting to happen: you forget to update one once, and you are out! Use structs, and functions that update the structs correctly.

And use assertions in the code; this stuff all seems disturbingly assertion-free.

Excellent explanation by Felix Wilhelm:

https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1795921

The function receives a pointer to the option buffer buf, its remaining size buflen and the IA to be added to the buffer. While the check at (A) tries to ensure that the buffer has enough space left to store the IA option, it does not take the additional 4 bytes from the DHCP6Option header into account (B). Due to this the memcpy at (C) can go out-of-bound and *buflen can underflow [i.e. you suddenly have a gazillion byte buffer, Ed.] in (D) giving an attacker a very powerful and largely controlled OOB heap write starting at (E).
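
A sketch of the struct-and-single-update-point fix argued for above. Illustrative only, not systemd's actual code: the names are invented, and the option header size is folded into the one bounds check that the vulnerable code got wrong.

    #include <stdint.h>
    #include <string.h>

    #define OPT_HDR 4   /* DHCPv6 option header: 2-byte code + 2-byte length */

    typedef struct {
        uint8_t *pos;   /* write cursor into the option buffer */
        size_t   left;  /* bytes remaining; updated in exactly one place below */
    } obuf;

    static int obuf_put_option(obuf *b, uint16_t code,
                               const void *data, uint16_t len) {
        /* The vulnerable pattern effectively checked len alone, forgetting
         * OPT_HDR -- which let "left" underflow to a huge unsigned value. */
        if ((size_t)len + OPT_HDR > b->left)
            return -1;
        b->pos[0] = (uint8_t)(code >> 8);  b->pos[1] = (uint8_t)code;
        b->pos[2] = (uint8_t)(len  >> 8);  b->pos[3] = (uint8_t)len;
        memcpy(b->pos + OPT_HDR, data, len);
        b->pos  += OPT_HDR + len;
        b->left -= OPT_HDR + len;   /* cannot underflow: checked above */
        return 0;
    }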

TheSkunkyMonk , 5 days
Init is 1026 lines of code in one file and it works great.
Anonymous Coward , 5 days
"...and Poettering's occasionally controversial management of the tool."

Shouldn't that be "...Poettering's controversial management as a tool"?

clocKwize , 4 days
Re: Contractor rights

why don't we stop writing code in languages that make it so easy to screw up like this?

There are plenty about nowadays. I'd rather my DHCP client be a little bit slower at processing packets if I had more confidence it would not process them incorrectly and execute code hidden in said packets...

Anonymous Coward , 4 days
Switch, as easy as that

The circus that is called "Linux" has forced me to Devuan and the likes; however, the circus is getting worse and worse by the day, thus I have switched to the BSD world. I will learn that rather than sit back and watch this unfold. As many of us have been saying, the sudden switch to SystemD was rather quick; perhaps you guys need to go investigate why it really happened. Don't assume you know; go dig and you will find the answers. It's rather scary. Thus I bid the Linux world a farewell after 10 years of support. I will watch the grass dry out from the other side of the fence. It was destined to fail by means of infiltration and screw-it-up motive(s) from those we do not mention here.

oiseau , 3 days
Re: Switch, as easy as that

Hello:

As many of us have been saying, the sudden switch to SystemD was rather quick; perhaps you guys need to go investigate why it really happened. Don't assume you know; go dig and you will find the answers. It's rather scary ...

Indeed, it was rather quick and is very scary.

But there's really no need to dig much, just reason it out.

It's like a follow-the-money situation of sorts.

I'll try to sum it up in three short questions:

Q1: Hasn't the Linux philosophy (programs that do one thing and do it well) been a success?

A1: Indeed, in spite of the many init systems out there, it has been a success in stability and OS management. And it can easily be tested and debugged, which is an essential requirement.

Q2: So what would Linux need to have the practical equivalent of the registry in Windows for?

A2: So that whatever the registry does in/to Windows can also be done in/to Linux.

Q3: I see. And just who would want that to happen? Makes no sense, it is a huge step backwards.

A3: ....

Cheers,

O.

Dave Bell , 4 days
Reporting weakness

OK, so I was able to check through the link you provided, which says "up to and including 239", but I had just installed a systemd update, and when you said there was already a fix written, working its way through the distro update systems, all I had to do was check my log.

Linux Mint makes it easy.

But why didn't you say something such as "reported to affect systemd versions up to and including 239" and then give the link to the CVE? That failure looks like rather careless journalism.

W.O.Frobozz , 3 days
Hmm.

/sbin/init never had these problems. But then again /sbin/init didn't pretend to be the entire operating system.

[Nov 01, 2018] IBM was doing their best to demoralize their staff (and contractors) and annoy their customers as much as possible!

Notable quotes:
"... I worked as a contractor for IBM's IGS division in the late '90s and early 2000s at their third biggest customer, and even then, IBM was doing their best to demoralize their staff (and contractors) and annoy their customers as much as possible! ..."
"... IBM charged themselves 3x the actual price to customers for their ThinkPads at the time! This meant that I never had a laptop or desktop PC from IBM in the 8 years I worked there. If it wasn't for the project work I did I would not have had a PC to work on! ..."
"... What was strange is that every single time I got a pay cut, IBM would then announce that they had bought a new company! I would have quit long before I did, but I was tied to them while waiting for my Green Card to be approved. I know that raises are few in the current IBM for normal employees and that IBM always pleads poverty for any employee request. Yet, they somehow manage to pay billions of dollars for a new company. Strange that, isn't it? ..."
"... I moved to the company that had won the contract and regret not having the chance to tell that IBM manager what I thought about him and where he could stick the new laptop. ..."
"... After that experience I decided to never work for them in any capacity ever again. I feel pity for the current Red Hat employees and my only advice to them is to get out while they can. "DON'T FOLLOW THE RED HAT TO HELL" ..."
Nov 01, 2018 | theregister.co.uk

Edwin Tumblebunny: Ashes to ashes, dust to dust - Red Hat is dead.

Red Hat will be a distant memory in a few years as it gets absorbed by the abhorrent IBM culture and its bones picked clean by the IBM beancounters. Nothing good ever happens to a company bought by IBM.

I worked as a contractor for IBM's IGS division in the late '90s and early 2000s at their third biggest customer, and even then, IBM was doing their best to demoralize their staff (and contractors) and annoy their customers as much as possible!

Some examples:

The on-site IBM employees (and contractors) had to use Lotus Notes for email. That was probably the worst piece of software I have ever used - I think baboons on drugs could have done a better design job. IBM set up a T1 (1.54 Mbps) link between the customer and the local IBM hub for email, etc. It sounds great until you realize there were over 150 people involved and due to the settings of Notes replication, it could often take over an hour to actually download email to read.

To do my job I needed to install some IBM software. My PC did not have enough disk space for this software as well as the other software I needed. Rather than buy me a bigger hard disk I had to spend 8 hours a week installing and reinstalling software to do my job.

I waited three months for a $50 stick of memory to be approved. When it finally arrived my machine had been changed out (due to a new customer project) and the memory was not compatible! Since I worked on a lot of projects I often had machines supplied by the customer on my desk. So, I would use one of these as my personal PC and would get an upgrade when the next project started!

I was told I could not be supplied with a laptop or desktop from IBM as they were too expensive (my IBM division did not want to spend money on anything). IBM charged themselves 3x the actual price to customers for their ThinkPads at the time! This meant that I never had a laptop or desktop PC from IBM in the 8 years I worked there. If it wasn't for the project work I did I would not have had a PC to work on!

IBM has many strange and weird processes that allow them to circumvent the contract they have with their preferred contractor companies. This meant that for a number of years I ended up getting a pay cut. What was strange is that every single time I got a pay cut, IBM would then announce that they had bought a new company! I would have quit long before I did, but I was tied to them while waiting for my Green Card to be approved. I know that raises are few in the current IBM for normal employees and that IBM always pleads poverty for any employee request. Yet, they somehow manage to pay billions of dollars for a new company. Strange that, isn't it?

Eventually I was approved to get a laptop and excitedly watched it move slowly through the delivery system. I got confused when it was reported as delivered to Ohio rather than my work (not in Ohio). After some careful searching I discovered that my manager and his wife both worked for IBM from their home in, yes you can guess, Ohio. It looked like he had redirected my new laptop for his own use and most likely was going to send me his old one and claim it was a new one. I never got the chance to confront him about it, though, as IBM lost the contract with the customer that month and before the laptop should have arrived IBM was out! I moved to the company that had won the contract and regret not having the chance to tell that IBM manager what I thought about him and where he could stick the new laptop.

After that experience I decided to never work for them in any capacity ever again. I feel pity for the current Red Hat employees and my only advice to them is to get out while they can. "DON'T FOLLOW THE RED HAT TO HELL"

Certain Hollywood stars seem to be psychic types: https://twitter.com/JimCarrey/status/1057328878769721344

rmstock , 2 days

Re: "DON'T FOLLOW THE RED HAT TO HELL"

I sense that a global effort is ongoing to shut down open source software by brute force. First, the enforcement of the EU General Data Protection Regulation (GDPR) by ICANN.org to enable untraceable takeovers of domains. Then Microsoft buying GitHub, Linus Torvalds forced out of his own Linux kernel project because of the Code of Conduct, and now IBM buying Red Hat. I wrote the following at https://lulz.com/linux-devs-threaten-killswitch-coc-controversy-1252/ : "Torvalds should lawyer up. The problems are the large IT tech firms who platinum-donated all over the place in Open Source land. When IBM donated 1 billion USD to Linux in 2000 https://itsfoss.com/ibm-invest-1-billion-linux/ a friend who was vehemently against the GPL and what Torvalds was doing told me that in due time OSS would simply just go away.

These Community Organizers, not coders per se, are on a mission to overtake and control the Linux Foundation, and if they can't, they will search and destroy all of it, even if it destroys themselves. Coraline is merely an expendable pawn here. Torvalds is now facing unjust confrontations and charges resembling those at the nomination of Judge Brett Kavanaugh. Looking at the CoC document, it even might have been written by a Google executive; Google themselves are currently facing serious charges and lawsuits arising from their own Code of Conduct. See theintercept.com and their leaked video from the day after the election of 2016. They will do anything to pursue this. However, to pursue a personal bias or agenda by furnishing enactments or acts such as omitting contradicting facts (code), committing perjury, attending riots and harassments, cleansing Internet archives and search engines of exculpatory evidence, and ultimately hiring hit-men to exterminate witnesses of truth (developers), in an attempt to elevate bias to fabricated fact (code), are crimes and should be prosecuted accordingly."

[Nov 01, 2018] Will Red Hat Survive IBM Ownership

It does not matter if somebody "stresses independence": words are cheap. The mere fact that this is now IBM changes relationships. IBM executives also need to show "leadership," and that entails some "general direction" for Red Hat from now on. At the least, relationships with Microsoft and HP will be much cooler than before.
Also, IBM does not like "charity" projects like CentOS, and that will be affected too, no matter what executives tell you right now. Paradoxically, this greatly strengthens the position of Oracle Linux.
The status of IBM software in the corporate world (outside finance companies) is low, and their games with licenses (licensing products per core, etc.) are viewed by most of their customers as despicable. This was one of the reasons IBM lost its share in enterprise software. For example, greed in selling Tivoli more or less killed the software. All credit to Lou Gerstner, who initially defined this culture of relentless outsourcing and the cult of the "bottom line" (which was easy for him because he did not understand technology at all). His successor was even more active in driving the company into the ground. Rampant cost-arbitrage-driven offshoring has left a legacy of dissatisfied customers. Most projects are overpriced, and most of those that were priced more or less at industry level had bad-quality results and cost overruns.
IBM cut severance pay to one month, is firing people left and right, and is insensitive to the fired employees, and the result is enormous negativity toward the company. Good people are scared to work for them, and people are asking tough questions.
This has been the strategy since Gerstner. For Palmisano (a guy with a history diploma who switched into cybersecurity after retirement in 2012 and led Obama's cybersecurity commission) and Ginni Rometty, it was the complete neoliberal mantra of "shareholder value": grow earnings per share at all costs. They all managed IBM for financial appearance rather than for quality products. There was no focus on breakthrough innovation, market leadership in mega-trends (like cloud computing), or even giving excellent service to customers.
Ginni Rometty accepted bonuses during times of layoffs and outsourcing.
When a company has lost its architectural talent, the brand will eventually die. IBM is still granted a very high number of patents every year, and it is still a very large company in terms of revenue and number of employees. However, there are strong signs that its position in the technology industry might deteriorate further.
Nov 01, 2018 | www.itprotoday.com

Cormier also stressed Red Hat independence when we asked how the acquisition would affect ongoing partnerships in place with Amazon Web Services, Microsoft Azure, and other public clouds.

"One of the things you keep seeing in this is that we're going to run Red Hat as a separate entity within IBM," he said. "One of the reasons is business. We need to, and will, remain Switzerland in terms of how we interact with our partners. We're going to continue to prioritize what we do for our partners within our products on a business case perspective, including IBM as a partner. We're not going to do unnatural things. We're going to do the right thing for the business, and most importantly, the customer, in terms of where we steer our products."

Red Hat promises that independence will extend to Red Hat's community projects, such as its freely available Fedora Linux distribution, which is widely used by developers. When asked what impact the sale would have on Red Hat-maintained projects like Fedora and CentOS, Cormier replied, "None. We just came from an all-hands meeting from the company around the world for Red Hat, and we got asked this question and my answer was that the day after we close, I don't intend to do anything different or in any other way than we do our every day today. Arvind, Ginnie, Jim, and I have talked about this extensively. For us, it's business as usual. Whatever we were going to do for our road maps as a stand alone will continue. We have to do what's right for the community, upstream, our associates, and our business."

This all sounds good, but as they say, talk is cheap. Six months from now we'll have a better idea of how well IBM can walk the walk and leave Red Hat alone. If it can't and Red Hat doesn't remain clearly distinct from IBM ownership control, then Big Blue will have wasted $33 billion it can't afford to lose and put its future, as well as the future of Red Hat, in jeopardy.

[Oct 30, 2018] There are plenty of examples of people who were doing their jobs, IN SPADES, putting in tons of unpaid overtime, and generally doing whatever was humanly possible to make sure that whatever was promised to the customer was delivered within their span of control. As they grew older corporations threw them out like an empty can

Notable quotes:
"... The other alternative is a market-based life that, for many, will be cruel, brutish, and short. ..."
Oct 30, 2018 | features.propublica.org

Lorilynn King

Step back and think about this for a minute. There are plenty of examples of people who were doing their jobs, IN SPADES, putting in tons of unpaid overtime, and generally doing whatever was humanly possible to make sure that whatever was promised to the customer was delivered (within their span of control... I'm not going to get into a discussion of how IBM pulls the rug out from underneath contracts after they've been signed).

These people were, and still are, high performers, they are committed to the job and the purpose that has been communicated to them by their peers, management, and customers; and they take the time (their OWN time) to pick up new skills and make sure that they are still current and marketable. They do this because they are committed to doing the job to the best of their ability.... it's what makes them who they are.

IBM (and other companies) are firing these very people ***for one reason and one reason ONLY***: their AGE. They have the skills and they're doing their jobs. If the same person was 30 you can bet that they'd still be there. Most of the time it has NOTHING to do with performance or lack of concurrency. Once the employee is fired, the job is done by someone else. The work is still there, but it's being done by someone younger and/or of a different nationality.

The money that is being saved by these companies has to come from somewhere. People that are having to withdraw their retirement savings 20 or so years earlier than planned are going to run out of funds.... and when they're in nursing homes, guess who is going to be supporting them? Social security will be long gone, their kids have their own monetary challenges.... so it will be government programs.... maybe.

This is not just a problem that impacts the 40 and over crowd. This is going to impact our entire society for generations to come.

NoPolitician
The business reality you speak of can be tempered via government actions. A few things:
  • One of the major hardships here is laying someone off when they need income the most - to pay for their children's college education. To mitigate this, as a country we could make public education free. That takes off a lot of the sting; some people might relish a change in career in their 50s, except that the drop in salary is so steep when changing careers.
  • We could lower the retirement age to 55 and increase Social Security to more than a poverty-level existence. Being laid off when you're 50 or 55 - with little chance to be hired anywhere else - would not hurt as much.
  • We could offer federal wage subsidies for older workers to make them more attractive to hire. While some might see this as a thumb on the scale against younger workers, in reality it would be simply a counterweight to the thumb that is already there against older workers.
  • Universal health care equalizes the cost of older and younger workers.

The other alternative is a market-based life that, for many, will be cruel, brutish, and short.

[Oct 30, 2018] Soon after I started, the company fired hundreds of 50-something employees and put we "kids" in their jobs. Seeing that employee loyalty was a one way street at that place, I left after a couple of years. Best career move I ever made.

Oct 30, 2018 | features.propublica.org

Al Romig , Wednesday, April 18, 2018 5:20 AM

As a new engineering graduate, I joined a similar-sized multinational US-based company in the early '70s. Their recruiting pitch was, "Come to work here, kid. Do your job, keep your nose clean, and you will enjoy great, secure work until you retire on easy street".

Soon after I started, the company fired hundreds of 50-something employees and put we "kids" in their jobs. Seeing that employee loyalty was a one way street at that place, I left after a couple of years. Best career move I ever made.

GoingGone , Friday, April 13, 2018 6:06 PM
As a 25yr+ vet of IBM, I can confirm that this article is spot-on true. IBM used to be a proud and transparent company that clearly demonstrated that it valued its employees as much as it did its stock performance or dividend rate or EPS, simply because it is good for business. Those principles helped make and keep IBM atop the business world as the most trusted international brand and business icon of success for so many years. In 2000, all that changed when Sam Palmisano became the CEO. Palmisano's now infamous "Roadmap 2015" ran the company into the ground through its maniacal focus on increasing EPS at any and all costs. Literally. Like, its employees, employee compensation, benefits, skills, and education opportunities. Like, its products, product innovation, quality, and customer service. All of which resulted in the devastation of its technical capability and competitiveness, employee engagement, and customer loyalty. Executives seemed happy enough as their compensation grew nicely with greater financial efficiencies, and Palmisano got a sweet $270M+ exit package in 2012 for a job well done. The new CEO, Ginni Rometty, has since undergone a lot of scrutiny for her lack of business results, but she was screwed from day one. Of course, that doesn't let her off the hook for the business practices outlined in the article, but what do you expect: she was hand-picked by Palmisano and approved by the same board that thought Palmisano was golden.
Paul V Sutera , Tuesday, April 3, 2018 7:33 PM
In 1994, I saved my job at IBM for the first time, and survived. But I was 36 years old. I sat down at the desk of a man in his 50s, and found a few odds and ends left for me in the desk. Almost 20 years later, it was my turn to go. My health and well-being is much better now. Less money but better health. The sins committed by management will always be: "I was just following orders".

[Oct 30, 2018] IBM age discrimination

Notable quotes:
"... Correction, March 24, 2018: Eileen Maroney lives in Aiken, South Carolina. The name of her city was incorrect in the original version of this story. ..."
Oct 30, 2018 | features.propublica.org

Consider, for example, a planning presentation that former IBM executives said was drafted by heads of a business unit carved out of IBM's once-giant software group and charged with pursuing the "C," or cloud, portion of the company's CAMS strategy.

The presentation laid out plans for substantially altering the unit's workforce. It was shown to company leaders including Diane Gherson, the senior vice president for human resources, and James Kavanaugh, recently elevated to chief financial officer. Its language was couched in the argot of "resources," IBM's term for employees, and "EP's," its shorthand for early professionals or recent college graduates.

Among the goals: "Shift headcount mix towards greater % of Early Professional hires." Among the means: "[D]rive a more aggressive performance management approach to enable us to hire and replace where needed, and fund an influx of EPs to correct seniority mix." Among the expected results: "[A] significant reduction in our workforce of 2,500 resources."

A slide from a similar presentation prepared last spring for the same leaders called for "re-profiling current talent" to "create room for new talent." Presentations for 2015 and 2016 for the 50,000-employee software group also included plans for "aggressive performance management" and emphasized the need to "maintain steady attrition to offset hiring."

IBM declined to answer questions about whether either presentation was turned into company policy. The description of the planned moves matches what hundreds of older ex-employees told ProPublica they believe happened to them: They were ousted because of their age. The company used their exits to hire replacements, many of them young; to ship their work overseas; or to cut its overall headcount.

Ed Alpern, now 65, of Austin, started his 39-year run with IBM as a Selectric typewriter repairman. He ended as a project manager in October of 2016 when, he said, his manager told him he could either leave with severance and other parting benefits or be given a bad job review -- something he said he'd never previously received -- and risk being fired without them.

Albert Poggi, now 70, was a three-decade IBM veteran and ran the company's Palisades, New York, technical center where clients can test new products. When notified in November of 2016 he was losing his job to layoff, he asked his bosses why, given what he said was a history of high job ratings. "They told me," he said, "they needed to fill it with someone newer."

The presentations from the software group, as well as the stories of ex-employees like Alpern and Poggi, square with internal documents from two other major IBM business units. The documents for all three cover some or all of the years from 2013 through the beginning of 2018 and deal with job assessments, hiring, firing and layoffs.

The documents detail practices that appear at odds with how IBM says it treats its employees. In many instances, the practices in effect, if not intent, tilt against the company's older U.S. workers.

For example, IBM spokespeople and lawyers have said the company never considers a worker's age in making decisions about layoffs or firings.

But one 2014 document reviewed by ProPublica includes dates of birth. An ex-IBM employee familiar with the process said executives from one business unit used it to decide about layoffs or other job changes for nearly a thousand workers, almost two-thirds of them over 50.

Documents from subsequent years show that young workers are protected from cuts for at least a limited period of time. A 2016 slide presentation prepared by the company's global technology services unit, titled "U.S. Resource Action Process" and used to guide managers in layoff procedures, includes bullets for categories considered "ineligible" for layoff. Among them: "early professional hires," meaning recent college graduates.

In responding to age-discrimination complaints that ex-employees file with the EEOC, lawyers for IBM say that front-line managers make all decisions about who gets laid off, and that their decisions are based strictly on skills and job performance, not age.

But ProPublica reviewed spreadsheets that indicate front-line managers hardly acted alone in making layoff calls. Former IBM managers said the spreadsheets were prepared for upper-level executives and kept continuously updated. They list hundreds of employees together with codes like "lift and shift," indicating that their jobs were to be lifted from them and shifted overseas, and details such as whether IBM's clients had approved the change.

An examination of several of the spreadsheets suggests that, whatever the criteria for assembling them, the resulting list of those marked for layoff was skewed toward older workers. A 2016 spreadsheet listed more than 400 full-time U.S. employees under the heading "REBAL," which refers to "rebalancing," the process that can lead to laying off workers and either replacing them or shifting the jobs overseas. Using the job search site LinkedIn, ProPublica was able to locate about 100 of these employees and then obtain their ages through public records. Ninety percent of those found were 40 or older. Seventy percent were over 50.

IBM frequently cites its history of encouraging diversity in its responses to EEOC complaints about age discrimination. "IBM has been a leader in taking positive actions to ensure its business opportunities are made available to individuals without regard to age, race, color, gender, sexual orientation and other categories," a lawyer for the company wrote in a May 2017 letter. "This policy of non-discrimination is reflected in all IBM business activities."

But ProPublica found at least one company business unit using a point system that disadvantaged older workers. The system awarded points for attributes valued by the company. The more points a person garnered, according to the former employee, the more protected she or he was from layoff or other negative job change; the fewer points, the more vulnerable.

The arrangement appears on its face to favor younger newcomers over older veterans. Employees were awarded points for being relatively new at a job level or in a particular role. Those who worked for IBM for fewer years got more points than those who'd been there a long time.

The ex-employee familiar with the process said a 2014 spreadsheet from that business unit, labeled "IBM Confidential," was assembled to assess the job prospects of more than 600 high-level employees, two-thirds of them from the U.S. It included employees' years of service with IBM, which the former employee said was used internally as a proxy for age. Also listed was an assessment by their bosses of their career trajectories as measured by the highest job level they were likely to attain if they remained at the company, as well as their point scores.

The tilt against older workers is evident when employees' years of service are compared with their point scores. Those with no points and therefore most vulnerable to layoff had worked at IBM an average of more than 30 years; those with a high number of points averaged half that.

Perhaps even more striking is the comparison between employees' service years and point scores on the one hand and their superiors' assessments of their career trajectories on the other.

Along with many American employers, IBM has argued it needs to shed older workers because they're no longer at the top of their games or lack "contemporary" skills.

But among those sized up in the confidential spreadsheet, fully 80 percent of older employees -- those with the most years of service but no points and therefore most vulnerable to layoff -- were rated by superiors as good enough to stay at their current job levels or be promoted. By contrast, only a small percentage of younger employees with a high number of points were similarly rated.

"No major company would use tools to conduct a layoff where a disproportionate share of those let go were African Americans or women," said Cathy Ventrell-Monsees, senior attorney adviser with the EEOC and former director of age litigation for the senior lobbying giant AARP. "There's no difference if the tools result in a disproportionate share being older workers."

In addition to the point system that disadvantaged older workers in layoffs, other documents suggest that IBM has made increasingly aggressive use of its job-rating machinery to pave the way for straight-out firings, or what the company calls "management-initiated separations." Internal documents suggest that older workers were especially targeted.

As at many companies, IBM employees sit down with their managers at the start of each year and set goals for themselves. IBM graded employees on a scale of 1 to 4, with 1 being the top rating.

Those rated as 3 or 4 were given formal short-term goals known as personal improvement plans, or PIPs. Historically many managers were lenient, especially toward those with 3s whose ratings had dropped because of forces beyond their control, such as a weakness in the overall economy, ex-employees said.

But within the past couple of years, IBM appears to have decided the time for leniency was over. For example, a software group planning document for 2015 said that, over and above layoffs, the unit should seek to fire about 3,000 of the unit's 50,000-plus workers.

To make such deep cuts, the document said, executives should strike an "aggressive performance management posture." They needed to double the share of employees given low 3 and 4 ratings to at least 6.6 percent of the division's workforce. And because layoffs cost the company more than outright dismissals or resignations, the document said, executives should make sure that more than 80 percent of those with low ratings get fired or forced to quit.

Finally, the 2015 document said the division should work "to attract the best and brightest early professionals" to replace up to two-thirds of those sent packing. A more recent planning document -- the presentation to top executives Gherson and Kavanaugh for a business unit carved out of the software group -- recommended using similar techniques to free up money by cutting current employees to fund an "influx" of young workers.

In a recent interview, Poggi said he was resigned to being laid off. "Everybody at IBM has a bullet with their name on it," he said. Alpern wasn't nearly as accepting of being threatened with a poor job rating and then fired.

Alpern had a particular reason for wanting to stay on at IBM, at least until the end of last year. His younger son, Justin, then a high school senior, had been named a National Merit semifinalist. Alpern wanted him to be able to apply for one of the company's Watson scholarships. But IBM had recently narrowed eligibility so that only the children of current employees could apply; until 2014, children of retirees had been eligible as well.

Alpern had to make it through December for his son to be eligible.

But in August, he said, his manager ordered him to retire. He sought to buy time by appealing to superiors. But he said the manager's response was to threaten him with a bad job review that, he was told, would land him on a PIP, where his work would be scrutinized weekly. If he failed to hit his targets -- and his managers would be the judges of that -- he'd be fired and lose his benefits.

Alpern couldn't risk it; he retired on Oct. 31. His son, now a freshman on the dean's list at Texas A&M University, didn't get to apply.

"I can think of only a couple regrets or disappointments over my 39 years at IBM,"" he said, "and that's one of them."

'Congratulations on Your Retirement!'

Like any company in the U.S., IBM faces few legal constraints on reducing the size of its workforce. And with its no-disclosure strategy, it eliminated one of the last regular sources of information about its employment practices and the changing size of its American workforce.

But there remained the question of whether recent cutbacks were big enough to trigger state and federal requirements for disclosure of layoffs. And internal documents, such as a slide in a 2016 presentation titled "Transforming to Next Generation Digital Talent," suggest executives worried that "winning the talent war" for new young workers required IBM to improve the "attractiveness of (its) culture and work environment," a tall order in the face of layoffs and firings.

So the company apparently has sought to put a softer face on its cutbacks by recasting many as voluntary rather than the result of decisions by the firm. One way it has done this is by converting many layoffs to retirements.

Some ex-employees told ProPublica that, faced with a layoff notice, they were just as happy to retire. Others said they felt forced to accept a retirement package and leave. Several actively objected to the company treating their ouster as a retirement. The company nevertheless processed their exits as such.

Project manager Ed Alpern's departure was treated in company paperwork as a voluntary retirement. He didn't see it that way, because the alternative he said he was offered was being fired outright.

Lorilynn King, a 55-year-old IT specialist who worked from her home in Loveland, Colorado, had been with IBM almost as long as Alpern by May 2016 when her manager called to tell her the company was conducting a layoff and her name was on the list.

King said the manager told her to report to a meeting in Building 1 on IBM's Boulder campus the following day. There, she said, she found herself in a group of other older employees being told by an IBM human resources representative that they'd all be retiring. "I have NO intention of retiring," she remembers responding. "I'm being laid off."

ProPublica has collected documents from 15 ex-IBM employees who got layoff notices followed by a retirement package and has talked with many others who said they received similar paperwork. Critics say the sequence doesn't square well with the law.

"This country has banned mandatory retirement," said Seiner, the University of South Carolina law professor and former EEOC appellate lawyer. "The law says taking a retirement package has to be voluntary. If you tell somebody 'Retire or we'll lay you off or fire you,' that's not voluntary."

Until recently, the company's retirement paperwork included a letter from Rometty, the CEO, that read, in part, "I wanted to take this opportunity to wish you well on your retirement ... While you may be retiring to embark on the next phase of your personal journey, you will always remain a valued and appreciated member of the IBM family." Ex-employees said IBM stopped sending the letter last year.

IBM has also embraced another practice that leads workers, especially older ones, to quit on what appears to be a voluntary basis. It substantially reversed its pioneering support for telecommuting, telling people who've been working from home for years to begin reporting to certain, often distant, offices. Their other choice: Resign.

David Harlan had worked as an IBM marketing strategist from his home in Moscow, Idaho, for 15 years when a manager told him last year of orders to reduce the performance ratings of everybody at his pay grade. Then in February last year, when he was 50, came an internal video from IBM's new senior vice president, Michelle Peluso, which announced plans to improve the work of marketing employees by ordering them to work "shoulder to shoulder." Those who wanted to stay on would need to "co-locate" to offices in one of six cities.

Early last year, Harlan received an email congratulating him on "the opportunity to join your team in Raleigh, North Carolina." He had 30 days to decide on the 2,600-mile move. He resigned in June.

David Harlan worked for IBM for 15 years from his home in Moscow, Idaho, where he also runs a drama company. Early last year, IBM offered him a choice: Move 2,600 miles to Raleigh-Durham to begin working at an office, or resign. He left in June. (Rajah Bose for ProPublica)

After the Peluso video was leaked to the press, an IBM spokeswoman told the Wall Street Journal that the "vast majority" of people ordered to change locations and begin reporting to offices did so. IBM Vice President Ed Barbini said in an initial email exchange with ProPublica in July that the new policy affected only about 2,000 U.S. employees and that "most" of those had agreed to move.

But employees across a wide range of company operations, from the systems and technology group to analytics, told ProPublica they've also been ordered to co-locate in recent years. Many IBMers with long service said that they quit rather than sell their homes, pull children from school and desert aging parents. IBM declined to say how many older employees were swept up in the co-location initiative.

"They basically knew older employees weren't going to do it," said Eileen Maroney, a 63-year-old IBM product manager from Aiken, South Carolina, who, like Harlan, was ordered to move to Raleigh or resign. "Older people aren't going to move. It just doesn't make any sense." Like Harlan, Maroney left IBM last June.

Having people quit rather than being laid off may help IBM avoid disclosing how much it is shrinking its U.S. workforce and where the reductions are occurring.

Under the federal WARN Act, adopted in the wake of huge job cuts and factory shutdowns during the 1980s, companies laying off 50 or more employees who constitute at least one-third of an employer's workforce at a site have to give advance notice of layoffs to the workers, public agencies and local elected officials.

Similar laws in some states where IBM has a substantial presence are even stricter. California, for example, requires advance notice for layoffs of 50 or more employees, no matter what share of the workforce they represent. New York requires notice for layoffs of 25 employees who make up a third.

Because the laws were drafted to deal with abrupt job cuts at individual plants, they can miss reductions that occur over long periods among a workforce like IBM's that was, at least until recently, widely dispersed because of the company's work-from-home policy.

IBM's training sessions to prepare managers for layoffs suggest the company was aware of WARN thresholds, especially in states with strict notification laws such as California. A 2016 document entitled "Employee Separation Processing" and labeled "IBM Confidential" cautions managers about the "unique steps that must be taken when processing separations for California employees."

A ProPublica review of five years of WARN disclosures for a dozen states where the company had large facilities that shed workers found no disclosures in nine. In the other three, the company alerted authorities of just under 1,000 job cuts -- 380 in California, 369 in New York and 200 in Minnesota. IBM's reported figures are well below the actual number of jobs the company eliminated in these states, where in recent years it has shuttered, sold off or leveled plants that once employed vast numbers.

By contrast, other employers in the same 12 states reported layoffs last year alone totaling 215,000 people. They ranged from giant Walmart to Ostrom's Mushroom Farms in Washington state.

Whether IBM operated within the rules of the WARN Act, which are notoriously fungible, could not be determined because the company declined to provide ProPublica with details on its layoffs.

A Second Act, But Poorer

With 35 years at IBM under his belt, Ed Miyoshi had plenty of experience being pushed to take buyouts, or early retirement packages, and refusing them. But he hadn't expected to be pushed last fall.

Miyoshi, of Hopewell Junction, New York, had some years earlier launched a pilot program to improve IBM's technical troubleshooting. With the blessing of an IBM vice president, he was busily interviewing applicants in India and Brazil to staff teams to roll the program out to clients worldwide.

The interviews may have been why IBM mistakenly assumed Miyoshi was a manager, and so emailed him to eliminate the one U.S.-based employee still left in his group.

"That was me," Miyoshi realized.

In his sign-off email to colleagues shortly before Christmas 2016, Miyoshi, then 57, wrote: "I am too young and too poor to stop working yet, so while this is good-bye to my IBM career, I fully expect to cross paths with some of you very near in the future."

He did, and perhaps sooner than his colleagues had expected; he started as a subcontractor to IBM about two weeks later, on Jan. 3.

Miyoshi is an example of older workers who've lost their regular IBM jobs and been brought back as contractors. Some of them -- not Miyoshi -- became contract workers after IBM told them their skills were out of date and no longer needed.

Employment law experts said that hiring ex-employees as contractors can be legally dicey. It raises the possibility that the layoff was not for the stated reason but because the employee was targeted for age, race or gender.

IBM appears to recognize the problem. Ex-employees say the company has repeatedly told managers -- most recently earlier this year -- not to contract with former employees or sign on with third-party contracting firms staffed by ex-IBMers. But ProPublica turned up dozens of instances where the company did just that.

Only two weeks after IBM laid him off in December 2016, Ed Miyoshi of Hopewell Junction, New York, started work as a subcontractor to the company. But he took a $20,000-a-year pay cut. "I'm not a millionaire, so that's a lot of money to me," he says. (Demetrius Freeman for ProPublica)

Responding to a question in a confidential questionnaire from ProPublica, one 35-year company veteran from New York said he knew exactly what happened to the job he left behind when he was laid off. "I'M STILL DOING IT. I got a new gig eight days after departure, working for a third-party company under contract to IBM doing the exact same thing."

In many cases, of course, ex-employees are happy to have another job, even if it is connected with the company that laid them off.

Henry, the Columbus-based sales and technical specialist who'd been with IBM's "resiliency services" unit, discovered that he'd lost his regular IBM job because the company had purchased an Indian firm that provided the same services. But after a year out of work, he wasn't going to turn down the offer of a temporary position as a subcontractor for IBM, relocating data centers. It got money flowing back into his household and got him back where he liked to be, on the road traveling for business.

The compensation most ex-IBM employees make as contractors isn't comparable. While Henry said he collected the same dollar amount, it didn't include health insurance, which cost him $1,325 a month. Miyoshi said his paycheck is 20 percent less than what he made as an IBM regular.

"I took an over $20,000 hit by becoming a contractor. I'm not a millionaire, so that's a lot of money to me," Miyoshi said.

And lower pay isn't the only problem ex-IBM employees-now-subcontractors face. This year, Miyoshi's payable hours have been cut by an extra 10 "furlough days." Internal documents show that IBM repeatedly furloughs subcontractors without pay, often for two, three or more weeks a quarter. In some instances, the furloughs occur with little advance notice and at financially difficult moments. In one document, for example, it appears IBM managers, trying to cope with a cost overrun spotted in mid-November, planned to dump dozens of subcontractors through the end of the year, the middle of the holiday season.

Former IBM employees now on contract said the company controls costs by notifying contractors in the midst of projects they have to take pay cuts or lose the work. Miyoshi said that he originally started working for his third-party contracting firm for 10 percent less than at IBM, but ended up with an additional 10 percent cut in the middle of 2017, when IBM notified the contractor it was slashing what it would pay.

For many ex-employees, there are few ways out. Henry, for example, sought to improve his chances of landing a new full-time job by seeking assistance to finish a college degree through a federal program designed to retrain workers hurt by offshoring of jobs.

But when he contacted the Ohio state agency that administers the Trade Adjustment Assistance, or TAA, program, which provides assistance to workers who lose their jobs for trade-related reasons, he was told IBM hadn't submitted necessary paperwork. State officials said Henry could apply if he could find other IBM employees who were laid off with him, information that the company doesn't provide.

TAA is overseen by the Labor Department but is operated by states under individual agreements with Washington, so the rules can vary from state to state. But generally employers, unions, state agencies and groups of employers can petition for training help and cash assistance. Labor Department data compiled by the advocacy group Global Trade Watch shows that employers apply in about 40 percent of cases. Some groups of IBM workers have obtained retraining funds when they or their state have applied, but records dating back to the early 1990s show IBM itself has applied for and won taxpayer assistance only once, in 2008, for three Chicago-area workers whose jobs were being moved to India.

Teasing New Jobs

As IBM eliminated thousands of jobs in 2016, David Carroll, a 52-year-old Austin software engineer, thought he was safe.

His job was in mobile development, the "M" in the company's CAMS strategy. And if that didn't protect him, he figured he was only four months shy of qualifying for a program that gives employees who leave within a year of their three-decade mark access to retiree medical coverage and other benefits.

But the layoff notice Carroll received March 2 gave him three months -- not four -- to come up with another job. Having been a manager, he said he knew the gantlet he'd have to run to land a new position inside IBM.

Still, he went at it hard, applying for more than 50 IBM jobs, including one for a job he'd successfully done only a few years earlier. For his effort, he got one offer -- the week after he'd been forced to depart. He got severance pay but lost access to what would have been more generous benefits.

Edward Kishkill, then 60, of Hillsdale, New Jersey, had made a similar calculation.

A senior systems engineer, Kishkill recognized the danger of layoffs, but assumed he was immune because he was working in systems security, the "S" in CAMS and another hot area at the company.

The precaution did him no more good than it had Carroll. Kishkill received a layoff notice the same day, along with 17 of the 22 people on his systems security team, including Diane Moos. The notice said that Kishkill could look for other jobs internally. But if he hadn't landed anything by the end of May, he was out.

With a daughter who was a senior in high school headed to Boston University, he scrambled to apply, but came up dry. His last day was May 31, 2016.

For many, the fruitless search for jobs within IBM is the last straw, a final break with the values the company still says it embraces. Combined with the company's increasingly frequent request that departing employees train their overseas replacements, it has left many people bitter. Scores of ex-employees interviewed by ProPublica said that managers with job openings told them they weren't allowed to hire from layoff lists without getting prior, high-level clearance, something that's almost never given.

ProPublica reviewed documents that show that a substantial share of recent IBM layoffs have involved what the company calls "lift and shift," lifting the work of specific U.S. employees and shifting it to specific workers in countries such as India and Brazil. For example, a document summarizing U.S. employment in part of the company's global technology services division for 2015 lists nearly a thousand people as layoff candidates, with the jobs of almost half coded for lift and shift.

Ex-employees interviewed by ProPublica said the lift-and-shift process required their extensive involvement. For example, shortly after being notified she'd be laid off, Kishkill's colleague, Moos, was told to help prepare a "knowledge transfer" document and begin a round of conference calls and email exchanges with two Indian IBM employees who'd be taking over her work. Moos said the interactions consumed much of her last three months at IBM.

Next Chapters

While IBM has managed to keep the scale and nature of its recent U.S. employment cuts largely under the public's radar, the company drew some unwanted attention during the 2016 presidential campaign, when then-candidate Donald Trump lambasted it for eliminating 500 jobs in Minnesota, where the company has had a presence for a half century, and shifting the work abroad.

The company also has caught flak -- in places like Buffalo, New York; Dubuque, Iowa; Columbia, Missouri; and Baton Rouge, Louisiana -- for promising jobs in return for state and local incentives, then failing to deliver. In all, according to public officials in those and other places, IBM promised to bring on 3,400 workers in exchange for as much as $250 million in taxpayer financing but has hired only about half as many.

After Trump's victory, Rometty, in a move at least partly aimed at courting the president-elect, pledged to hire 25,000 new U.S. employees by 2020. Spokesmen said the hiring would increase IBM's U.S. employment total, although, given its continuing job cuts, the addition is unlikely to approach the promised hiring total.

When The New York Times ran a story last fall saying IBM now has more employees in India than the U.S., Barbini, the corporate spokesman, rushed to declare, "The U.S. has always been and remains IBM's center of gravity." But his stream of accompanying tweets and graphics focused as much on the company's record for racking up patents as on hiring people.

IBM has long been aware of the damage its job cuts can do to people. In a series of internal training documents to prepare managers for layoffs in recent years, the company has included this warning: "Loss of a job often triggers a grief reaction similar to what occurs after a death."

Most, though not all, of the ex-IBM employees with whom ProPublica spoke have weathered the loss and re-invented themselves.

Marjorie Madfis, the digital marketing strategist, couldn't land another tech job after her 2013 layoff, so she headed in a different direction. She started a nonprofit called Yes She Can Inc. that provides job skills development for young autistic women, including her 21-year-old daughter.

After almost two years of looking and desperate for useful work, Brian Paulson, the widely traveled IBM senior manager, applied for and landed a position as a part-time rural letter carrier in Plano, Texas. He now works as a contract project manager for a Las Vegas gaming and lottery firm.

Ed Alpern, who started at IBM as a Selectric typewriter repairman, watched his son go on to become a National Merit Scholar at Texas A&M University, but not a Watson scholarship recipient.

Lori King, the IT specialist and 33-year IBM veteran who's now 56, got in a parting shot. She added an addendum to the retirement papers the firm gave her that read in part: "It was never my plan to retire earlier than at least age 60 and I am not committing to retire. I have been informed that I am impacted by a resource action effective on 2016-08-22, which is my last day at IBM, but I am NOT retiring."

King has aced more than a year of government-funded coding boot camps and university computer courses, but has yet to land a new job.

David Harlan still lives in Moscow, Idaho, after refusing IBM's "invitation" to move to North Carolina, and is artistic director of the Moscow Art Theatre (Too).

Ed Miyoshi is still a technical troubleshooter working as a subcontractor for IBM.

Ed Kishkill, the senior systems engineer, works part time at a local tech startup, but pays his bills as an associate at a suburban New Jersey Staples store.

This year, Paul Henry was back on the road, working as an IBM subcontractor in Detroit, about 200 miles from where he lived in Columbus. On Jan. 8, he put in a 14-hour day and said he planned to call home before turning in. He died in his sleep.

Correction, March 24, 2018: Eileen Maroney lives in Aiken, South Carolina. The name of her city was incorrect in the original version of this story.


Peter Gosselin joined ProPublica as a contributing reporter in January 2017 to cover aging. He has covered the U.S. and global economies for, among others, the Los Angeles Times and The Boston Globe, focusing on the lived experiences of working people. He is the author of "High Wire: The Precarious Financial Lives of American Families."

Ariana Tobin is an engagement reporter at ProPublica, where she works to cultivate communities to inform our coverage. She was previously at The Guardian and WNYC. Ariana has also worked as digital producer for APM's Marketplace and contributed to outlets including The New Republic, On Being, the St. Louis Beacon and Bustle.

Production by Joanna Brenner and Hannah Birch. Art direction by David Sleight. Illustrations by Richard Borge.

[Oct 30, 2018] Elimination of loyalty: what corporations cloak as weeding out the low performers transparently catches the older workers in the net as well.

Oct 30, 2018 | features.propublica.org

Great White North, Thursday, March 22, 2018 11:29 PM

There's not a word of truth quoted in this article. That is, quoted from IBM spokespeople. It's the culture there now. They don't even realize that most of their customers have become deaf to the same crap from their Sales and Marketing BS, which is even worse than their HR speak.

The sad truth is that IBM became incapable of taking its innovation (IBM is indeed a world-beating, patent-generating machine) to market a long time ago. It has also lost the ability (if it ever really had it) to acquire other companies and foster their innovation either - they ran most into the ground. As a result, for nearly a decade revenues have declined and resource actions grown. The resource actions may seem to be the ugly problem, but they're only the symptom of a fat, greedy and pompous bureaucracy that's lost its ability to grow and stay relevant in a very competitive and changing industry. What they have been able to perfect and grow is their ability to downsize and return savings as dividends (Big Sam Palmisano's "innovation"). Oh, and for senior management to line their pockets.

Nothing IBM is currently doing is sustainable.

If you're still employed there, listen to the pain in the words of your fallen comrades and don't knock yourself out trying to stay afloat. Perhaps learn some BS of your own and milk your job (career? not...) until you find freedom and better pastures.

If you own stock, do like Warren Buffett, and sell it while it still has some value.

Danllo, Thursday, March 22, 2018 10:43 PM
This is NOTHING NEW! All major corporations have done and will do this at some point in their existence. Another industry that does this regularly, every 3 to 5 years, is the pharmaceutical industry. They'll decimate their sales forces in order to, as they like to put it, "right size" the company.

They'll cloak it as weeding out the low performers, but they'll try to catch the "older" workers in the net as well.

[Oct 30, 2018] Cutting 'Old Heads' at IBM

Notable quotes:
"... I took an early retirement package when IBM first started downsizing. I had 30 years with them, but I could see the writing on the wall so I got out. I landed an exec job with a biotech company some years later and inherited an IBM consulting team that were already engaged. I reviewed their work for 2 months then had the pleasure of terminating the contract and actually escorting the team off the premises because the work product was so awful. ..."
"... Every former or prospective IBM employee is a potential future IBM customer or partner. How you treat them matters! ..."
"... I advise IBM customers now. My biggest professional achievements can be measured in how much revenue IBM lost by my involvement - millions. Favorite is when IBM paid customer to stop the bleeding. ..."
Oct 30, 2018 | features.propublica.org

I took an early retirement package when IBM first started downsizing. I had 30 years with them, but I could see the writing on the wall so I got out. I landed an exec job with a biotech company some years later and inherited an IBM consulting team that were already engaged. I reviewed their work for 2 months then had the pleasure of terminating the contract and actually escorting the team off the premises because the work product was so awful.

They actually did a presentation of their interim results -- but it was a 52-slide package that they had presented to me in my previous job, just with the names and numbers changed.

DarthVaderMentor → dauwkus, Thursday, April 5, 2018 4:43 PM

Intellectual Capital Re-Use! LOL! Not many people in IBM realize that many, if not all, of the original IBM Consulting Group materials were created under the Type 2 Materials clause of the IBM contract, which means the customers actually own the IP rights to the documents. Can you imagine the mess if just one customer demanded to get paid for every re-use of the IP that was developed for them and then re-used over and over again?
NoGattaca → dauwkus, Monday, May 7, 2018 5:37 PM
Beautiful! Yeah, these companies are so quick to push out experienced people who have dedicated their lives to the firm -- how can you not, given all the hours and commitment it takes -- and they way underestimate the power of the network of those left for dead and their influence in that next career gig. Memories are long... very long when it comes to experiences like this.
davosil → North_40, Sunday, March 25, 2018 5:19 PM
True dat! Every former or prospective IBM employee is a potential future IBM customer or partner. How you treat them matters!
Playing Defense → North_40, Tuesday, April 3, 2018 4:41 PM
I advise IBM customers now. My biggest professional achievements can be measured in how much revenue IBM lost by my involvement -- millions. My favorite is when IBM paid a customer to stop the bleeding.

[Oct 30, 2018] American companies pay health insurance premiums based on their specific employee profiles

Notable quotes:
"... As long as companies pay for their employees' health insurance they will have an incentive to fire older employees. ..."
"... The answer is to separate health insurance from employment. Companies can't be trusted. Not only health care, but retirement is also sorely abused by corporations. All the money should be in protected employee based accounts. ..."
Oct 30, 2018 | features.propublica.org

sometimestheyaresomewhatright, Thursday, March 22, 2018 4:13 PM

American companies pay health insurance premiums based on their specific employee profiles. Insurance companies compete with each other for the business, but the costs are actual, based on the profile of the pool of employees. So American companies fire older workers just to lower the average age of their employees. Statistically this is going to lower their health care costs.

As long as companies pay for their employees' health insurance they will have an incentive to fire older employees. They have an incentive to fire sick employees and employees with genetic risks. Those are harder to implement as ways to lower costs. Firing older employees is simple to do, just look up their ages.

The answer is to separate health insurance from employment. Companies can't be trusted. Not only health care, but retirement is also sorely abused by corporations. All the money should be in protected employee based accounts.

By the way, most tech companies are actually run by older people. The goal is to broom out mid-level people based on age. Nobody is going to suggest to a sixty-year-old president that he should fire himself for the good of the company.

[Oct 30, 2018] It's all about making the numbers so the management can present a Potemkin Village of profits and ever-increasing growth sufficient to get bonuses. There is no relation to any sort of quality or technological advancement, just HR 3-card monte

Notable quotes:
"... It's no coincidence whatsoever that Diane Gherson, mentioned prominently in the article, blasted out an all-employees email crowing about IBM being a great place to work according to (ahem) LinkedIn. I desperately want to post a link to this piece in the corporate Slack, but that would get me fired immediately instead of in a few months at the next "resource action." It's been a whole 11 months since our division had one, so I know one is coming soon. ..."
"... I used to say when I was there that: "After every defeat, they pin medals on the generals and shoot the soldiers". ..."
"... 1990 is also when H-1B visa rules were changed so that companies no longer had to even attempt to hire an American worker as long as the job paid $60,000, which hasn't changed since. This article doesn't even mention how our work visa system facilitated and even rewarded this abuse of Americans. ..."
"... Well, starting in the 1980s, the American management was allowed by Reagan to get rid of its workforce. ..."
"... It's all about making the numbers so the management can present a Potemkin Village of profits and ever-increasing growth sufficient to get bonuses. There is no relation to any sort of quality or technological advancement, just HR 3-card monte. They have installed air bearing in Old Man Watson's coffin as it has been spinning ever faster ..."
"... Corporate America executive management is all about stock price management. Their bonus's in the millions of dollars are based on stock performance. With IBM's poor revenue performance since Ginny took over, profits can only be maintained by cost reduction. Look at the IBM executive's bonus's throughout the last 20 years and you can see that all resource actions have been driven by Palmisano's and Rominetty's greed for extravagant bonus's. ..."
"... Also worth noting is that IBM drastically cut the cap on it's severance pay calculation. Almost enough to make me regret not having retired before that changed. ..."
"... Yeah, severance started out at 2 yrs pay, went to 1 yr, then to 6 mos. and is now 1 month. ..."
"... You need to investigate AT&T as well, as they did the same thing. I was 'sold' by IBM to AT&T as part of he Network Services operation. AT&T got rid of 4000 of the 8000 US employees sent to AT&T within 3 years. Nearly everyone of us was a 'senior' employee. ..."
Oct 30, 2018 | disqus.com

dragonflap • 7 months ago

I'm a 49-year-old SW engineer who started at IBM as part of an acquisition in 2000. I got laid off in 2002 when IBM started sending reqs to Bangalore in batches of thousands. After various adventures, I rejoined IBM in 2015 as part of the "C" organization referenced in the article.

It's no coincidence whatsoever that Diane Gherson, mentioned prominently in the article, blasted out an all-employees email crowing about IBM being a great place to work according to (ahem) LinkedIn. I desperately want to post a link to this piece in the corporate Slack, but that would get me fired immediately instead of in a few months at the next "resource action." It's been a whole 11 months since our division had one, so I know one is coming soon.

Stewart Dean • 7 months ago

The lead-in to this piece makes it sound like IBM was forced into these practices by inescapable forces. I'd say not, rather that it pursued them because a) the management was clueless about how to lead IBM in the new environment and new challenges so b) it started to play with numbers to keep the (apparent) profits up....to keep the bonuses coming. I used to say when I was there that: "After every defeat, they pin medals on the generals and shoot the soldiers".

And then there's the Pig with the Wooden Leg shaggy dog story that ends with the punch line, "A pig like that you don't eat all at once", which has a lot of the flavor of how many of us saw our jobs at IBM die a slow death.

IBM is about to fall out of the sky, much as General Motors did. How could that happen? By endlessly beating the cow to get more milk.

IBM was hiring right through the Great Depression, such that It Did Not Pay Unemployment Insurance, because it never laid people off. Until about 1990, your manager was responsible for making sure you had everything you needed to excel and grow....and you would find people who had started on the loading dock and had become Senior Programmers. But then about 1990, IBM started paying unemployment insurance....just out of the goodness of its heart. Right.

CRAW → Stewart Dean • 7 months ago

1990 is also when H-1B visa rules were changed so that companies no longer had to even attempt to hire an American worker as long as the job paid $60,000, which hasn't changed since. This article doesn't even mention how our work visa system facilitated and even rewarded this abuse of Americans.

DDRLSGC → Stewart Dean • 7 months ago

Well, starting in the 1980s, the American management was allowed by Reagan to get rid of its workforce.

Georgann Putintsev → Stewart Dean • 7 months ago

I found that other ex-IBMers respect other ex-IBMers' work ethics, knowledge and initiative.

Other companies are happy to get them as a valuable resource. In '89 when our Palo Alto Datacenter moved, we were given three options: 1.) become a Programmer (w/training), 2.) move to Boulder, or 3.) leave.

I got my training with programming experience and left IBM in '92, when for 4 yrs IBM offered really good incentives for leaving the company. The Executives thought that the IBM Mainframe/MVS z/OS+ was on the way out and the Laptop (Small but Increasing Capacity) Computer would take over everything.

It didn't. It did allow many skilled IBMers to succeed outside of IBM and helped build up our customers' skill sets. And like many, when the opportunity arose to return, I did. In '91 I was accidentally given a male co-worker's paycheck, and that was one of the reasons for leaving. During my various contract work outside, I bumped into other male IBMers who had left too, some I had trained, and when they disclosed that their salary (which was 20-40% higher than mine) was the reason they left, I knew I had made the right decision.

Women tend to under-value themselves and their capabilities. Contracting also taught me that at companies with 70% employees and 30% contractors, the contractors would be let go if quarterly expenditures were exceeded.

I first contracted with IBM in '98 and when I decided to re-join IBM in '01, I had (3) job offers and I took the most lucrative, exciting one, to focus on fixing & improving DB2z Qry Parallelism. I developed a targeted L3 Technical Change Team to help L2 Support reduce reported Customer problems and improve our product. The instability within IBM remained and I saw IBM try to eliminate aging, salaried, benefited employees. The routine -- 1.) find a job within IBM ... or 2.) leave ... -- was now standard.

While my salary had more than doubled since I left IBM the first time, it still wasn't near that of other male counterparts. There was the continual rating competition based on salary-ranged titles, and the timing of a title raise came after a round of layoffs, not before. I had another advantage going, and that was that my changed, reduced retirement benefits helped me stay there. It all comes down to the numbers that Mgmt is told to cut to save IBM. While much of this article implies others were hired, at our Silicon Valley location and other locations they had no intent to backfill. So the already burdened employees were laden with more workloads & stress.

In the early to mid 2000s, IBM set up a counter lab in China where they were paying 1/4th U.S. salaries, and many SVL IBMers went to CSDL to train our new world 24x7 support employees. But many were not IBM-loyal and their attrition rates were very high, so it fell to a wave of new hires at SVL to help address it.

Stewart Dean → Georgann Putintsev • 7 months ago

It's all about making the numbers so the management can present a Potemkin Village of profits and ever-increasing growth sufficient to get bonuses. There is no relation to any sort of quality or technological advancement, just HR 3-card monte. They have installed air bearings in Old Man Watson's coffin as it has been spinning ever faster.

IBM32_retiree • 7 months ago

Corporate America executive management is all about stock price management. Their bonuses in the millions of dollars are based on stock performance. With IBM's poor revenue performance since Ginni took over, profits can only be maintained by cost reduction. Look at IBM executives' bonuses throughout the last 20 years and you can see that all resource actions have been driven by Palmisano's and Rometty's greed for extravagant bonuses.

Dan Yurman • 7 months ago

Bravo ProPublica for another "sock it to them" article -- journalism honoring the spirit of great newspapers everywhere: that the refuge of justice in hard times is with the press.

Felix Domestica • 7 months ago

Also worth noting is that IBM drastically cut the cap on its severance pay calculation. Almost enough to make me regret not having retired before that changed.

RonF → Felix Domestica • 7 months ago

Yeah, severance started out at 2 yrs pay, went to 1 yr, then to 6 mos. and is now 1 month.

mjmadfis → RonF • 7 months ago

When I was let go in June 2013 it was 6 months severance.

Terry Taylor • 7 months ago

You need to investigate AT&T as well, as they did the same thing. I was 'sold' by IBM to AT&T as part of the Network Services operation. AT&T got rid of 4,000 of the 8,000 US employees sent to AT&T within 3 years. Nearly every one of us was a 'senior' employee.

weelittlepeople → Terry Taylor • 7 months ago

Good Ol' Ma Bell is following the IBM playbook to a tee

emnyc • 7 months ago

ProPublica deserves a Pulitzer for this article and all the extensive research that went into this investigation.

Incredible job! Congrats.

On a separate note, IBM should be ashamed of themselves, and the executive team that enabled all of this should be fired.

WmBlake • 7 months ago

As a permanent old contractor and free-enterprise defender myself, I don't blame IBM a bit for wanting to cut the fat. But for the outright *lies, deception and fraud* that they use to break laws, weasel out of obligations... really just makes me want to shoot them... and I never even worked for them.

Michael Woiwood • 7 months ago

Great Article.

Where I worked, in Rochester, MN, people have known what is happening for years. My last years with IBM were the most depressing time in my life.

I hear a rumor that IBM would love to close plants they no longer use, but the plants are so environmentally polluted that it is cheaper to maintain them than to clean them up and sell.

scorcher14 • 7 months ago

One of the biggest driving factors in age discrimination is health insurance costs, not salary. It can cost 4-5x as much to insure an older employee vs. a younger one, and employers know this. THE #1 THING WE CAN DO TO STOP AGE DISCRIMINATION IS TO MOVE AWAY FROM OUR EMPLOYER-PROVIDED INSURANCE SYSTEM. It could be single-payer, but it could also be a robust individual market with enough pool diversification to make it viable. Freeing employers from this cost burden would allow them to pick the right talent regardless of age.

DDRLSGC → scorcher14 • 7 months ago

American businesses have constantly fought against single payer since the end of World War II, so why should I feel sorry for them when, all of a sudden, they are complaining about health care costs? It is outrageous that workers have to face age discrimination; however, the CEOs don't have to deal with that issue since they belong to a tiny group of people who can land a job anywhere else.

pieinthesky → scorcher14 • 7 months ago

Single payer won't help. We have single payer in Canada and just as much age discrimination in employment. Society in general does not like older people, so unless you're a doctor, judge or pharmacist you will face age bias. It's even worse in popular culture, never mind in employment.

OrangeGina → scorcher14 • 7 months ago

I agree. Yet, a determined company will find other methods, explanations and excuses.

JohnCordCutter • 7 months ago

Thanks for the great article. I left IBM last year. USA based. 49. Product Manager in one of IBM's strategic initiatives, but got told to relocate or leave. I found another job and left. I came to IBM from an acquisition. My only regret is, I wish I had left this toxic environment earlier. It truly is a dreadful place to work.

60 Soon • 7 months ago

The methodology has trickled down to smaller companies pursuing the same net results for headcount reduction. The similarities to my experience were painful to read. The grief I felt after my job was "eliminated" 10 years ago, while the Recession was at its worst and shortly after my 50th birthday, came flooding back. I have never recovered financially but have started writing a murder mystery. The first victim? The CEO who let me go. It's true. Revenge is best served cold.

donttreadonme9 • 7 months ago

Well written. People like me have experienced exactly what you wrote. IBM is a shadow of its former greatness, and I have advised my children to stay away from IBM and companies like it as they start their careers. IBM is a corrupt company. Shame on them!

annapurna • 7 months ago

I hope they find some way to bring a class action lawsuit against these assholes.

Mark → annapurna • 7 months ago

I suspect someone will end up hunting them down with an axe at some point. That's the only way they'll probably learn. I don't know about IBM specifically, but when Carly Fiorina ran HP, she travelled with and even went into engineering labs with an armed security detail.

OrangeGina → Mark • 7 months ago

all the bigwig CEOs have these black SUV security details now.

Sarahw • 7 months ago

IBM has been using these tactics at least since the 1980s, when my father was let go for similar 'reasons.'

Vin • 7 months ago

Was let go after 34 years of service. My Resource Action letter had additional lines after '...unless you are offered ... position within IBM before that date.', implying don't even try to look for a position. The lines were: 'Additional business controls are in effect to manage the business objectives of this resource action, therefore, job offers within (the name of division) will be highly unlikely.'

Mark → Vin • 7 months ago

Absolutely and utterly disgusting.

Greybeard • 7 months ago

I've worked for a series of vendors for over thirty years. A job at IBM used to be the brass ring; nowadays, not so much.

I've heard persistent rumors from IBMers that U.S. headcount is below 25,000 nowadays. Given events like the recent downtime of the internal systems used to order parts (5 or so days -- the website was down because the staff who maintained it were let go without replacements), it's hard not to see the spiral continuing down the drain.

What I can't figure out is whether Rometty and cronies know what they're doing or are just clueless. Either way, the result is the same: destruction of a once-great company and brand. Tragic.

ManOnTheHill → Greybeard • 7 months ago

Well, none of these layoffs/ageist RIFs affect the execs, so they don't see the effects, or they see the effects but attribute them to some other cause.

(I'm surprised the article doesn't address this part of the story; how many affected by layoffs are exec/senior management? My bet is very few.)

ExIBMExec → ManOnTheHill • 7 months ago

I was a D-banded exec (Director-level) who was impacted and I know even some VPs who were affected as well, so they do spread the pain, even in the exec ranks.

ManOnTheHill → ExIBMExec • 7 months ago

That's different from what I have seen in companies I have worked for (like HP). There, RIFs (Reduction in Force, their acronym for layoffs) went to the director level and no further up.

[Oct 30, 2018] Cutting Old Heads at IBM by Peter Gosselin and Ariana Tobin

Mar 22, 2018 | features.propublica.org

This story was co-published with Mother Jones.

For nearly a half century, IBM came as close as any company to bearing the torch for the American Dream.

As the world's dominant technology firm, payrolls at International Business Machines Corp. swelled to nearly a quarter-million U.S. white-collar workers in the 1980s. Its profits helped underwrite a broad agenda of racial equality, equal pay for women and an unbeatable offer of great wages and something close to lifetime employment, all in return for unswerving loyalty.


But when high tech suddenly started shifting and companies went global, IBM faced the changing landscape with a distinction most of its fiercest competitors didn't have: a large number of experienced and aging U.S. employees.

The company reacted with a strategy that, in the words of one confidential planning document, would "correct seniority mix." It slashed IBM's U.S. workforce by as much as three-quarters from its 1980s peak, replacing a substantial share with younger, less-experienced and lower-paid workers and sending many positions overseas. ProPublica estimates that in the past five years alone, IBM has eliminated more than 20,000 American employees ages 40 and over, about 60 percent of its estimated total U.S. job cuts during those years.

In making these cuts, IBM has flouted or outflanked U.S. laws and regulations intended to protect later-career workers from age discrimination, according to a ProPublica review of internal company documents, legal filings and public records, as well as information provided via interviews and questionnaires filled out by more than 1,000 former IBM employees.

Among ProPublica's findings, IBM:

  • Denied older workers information the law says they need in order to decide whether they've been victims of age bias, and required them to sign away the right to go to court or join with others to seek redress.
  • Targeted people for layoffs and firings with techniques that tilted against older workers, even when the company rated them high performers. In some instances, the money saved from the departures went toward hiring young replacements.
  • Converted job cuts into retirements and took steps to boost resignations and firings. The moves reduced the number of employees counted as layoffs, where high numbers can trigger public disclosure requirements.
  • Encouraged employees targeted for layoff to apply for other IBM positions, while quietly advising managers not to hire them and requiring many of the workers to train their replacements.
  • Told some older employees being laid off that their skills were out of date, but then brought them back as contract workers, often for the same work at lower pay and fewer benefits.

IBM declined requests for the numbers or age breakdown of its job cuts. ProPublica provided the company with a 10-page summary of its findings and the evidence on which they were based. IBM spokesman Edward Barbini said that to respond the company needed to see copies of all documents cited in the story, a request ProPublica could not fulfill without breaking faith with its sources. Instead, ProPublica provided IBM with detailed descriptions of the paperwork. Barbini declined to address the documents or answer specific questions about the firm's policies and practices, and instead issued the following statement:

"We are proud of our company and our employees' ability to reinvent themselves era after era, while always complying with the law. Our ability to do this is why we are the only tech company that has not only survived but thrived for more than 100 years."

With nearly 400,000 people worldwide, and tens of thousands still in the U.S., IBM remains a corporate giant. How it handles the shift from its veteran baby-boom workforce to younger generations will likely influence what other employers do. And the way it treats its experienced workers will eventually affect younger IBM employees as they too age.

Fifty years ago, Congress made it illegal with the Age Discrimination in Employment Act, or ADEA, to treat older workers differently than younger ones with only a few exceptions, such as jobs that require special physical qualifications. And for years, judges and policymakers treated the law as essentially on a par with prohibitions against discrimination on the basis of race, gender, sexual orientation and other categories.

In recent decades, however, the courts have responded to corporate pleas for greater leeway to meet global competition and satisfy investor demands for rising profits by expanding the exceptions and shrinking the protections against age bias.

"Age discrimination is an open secret like sexual harassment was until recently," said Victoria Lipnic, the acting chair of the Equal Employment Opportunity Commission, or EEOC, the independent federal agency that administers the nation's workplace anti-discrimination laws.

"Everybody knows it's happening, but often these cases are difficult to prove" because courts have weakened the law, Lipnic said. "The fact remains it's an unfair and illegal way to treat people that can be economically devastating."

Many companies have sought to take advantage of the court rulings. But the story of IBM's downsizing provides an unusually detailed portrait of how a major American corporation systematically identified employees to coax or force out of work in their 40s, 50s and 60s, a time when many are still productive and need a paycheck, but face huge hurdles finding anything like comparable jobs.

The dislocation caused by IBM's cuts has been especially great because until recently the company encouraged its employees to think of themselves as "IBMers" and many operated under the assumption that they had career-long employment.

When the ax suddenly fell, IBM provided almost no information about why an employee was cut or who else was departing, leaving people to piece together what had happened through websites, listservs and Facebook groups such as "Watching IBM" or "Geographically Undesirable IBM Marketers," as well as informal support groups.

Marjorie Madfis, at the time 57, was a New York-based digital marketing strategist and 17-year IBM employee when she and six other members of her nine-person team -- all women in their 40s and 50s -- were laid off in July 2013. The two who remained were younger men.

Since her specialty was one that IBM had said it was expanding, she asked for a written explanation of why she was let go. The company declined to provide it.

"They got rid of a group of highly skilled, highly effective, highly respected women, including me, for a reason nobody knows," Madfis said in an interview. "The only explanation is our age."

Brian Paulson, also 57, a senior manager with 18 years at IBM, had been on the road for more than a year overseeing hundreds of workers across two continents as well as hitting his sales targets for new services, when he got a phone call in October 2015 telling him he was out. He said the caller, an executive who was not among his immediate managers, cited "performance" as the reason, but refused to explain what specific aspects of his work might have fallen short.

It took Paulson two years to land another job, even though he was equipped with an advanced degree, continuously employed at high-level technical jobs for more than three decades and ready to move anywhere from his Fairview, Texas, home.

"It's tough when you've worked your whole life," he said. "The company doesn't tell you anything. And once you get to a certain age, you don't hear a word from the places you apply."

Paul Henry, a 61-year-old IBM sales and technical specialist who loved being on the road, had just returned to his Columbus home from a business trip in August 2016 when he learned he'd been let go. When he asked why, he said an executive told him to "keep your mouth shut and go quietly."

Henry was jobless more than a year, ran through much of his savings to cover the mortgage and health insurance and applied for more than 150 jobs before he found a temporary slot.

"If you're over 55, forget about preparing for retirement," he said in an interview. "You have to prepare for losing your job and burning through every cent you've saved just to get to retirement."

IBM's latest actions aren't anything like what most ex-employees with whom ProPublica talked expected from their years of service, or what today's young workers think awaits them -- or are prepared to deal with -- later in their careers.

"In a fast-moving economy, employers are always going to be tempted to replace older workers with younger ones, more expensive workers with cheaper ones, those who've performed steadily with ones who seem to be up on the latest thing," said Joseph Seiner, an employment law professor at the University of South Carolina and former appellate attorney for the EEOC.

"But it's not good for society," he added. "We have rules to try to maintain some fairness in our lives, our age-discrimination laws among them. You can't just disregard them."

[Oct 30, 2018] Red Hat hired the CentOS developers 4.5 years ago

Oct 30, 2018 | linux.slashdot.org

quantaman (517394), Sunday October 28, 2018 @04:22PM (#57550805)

Re:Well at least we'll still have Cent (Score: 4, Informative)
Fedora is fully owned by Red Hat, and CentOS requires the availability of the Red Hat repositories, which they aren't obliged to make public to non-customers.

Fedora is fully under Red Hat's control. It's used as a bleeding-edge distro for hobbyists and as a testing ground for code before it goes into RHEL. I doubt it's going away, since it does a great job of establishing mindshare, but no business in their right mind is going to run Fedora in production.

But CentOS started as a separate organization with a fairly adversarial relationship to Red Hat since it really is free RHEL which cuts into their actual customer base. They didn't need Red Hat repos back then, just the code which they rebuilt from scratch (which is why they were often a few months behind).

If IBM kills CentOS a new one will pop up in a week, that's the beauty of the GPL.

Luthair (847766), Sunday October 28, 2018 @04:22PM (#57550799)
Re:Well at least we'll still have Cent (Score: 3)

Red Hat hired the CentOS developers 4.5 years ago.

[Oct 30, 2018] We run just about everything on CentOS around here, downstream of RHEL. Should we be worried?

Oct 30, 2018 | arstechnica.com

Muon , Ars Scholae Palatinae 6 hours ago Popular

We run just about everything on CentOS around here, downstream of RHEL. Should we be worried?
brandnewmath , Smack-Fu Master, in training et Subscriptor 6 hours ago Popular
We'll see. Companies in an acquisition always rush to explain how nothing will change to reassure their customers. But we'll see.
Kilroy420 , Ars Tribunus Militum 6 hours ago Popular
Perhaps someone can explain this... Red Hat's revenue and assets barely total about $5B. Even factoring in market share and capitalization, how the hey did IBM come up with $34B cash being a justifiable purchase price??

Honestly, why would Red Hat have said no?

dorkbert , Ars Tribunus Militum 6 hours ago Popular
My personal observation of IBM over the past 30 years or so is that everything it acquires dies horribly.
barackorama , Smack-Fu Master, in training 6 hours ago
...IBM's own employees see it as a company in free fall. This is not good news.

In other news, property values in Raleigh will rise even more...

Moodyz , Ars Centurion 6 hours ago Popular
Quote:
This is fine

Looking back at what's happened with many of IBM's past acquisitions, I'd say no, not quite fine.
I am not your friend , Wise, Aged Ars Veteran et Subscriptor 6 hours ago Popular
I just can't comprehend that price. Cloud has a rich future, but I didn't even know Red Hat had any presence there, let alone $35 billion worth.
jandrese , Ars Tribunus Angusticlavius et Subscriptor 6 hours ago Popular
50me12 wrote:
Will IBM even know what to do with them?

IBM has been fumbling around for a while. They didn't know how to sell Watson; they sold it like a weird magical drop-in service and it failed repeatedly, where really it should be a long-term project that you bring customers along for the ride...

I had a buddy using their cloud service and they went to spin up servers and IBM was all "no man we have to set them up first".... like that's not cloud IBM...

If IBM can't figure out how to sell its own services I'm not sure the powers that be are capable of getting the job done ever. IBM's own leadership seems incompatible with the state of the world.

IBM basically bought a ton of service contracts for companies all over the world. This is exactly what the suits want: reliable cash streams without a lot of that pesky development stuff.

IMHO this is perilous for RHEL. It would be very easy for IBM to fire most of the developers and just latch on to the enterprise services stuff to milk it till it's dry.

skizzerz , Wise, Aged Ars Veteran et Subscriptor 6 hours ago
toturi wrote:
I can only see this as a net positive - the ability to scale legacy mainframes onto "Linux" and push for even more security auditing.

I would imagine the RHEL team will get better funding but I would be worried if you're a centos or fedora user.

I'm nearly certain that IBM's management ineptitude will kill off Fedora and CentOS (or at least severely gimp them compared to how they currently are), not realizing how massively important both of these projects are to the core RHEL product. We'll see RHEL itself suffer as a result.

I normally try to understand things with an open mindset, but in this case, IBM has had too long of a history of doing things wrong for me to trust them. I'll be watching this carefully and am already prepping to move off of my own RHEL servers once the support contract expires in a couple years just in case it's needed.

Iphtashu Fitz , Ars Scholae Palatinae 6 hours ago Popular
50me12 wrote:
Will IBM even know what to do with them?

My previous job (6+ years ago now) was at a university that was rather heavily invested in IBM for a high performance research cluster. It was something around 100 or so of their X-series blade servers, all of which were running Red Hat Linux. It wouldn't surprise me if they decided to acquire Red Hat in large part because of all these sorts of IBM systems that run Red Hat on them.

TomXP411 , Ars Tribunus Angusticlavius 6 hours ago Popular
Iphtashu Fitz wrote:
50me12 wrote:
Will IBM even know what to do with them?

My previous job (6+ years ago now) was at a university that was rather heavily invested in IBM for a high performance research cluster. It was something around 100 or so of their X-series blade servers, all of which were running Red Hat Linux. It wouldn't surprise me if they decided to acquire Red Hat in large part because of all these sorts of IBM systems that run Red Hat on them.

That was my thought. IBM wants to own an operating system again. With AIX being relegated to obscurity, buying Red Hat is simpler than creating their own Linux fork.

anon_lawyer , Wise, Aged Ars Veteran 6 hours ago Popular
Valuing Red Hat at $34 billion means valuing it at more than 1/4 of IBM's current market cap. From my perspective this tells me IBM is in even worse shape than has been reported.
dmoan , Ars Centurion 6 hours ago
I am not your friend wrote:
I just can't comprehend that price. Cloud has a rich future, but I didn't even know Red Hat had any presence there, let alone $35 billion worth.

Red Hat made $258 million in income last year, so they paid over 100 times its net income. That's a crazy valuation here...
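
A quick sanity check of that multiple, using only the figures quoted in this thread ($34B purchase price, $258M net income) -- a back-of-the-envelope sketch, not audited financials:

# Rough earnings multiple implied by the deal, per the figures above
deal_price = 34e9    # $34 billion purchase price
net_income = 258e6   # Red Hat net income, ~$258 million
print(f"Implied price/earnings multiple: {deal_price / net_income:.0f}x")

That prints roughly 132x -- the "over 100 times net income" the commenter is objecting to, versus the roughly 52x P/E that Red Hat already traded at before the deal.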

[Oct 30, 2018] I have worked at IBM 17 years and have worried about being laid off for about 11 of them. Morale is in the toilet. Bonuses for the rank and file are in the under 1% range while the CEO gets millions

Notable quotes:
"... Adjusting for inflation, I make $6K less than I did my first day. My group is a handful of people as at least 1/2 have quit or retired. To support our customers, we used to have several people, now we have one or two and if someone is sick or on vacation, our support structure is to hope nothing breaks. ..."
Oct 30, 2018 | features.propublica.org

Buzz , Friday, March 23, 2018 12:00 PM

I've worked there 17 years and have worried about being laid off for about 11 of them. Morale is in the toilet. Bonuses for the rank and file are in the under 1% range while the CEO gets millions. Pay raises have been nonexistent or well under inflation for years.

Adjusting for inflation, I make $6K less than I did my first day. My group is a handful of people as at least 1/2 have quit or retired. To support our customers, we used to have several people, now we have one or two and if someone is sick or on vacation, our support structure is to hope nothing breaks.

We can't keep millennials because of pay, benefits and the expectation of being available 24/7 because we're shorthanded. As the unemployment rate drops, more leave to find a different job, leaving the old people as they are less willing to start over with pay, vacation, moving, selling a house, pulling kids from school, etc.

The younger people are generally less likely to be willing to work as needed on off hours or to pull work from a busier colleague.

I honestly have no idea what the plan is when the people who know what they are doing start to retire. We are way top-heavy with 30-40 year guys who are on their way out, very few of the 10-20 year guys due to hiring freezes, and we can't keep new people past 2-3 years. It's like our support business model is designed to fail.

[Oct 30, 2018] Will systemd become standard on mainframes as well?

It will be interesting to see what happens in any case.
Oct 30, 2018 | theregister.co.uk

Doctor Syntax , 1 day

So now it becomes Blue Hat. Will systemd become standard on mainframes as well?
DCFusor , 15 hrs
@Doctor

Maybe we get really lucky and they RIF Lennart Poettering or he quits? I hear IBM doesn't tolerate prima donnas and cults of personality quite as much as RH?

Anonymous Coward , 15 hrs
I hear IBM doesn't tolerate prima donnas and cults of personality quite as much as RH?

Quite the contrary. IBM is run and managed by prima donnas and personality cults.

Waseem Alkurdi , 18 hrs
Re: Poettering

OS/2 and Poettering? Best joke I've ever heard!

(It'd be interesting if somebody locked them both up in an office and see what happens!)

Glen Turner 666 , 16 hrs
Re: Patents

IBM already had access to Red Hat's patents, including for patent defence purposes. Look up "Open Invention Network".

This acquisition is about: (1) IBM needing growth, or at least a plausible scenario for growth. (2) Red Hat wanting an easy expansion of its sales channels, again for growth. (3) Red Hat stockholders being given an offer they can't refuse.

This acquisition is not about: cultural change at IBM. Which is why the acquisition will 'fail'. The bottom line is that engineering matters at the moment (see: Google, Amazon), and IBM sacked their engineering culture across the past two decades. To be successful IBM need to get that culture back, and acquiring Red Hat gives IBM the opportunity to create a product-building, client-service culture within IBM. Except that IBM aren't taking the opportunity, so there's a large risk the reverse will happen -- the acquisition will destroy Red Hat's engineering- and service-oriented culture.

Anonymous Coward , 1 day
The kraken versus the container ship

This could be interesting: will the systemd kraken manage to wrap its tentacles around the big blue container ship and bring it to a halt, or will the container ship turn out to be well armed and fatally harpoon the kraken (causing much rejoicing in the rest of the Linux world)?

Sitaram Chamarty , 1 day
disappointed...

Honestly, this is a time for optimism: if they manage to get rid of Lennart Poettering, everything else will be tolerable!

dbtx , 1 day
*if*

you can change the past so that a "proper replacement" isn't automatically expected to do lots of things that systemd does. That damage is done. We got "better is worse" and enough people liked it -- good luck trying to go back to "worse is better"

tfb , 13 hrs
I presume they were waiting to see what happened to Solaris. When Oracle bought Sun (presumably the only other company who might have bought them was IBM) there were really three enterprise unixoid platforms: Solaris, AIX and RHEL (there were some smaller ones and some which were clearly dying like HPUX). It seemed likely at the time, but not yet certain, that Solaris was going to die (I worked for Sun at the time this happened and that was my opinion anyway). If Solaris did die, then if one company owned both AIX and RHEL then that company would own the enterprise unixoid market. If Solaris didn't die on the other hand then RHEL would be a lot less valuable to IBM as there would be meaningful competition. So, obviously, they waited to see what would happen.

Well, Solaris is perhaps not technically quite dead yet, but it is certainly moribund, and IBM now owns both AIX and RHEL, and hence the enterprise unixoid market. As an interesting side note, unless Oracle can keep Solaris on life support, this means that IBM all but owns Oracle's OS as well ('Oracle Linux' is RHEL with, optionally, some of their own additions to the kernel).

[Oct 30, 2018] IBM's Red Hat acquisition is a 'desperate deal,' says analyst

Notable quotes:
"... "It's a desperate deal by a company that missed the boat for the last five years," the managing director at BTIG said on " Closing Bell ." "I'm not surprised that they bought Red Hat , I'm surprised that it took them so long. They've been behind the cloud eight ball." ..."
Oct 30, 2018 | www.cnbc.com
IBM's $34 billion acquisition of Red Hat is a last-ditch effort by IBM to play catch-up in the cloud industry, analyst Joel Fishbein told CNBC on Monday.

"It's a desperate deal by a company that missed the boat for the last five years," the managing director at BTIG said on " Closing Bell ." "I'm not surprised that they bought Red Hat , I'm surprised that it took them so long. They've been behind the cloud eight ball."

This is IBM's largest deal ever and the third-biggest tech deal in the history of the United States. IBM is paying more than a 60 percent premium for the software maker, but CEO Ginni Rometty told CNBC earlier in the day it was a "fair price."

[Oct 30, 2018] Sam Palmisano's now-infamous Roadmap 2015 ran the company into the ground through its maniacal focus on increasing EPS at any and all costs. Literally.

Oct 30, 2018 | features.propublica.org

GoingGone , Friday, April 13, 2018 6:06 PM

As a 25yr+ vet of IBM, I can confirm that this article is spot-on true. IBM used to be a proud and transparent company that clearly demonstrated that it valued its employees as much as it did its stock performance or dividend rate or EPS, simply because it is good for business. Those principles helped make and keep IBM atop the business world as the most trusted international brand and business icon of success for so many years. In 2000, all that changed when Sam Palmisano became the CEO. Palmisano's now infamous "Roadmap 2015" ran the company into the ground through its maniacal focus on increasing EPS at any and all costs. Literally.

Like, its employees, employee compensation, benefits, skills, and education opportunities. Like, its products, product innovation, quality, and customer service.

All of which resulted in the devastation of its technical capability and competitiveness, employee engagement, and customer loyalty. Executives seemed happy enough as their compensation grew nicely with greater financial efficiencies, and Palmisano got a sweet $270M+ exit package in 2012 for a job well done.

The new CEO, Ginni Rometty has since undergone a lot of scrutiny for her lack of business results, but she was screwed from day one. Of course, that doesn't leave her off the hook for the business practices outlined in the article, but what do you expect: she was hand picked by Palmisano and approved by the same board that thought Palmisano was golden.

People (and companies) who have nothing to hide, hide nothing. People (and companies) who are proud of their actions, share it proudly. IBM believes it is being clever and outsmarting employment discrimination laws and saving the company money while retooling its workforce. That may end up being so (but probably won't), but it's irrelevant. Through its practices, IBM has lost the trust of its employees, customers, and ironically, stockholders (just ask Warren Buffett), who are the very(/only) audience IBM was trying to impress. It's just a huge shame.

HiJinks , Sunday, March 25, 2018 3:07 AM
I agree with many who state the report is well done. However, this crap started in the early 1990s. In the late 1980s, IBM offered decent packages to retirement-eligible employees. For those close to retirement age, it was a great deal: 2 weeks' pay for every year of service (capped at 26 years) plus being kept on to perform their old job for 6 months (while collecting retirement, until the government stepped in and put a halt to it). Nobody eligible was forced to take the package (at least not to general knowledge).

The last decent package was in 1991 -- similar, but without the option to come back for 6 months. However, in 1991, those offered the package were basically told take it or else. Anyone with 30 years of service, or 15 years and age 55, was eligible, and anyone within 5 years of eligibility could "bridge" the difference. They also had to sign a form stating they would not sue IBM in order to get up to a year's pay -- not taxable per IRS documents back then (but IBM took out the taxes anyway and the IRS refused to return them; an employee group had hired lawyers to get the taxes back, a failed attempt which only enriched the lawyers).

After that, things went downhill and accelerated when Gerstner took over. After 1991, there were still some workers who could get 30 years or more, but that was more the exception. I suspect the way the company has been run the past 25 years or so has the Watsons spinning in their graves. Gone are the 3 core beliefs: "Respect for the individual", "Service to the customer" and "Excellence must be a way of life".
ArnieTracey , Saturday, March 24, 2018 7:15 PM
IBM's policy reminds me of the "If a citizen = 30 y.o., then mass execute such, else if they run then hunt and kill them one by one" social policy in the Michael York movie "Logan's Run."

From Wiki, in case you don't know: "It depicts a utopian future society on the surface, revealed as a dystopia where the population and the consumption of resources are maintained in equilibrium by killing everyone who reaches the age of 30. The story follows the actions of Logan 5, a "Sandman" who has terminated others who have attempted to escape death, and is now faced with termination himself."

Jr Jr , Saturday, March 24, 2018 4:37 PM
Corporate loyalty has been gone for 25 years. This isn't surprising. But this age discrimination is blatantly illegal.

[Oct 30, 2018] This might just be the deal that kills IBM because there's no way that they don't do a writedown of 90% of the value of this acquisition within 5 years.

Oct 30, 2018 | arstechnica.com

afidel, 2018-10-29T13:17:22-04:00

tipoo wrote:
Kilroy420 wrote:
Perhaps someone can explain this... Red Hat's revenue and assets barely total about $5B. Even factoring in market share and capitalization, how the hey did IBM come up with $34B cash being a justifiable purchase price??

Honestly, why would Red Hat have said no?

You don't trade at your earnings, you trade at your share price, which for Red Hat and many other tech companies can be quite high on Price/Earnings. They were trading at 52 P/E. Investors factor in a bunch of things involving future growth, and particularly for any companies in the cloud can quite highly overvalue things.

A 25 year old company trading at a P/E of 52 was already overpriced, buying at more than 2x that is insane. This might just be the deal that kills IBM because there's no way that they don't do a writedown of 90% of the value of this acquisition within 5 years.

[Oct 30, 2018] The institutionalized stupidity of IBM brass is connected with the desire to get bonuses

Oct 30, 2018 | arstechnica.com

3 hours ago, afidel wrote:
Kilroy420 wrote: Perhaps someone can explain this... Red Hat's revenue and assets barely total about $5B. Even factoring in market share and capitalization, how the hey did IBM come up with $34B cash being a justifiable purchase price??

Honestly, why would Red Hat have said no?

You don't trade at your earnings, you trade at your share price, which for Red Hat and many other tech companies can be quite high on Price/Earnings. They were trading at 52 P/E. Investors factor in a bunch of things involving future growth, and particularly for any companies in the cloud can quite highly overvalue things.
A 25 year old company trading at a P/E of 52 was already overpriced, buying at more than 2x that is insane. This might just be the deal that kills IBM because there's no way that they don't do a writedown of 90% of the value of this acquisition within 5 years.

OK. I did 10 years at IBM Boulder..

The problem isn't the purchase price or the probable write-down later.

The problem is going to be with the executives above it. One thing I noticed at IBM is that the executives needed to put their own stamp on operations to justify their bonuses. We were on a 2 year cycle of execs coming in and saying "Whoa.. things are too centralized, we need to decentralize", then the next exec coming in and saying "things are too decentralized, we need to centralize".

No IBM exec will get a bonus if they are over RedHat and exercise no authority over it. "We left it alone" generates nothing for the PBC. If they are in the middle of a re-org, then the specific metrics used to calculate their bonus can get waived. (Well, we took an unexpected hit this year on sales because we are re-orging to better optimize our resources). With that P/E, no IBM exec is going to get a bonus based on metrics. IBM execs do *not* care about what is good for IBM's business. They are all about gaming the bonuses. Customers aren't even on the list of things they care about.

I am reminded of a coworker who quit in frustration back in the early 2000's due to just plain bad management. At the time, IBM was working on Project Monterey. This was supposed to be a Unix system across multiple architectures. My coworker sent his resignation out to all hands basically saying "This is stupid. We should just be porting Linux". He even broke down the relative costs. Billions for Project Monterey vs thousands for a Linux port. Six months later, we got an email from on-high announcing this great new idea that upper management had come up with: it would be far cheaper to just support Linux than write a new OS. You'd think that would be a great thing, but the reality is that all it did was create the AIX 5L family, which was AIX 5 with an additional CD called Linux ToolBox, which was loaded with a few Linux programs ported to a specific version of AIX, but never kept current. IBM can make even great decisions into bad decisions.

In May 2007, IBM announced the transition to LEAN. Sounds great, but this LEAN was not on the manufacturing side of the equation. It was in e-Business under Global Services. The new procedures were basically call center operations. Now, prior to this, IBM would have specific engineers for specific accounts. So, Major Bank would have that AIX admin, that Sun admin, that windows admin, etc. They knew who to call and those engineers would have docs and institutional knowledge of that account. During the LEAN announcement, Bob Moffat described the process. Accounts would now call an 800 number and the person calling would open a ticket. This would apply to *any* work request as all the engineers would be pooled and whoever had time would get the ticket. So, reset a password - ticket. So, load a tape - ticket. Install 20 servers - ticket.

Now, the kicker to this was that the change was announced at 8AM and went live at noon. IBM gave their customers who represented over $12 Billion in contracts 4 *hours* notice that they were going to strip their support teams and treat them like a call center. (I will leave it as an exercise to the reader to determine if they would accept that kind of support after spending hundreds of millions on a support contract).

(The pilot program for the LEAN process had its call center outsourced overseas, if that helps you try to figure out why IBM wanted to get rid of dedicated engineers and move to a call-center operation).

[Oct 30, 2018] Presumably the acquisition will have to jump various regulatory hurdles before it is set in stone. If it is successful, Red Hat will be absorbed into IBM's Hybrid Cloud unit

IBM will have to work hard to overcome RH customers' natural (and IMHO largely justified) suspicion of Big Blue.
Notable quotes:
"... focused its workforce on large "hub" cities where graduate engineers prefer to live – New York City, San Francisco, and Austin, in the US for instance – which allowed it to drive out older, settled staff who refused to move closer to the office. ..."
"... The acquisition of Sun Microsystems by Oracle comes to mind there. ..."
"... When Microsoft bought out GitHub, they made a promise to let it run independently and now IBM's given a similar pledge in respect of RedHat. They ought to abide by that promise because the alternatives are already out there in the form of Ubuntu and SUSE Linux Enterprise Server. ..."
Oct 30, 2018 | theregister.co.uk

...That transformation has led to accusations of Big Blue ditching its older staff for newer workers to somehow spark some new energy within it. It also cracked down on remote employees, and focused its workforce on large "hub" cities where graduate engineers prefer to live – New York City, San Francisco, and Austin, in the US for instance – which allowed it to drive out older, settled staff who refused to move closer to the office.

Ledswinger, 1 day

Easy, the same way they deal with their existing employees. It'll be the IBM way or the highway. We'll see the usual repetitive and doomed IBM strategy of brutal downsizings accompanied by the earnest IBM belief that a few offshore wage slaves can do as good a job as anybody else.

The product and service will deteriorate, pricing will have to go up significantly to recover the tens of billions of dollars of "goodwill" that IBM have just splurged, and in five years time we'll all be saying "remember how the clueless twats at IBM bought Red Hat and screwed it up?"

One of IBM's main problems is lack of revenue, and yet Red Hat only adds about $3bn to their revenue. As with most M&A the motivators here are a surplus of cash and hopeless optimism, accompanied by the suppression of all common sense.

Well done Gini, it's another winner.

TVU, 12 hrs
Re: "they will buy a lot of talent."

"What happens over the next 12 -- 24 months will be ... interesting. Usually the acquisition of a relatively young, limber outfit with modern product and service by one of the slow-witted traditional brontosaurs does not end well"

The acquisition of Sun Microsystems by Oracle comes to mind there.

When Microsoft bought out GitHub, they made a promise to let it run independently and now IBM's given a similar pledge in respect of RedHat. They ought to abide by that promise because the alternatives are already out there in the form of Ubuntu and SUSE Linux Enterprise Server.

[Oct 30, 2018] About time: Red Hat support was bad already

Oct 30, 2018 | theregister.co.uk
Anonymous Coward, 15 hrs

and the support goes bad already...

Someone (RH staff, or one of the Gods) is unhappy with the deal: Red Hat's support site has been down all day.

https://status.redhat.com

[Oct 30, 2018] Purple rain will fall from that blue cloud

Notable quotes:
"... IBM was already "working on Linux." For decades. With multiple hundreds of full-time Linux developers--more than any other corporate contributor--around the world. And not just on including IBM-centric function into Linux, but on mainstream community projects. There have been lots of Linux people in IBM since the early-90's. ..."
"... From a customer standpoint the main thing RedHat adds is formal support. There are still a lot of companies who are uncomfortable deploying an OS that has product support only from StackExchange and web forums ..."
"... You would do better to look at the execution on the numbers - RedHat is not hitting it's targets and there are signs of trouble. These two businesses were both looking for a prop and the RedHat shareholders are getting out while the business is near it's peak. ..."
Oct 30, 2018 | theregister.co.uk

Anonymous Coward , 6 hrs

Re: So exactly how is IBM going to tame employees that are used to going to work in shorts...

Purple rain will fall from that cloud...

asdf , 5 hrs
Wow bravo brutal and accurate. +1
I can't believe its not butter , 1 day
Redhat employees - get out now

You're utterly fucked. Run away now as you have zero future in IBM.

Anonymous Coward , 1 day
Re: Redhat employees - get out now

You're utterly fucked. Run away now as you have zero future in IBM.

That was my immediate thought upon hearing this. I've already worked for IBM, swore never to do it again. Time to dust off & update the resume.

Jove , 19 hrs
Re: Redhat employees - get out now

Another major corporate splashes out a fortune on a star business only to find the clash of cultures destroys substantial value.

W@ldo , 1 day
Re: At least is isnt oracle or M$

Sort of the lesser of evils -- do you want to be shot or hung by the neck? No good choice for this acquisition.

Anonymous Coward , 1 day
Re: At least is isnt oracle or M$

honestly, MS would be fine. They're big into Linux and open source and still a heavily pro-engineering company.

Companies like Oracle and IBM are about nothing but making money. Which is why they're both going down the tubes. No-one who doesn't already have them goes near them.

Uncle Ron , 14 hrs
Re: At least is isnt oracle or M$

IBM was already "working on Linux." For decades. With multiple hundreds of full-time Linux developers--more than any other corporate contributor--around the world. And not just on including IBM-centric function into Linux, but on mainstream community projects. There have been lots of Linux people in IBM since the early-90's.

Orv , 9 hrs
Re: At least it is not Oracle or M$

The OS is from Linus and chums, Redhat adds a few storage bits and some Redhat logos and erm.......

From a customer standpoint the main thing RedHat adds is formal support. There are still a lot of companies who are uncomfortable deploying an OS that has product support only from StackExchange and web forums. This market is fairly insensitive to price, which is good for a company like RedHat. (Although there has been an exodus of higher education customers as the price has gone up; like Sun did back in the day, they've been squeezing out that market. Two campuses I've worked for have switched wholesale to CentOS.)

Jove , 11 hrs
RedHat take-over IBM - @HmmmYes

"Just compare the share price of RH v IBM"

You would do better to look at the execution on the numbers - RedHat is not hitting its targets and there are signs of trouble. These two businesses were both looking for a prop and the RedHat shareholders are getting out while the business is near its peak.

[Oct 30, 2018] Pay your licensing fee

Oct 30, 2018 | linux.slashdot.org

red crab (1044734), Monday October 29, 2018 @12:10AM (#57552797)

Re:Pay your licensing fee (Score: 4, Interesting)

Footnote: $699 License Fee applies to your systemP server running RHEL 7 with 4 cores activated for one year.

To activate additional processor cores on the systemP server, a fee of $199 per core applies. systemP offers a new Semi-Activation Mode now. In systemP Semi-Activation Mode, you will be only charged for all processor calls exceeding 258 MIPS, which will be processed by additional semi-activated cores on a pro-rata basis.

RHEL on systemP servers also offers a Partial Activation Mode, where additional cores can be activated in Inhibited Efficiency Mode.

To know more about Semi-Activation Mode, Partial Activation Mode and Inhibited Efficiency Mode, visit http://www.ibm.com/systemp [ibm.com] or contact your IBM systemP Sales Engineer.

[Oct 30, 2018] $34B? I was going to say this is the biggest tech acquisition ever, but it's second after Dell buying EMC

Notable quotes:
"... I'm not too sure what IBM is going to do with that, but congrats to whoever is getting the money... ..."
Oct 30, 2018 | theregister.co.uk

ratfox , 1 day

$34B? I was going to say this is the biggest tech acquisition ever, but it's second after Dell buying EMC. I'm not too sure what IBM is going to do with that, but congrats to whoever is getting the money...

[Oct 30, 2018] "OMG" comments rests on three assumptions: Red Hat is 100% brilliant and speckless, IBM is beyond hope and unchangeable, this is a hostile takeover

Notable quotes:
"... But I do beg to differ about the optimism, because, as my boss likes to quote, "culture eats strategy for breakfast". ..."
"... So the problem is that IBM have bought a business whose competencies and success factors differ from the IBM core. Its culture is radically different, and incompatible with IBM. ..."
"... Many of its best employees will be hostile to IBM ..."
"... And just like a gas giant, IBM can be considered a failed star ..."
Oct 30, 2018 | theregister.co.uk

LeoP , 1 day

Less pessimistic here

Quite a lot of the "OMG" moments rest on three assumptions:

  • Red Hat is 100% brilliant and speckless
  • IBM is beyond hope and unchangeable
  • This is a hostile takeover

I beg to differ on all counts. Call me beyond hope myself because of my optimism, but I do think what IBM bought most is a way to run a business. RH is just too big to be borged into a failing giant without leaving quite a substantial mark.

Ledswinger , 1 day
Re: Less pessimistic here

I beg to differ on all counts.

Note: I didn't downvote you, it's a valid argument. I can understand why you think that, because I'm in the minority that thinks the IBM "bear" case is overdone. They've been cleaning their stables for some years now, and that means dropping quite a lot of low-margin business, and seeing the topline shrink. That attracts a lot of criticism, although it is good business sense.

But I do beg to differ about the optimism, because, as my boss likes to quote, "culture eats strategy for breakfast".

And (speaking as a strategist) that's 100% true, and 150% true when doing M&A.

So the problem is that IBM have bought a business whose competencies and success factors differ from the IBM core. Its culture is radically different, and incompatible with IBM.

Many of its best employees will be hostile to IBM. RedHat will be borged, and it will leave quite a mark. A bit like Shoemaker-Levy 9 did on Jupiter. Likewise there will be lots of turbulence, but it won't endure, and at the end of it all the gas giant will be unchanged (just a bit poorer). And just like a gas giant, IBM can be considered a failed star.

[Oct 30, 2018] If IBM buys Redhat then what will happen to CentOS?

Notable quotes:
"... As long as IBM doesn't close-source RH stuff -- most of which they couldn't if they wanted to -- CentOS will still be able to do builds of it. The only thing RH can really enforce control over is the branding and documentation. ..."
"... Might be a REALLY good time to fork CentOS before IBM pulls an OpenSolaris on it. Same thing with Fedora. ..."
"... I used to be a Solaris admin. Now I am a linux admin in a red hat shop. Sun was bought by Oracle and more or less died a death. Will the same happen now? I know that Sun and RH are _very_ differeht beasts but I am thinking that now is the time to stop playing on the merry go round called systems administration. ..."
Oct 30, 2018 | theregister.co.uk

Orv , 9 hrs

Re: If IBM buys Redhat then what will happen to CentOS?

If IBM buys Redhat then what will happen to CentOS?

As long as IBM doesn't close-source RH stuff -- most of which they couldn't if they wanted to -- CentOS will still be able to do builds of it. The only thing RH can really enforce control over is the branding and documentation.

Anonymous Coward , 9 hrs
Re: "they will buy a lot of talent."

"I found that those with real talent that matched IBM needs are well looked after."

The problem is that when you have served your need, you tend to get downsized, as the expectation is that the cheaper offshore bodies can simply take over support etc. after picking up the skills they need over a few months!!!

This 'Blue meets Red and assimilates' will be very interesting to watch and will need lots and lots of popcorn on hand!!!

:)

FrankAlphaXII , 1 day
Might be a REALLY good time to fork CentOS before IBM pulls an OpenSolaris on it. Same thing with Fedora.

Kind of sad really, when I used Linux CentOS and Fedora were my go-to distros.

Missing Semicolon , 1 day
Goodbye Centos

Centos is owned by RedHat now. So why on earth would IBM bother keeping it?

Plus, of course, all the support and coding will be done in India now.....

Doctor Syntax , 17 hrs
Re: Goodbye Centos

"Centos is owned by RedHat now."

RedHat is The Upstream Vendor of Scientific Linux. What happens to them if IBM turn nasty?

Anonymous Coward , 16 hrs
Re: Scientific Linux

Scientific Linux is a good idea in theory, dreadful in practice.

The idea of a research/academic-software-focused distro is a good one: unfortunately (I say unfortunately, but it's certainly what I would do myself), increasing numbers of researchers are now developing their pet projects on Debian or Ubuntu, and therefore often only make .deb packages available.

Anyone who has had any involvement in research software knows that if you find yourself in the position of needing to compile someone else's pet project from source, you are often in for an even more bumpy ride than usual.

And the lack of compatible RPM packages just encourages more and more researchers to go where the packages (and the free-ness) are, namely Debian and friends, which continue to gather momentum, while Red Hat continues to stagnate.

Red Hat may be very stable for running servers (as long as you don't need anything reasonably new (not bleeding edge, but at least newer than three years old)), but I have never really seen the attraction in it myself (especially as there isn't much of a "community" feeling around it, as its commercial focus gets in the way).

Roger Kynaston , 1 day
another buy out of my bread and butter

I used to be a Solaris admin. Now I am a Linux admin in a Red Hat shop. Sun was bought by Oracle and more or less died a death. Will the same happen now? I know that Sun and RH are _very_ different beasts, but I am thinking that now is the time to stop playing on the merry-go-round called systems administration.

Anonymous Coward , 1 day
Don't become a Mad Hatter

Dear Red Hatters, welcome to the world of battery hens, all clucking and clicking away to produce the elusive golden egg while the axe looms over their heads. Even if your mental health survives, you will become chicken feed by the time you are middle aged. IBM doesn't have a heart; get out while you can. Follow the example of IBMers in Australia who are jumping ship in their droves, leaving behind a crippled, demoralised workforce. Don't become a Mad Hatter.

[Oct 30, 2018] Kind of surprising since IBM was aligned with SUSE for so many years.

Oct 30, 2018 | arstechnica.com

tgx, Ars Centurion, 5 hours ago

Kind of surprising since IBM was aligned with SUSE for so many years.

IBM is an awful software company. OS/2, Lotus Notes, AIX all withered on the vine.

Doesn't bode well for RedHat.

[Oct 30, 2018] Hello, we are mandating some new policies, git can no longer be used, we must use IBM synergy software with rational rose

Oct 30, 2018 | arstechnica.com

lordofshadows , Smack-Fu Master, in training 6 hours ago

Hello, we are mandating some new policies, git can no longer be used, we must use IBM synergy software with rational rose.

All open source software is subject to corporate approval first, we know we know, to help streamline this process we have approved GNU CC and are looking into this Mak file program.

We are very pleased with systemd, we wish to further expand its DHCP capabilities and also integrate IBM analytics -- We are also going through a rebranding operation as we feel the color red is too jarring for our customers, we will now be known as IBM Rational Hat and will only distribute through our retail channels to boost sales -- Look for us at Walmart, Circuit City, and Staples

[Oct 30, 2018] And RH customers will want to check their contracts...

Oct 30, 2018 | arstechnica.com

CousinSven , Smack-Fu Master, in training et Subscriptor 4 hours ago New Poster

IBM are paying around 12x annual revenue for Red Hat which is a significant multiple so they will have to squeeze more money out of the business somehow. Either they grow customers or they increase margins or both.

IBM had little choice but to do something like this. They are in a terminal spiral thanks to years of bad leadership. The confused billing of the purchase smacks of rush; so far I have seen Red Hat described as a cloud company, an infosec company, an open source company...

So IBM are buying Red Hat as a last chance bid to avoid being put through the PE threshing machine. Red Hat get a ludicrous premium so will take the money.

And RH customers will want to check their contracts...

[Oct 30, 2018] IBM To Buy Red Hat, the Top Linux Distributor, For $34 Billion

Notable quotes:
"... IBM license fees are predatory. Plus they require you to install agents on your servers for the sole purpose of calculating use and licenses. ..."
"... IBM exploits workers by offshoring and are slow to fix bugs and critical CVEs ..."
Oct 30, 2018 | linux.slashdot.org

Anonymous Coward, Sunday October 28, 2018 @03:34PM (#57550555)

Re: Damn. (Score: 5, Insightful)

IBM license fees are predatory. Plus they require you to install agents on your servers for the sole purpose of calculating use and licenses.

IBM exploits workers by offshoring and are slow to fix bugs and critical CVEs (WAS and DB2 especially)

The Evil Atheist (2484676), Sunday October 28, 2018 @04:13PM (#57550755)
Re:Damn. (Score: 4, Insightful)

IBM buys a company, fires all the transferred employees and hopes they can keep selling their acquired software without further development. If they were serious, they'd have improved their own Linux contribution efforts.

But they literally think they can somehow keep selling software without anyone with knowledge of the software, or for transferring skills to their own employees.

They literally have no interest in actual software development. It's all about sales targets.

Anonymous Coward, Monday October 29, 2018 @01:00AM (#57552963)
Re:Damn. (Score: 3, Informative)

My advice to Red Hat engineers is to get out now. I was an engineer at a company that was acquired by IBM. I was fairly senior so I stayed on and ended up retiring from the IBM, even though I hated my last few years working there. I worked for several companies during my career, from startups to fortune 100 companies. IBM was the worst place I worked by far. Consider every bad thing you've ever heard about IBM. I've heard those things too, and the reality was much worse.

IBM hasn't improved their Linux contribution efforts because it wouldn't know how. It's not for lack of talented engineers. The management culture is simply pathological. No dissent is allowed. Everyone lives in fear of a low stack ranking and getting laid off. In the end it doesn't matter anyway. Eventually the product you work on that they originally purchased becomes unprofitable and they lay you off anyway. They've long forgotten how to develop software on their own. Don't believe me? Try to think of an IBM branded software product that they built from the ground up in the last 25 years that has significant market share. Development managers chase one development fad after another hoping to find the silver bullet that will allow them to continue the relentless cost cutting regime made necessary in order to make up revenue that has been falling consistently for over a decade now.

As far as I could tell, IBM is good at two things:

  1. Financial manipulation to disguise their shrinking revenue
  2. Buying software companies and mining them for value

Yes, there are still some brilliant people that work there. But IBM is just not good at turning ideas into revenue-producing products. They are nearly always unsuccessful when they try, and then they go out and buy a company that succeeded in bringing to market the kind of product that they tried and failed to build themselves.

They used to be good at customer support, but that is mainly lip service now. Just before I left the company I was tapped to deliver a presentation at a customer seminar. The audience did not care much about my presentation. The only thing they wanted to talk about was how much they had invested millions in re-engineering their business to use our software and now IBM appeared to be wavering in their long term commitment to supporting the product. It was all very embarrassing because I knew what they didn't, that the amount of development and support resources currently allocated to the product line were a small fraction of what they once were. After having worked there I don't know why anyone would ever want to buy a license for any of their products.

gtall (79522), Sunday October 28, 2018 @03:59PM (#57550691)
Re:A Cloudy argument. (Score: 5, Insightful)

So you are saying that IBM has been asleep at the wheel for the last 8 years. Buying Red Hat won't save them, IBM is IBM's enemy.

Aighearach (97333) writes:
Re: (Score: 3)

They're already one of the large cloud providers, but you don't know that because they only focus on big customers.

The Evil Atheist (2484676), Sunday October 28, 2018 @04:29PM (#57550829)
Re:A Cloudy argument. (Score: 5, Insightful)

IBM engineers aren't actually crappy. It's the fucking MBAs in management who have no clue about how to run a software development company. Their engineers will want to do good work, but management will worry more about headcount and sales.

The Evil Atheist (2484676), Sunday October 28, 2018 @03:57PM (#57550679)
Goodbye Redhat. (Score: 5, Insightful)

IBM acquisitions never go well. All companies acquired by IBM go through a process of "Blue washing", in which the heart and soul of the acquired company is ripped out, the body burnt, and the remaining ashes to be devoured and defecated by its army of clueless salesmen and consultants. It's a sad, and infuriating, repeated pattern. They no longer develop internal talent. They drive away the remaining people left over from the time when they still did develop things. They think they can just buy their way into a market or technology, somehow completely oblivious to the fact that their strategy of firing all their acquired employees/knowledge and hoping to sell software they have no interest in developing would somehow still retain customers. They literally could have just reshuffled and/or hired more developers to work on the kernel, but the fact they didn't shows they have no intention of actually contributing.

Nkwe (604125), Sunday October 28, 2018 @04:26PM (#57550819)
Cha-Ching (Score: 3)

Red Hat closed Friday at $116.68 per share, looks like the buy out is for $190. Not everyone will be unhappy with this. I hope the Red Hat employees that won't like the upcoming cultural changes have stock and options, it may soften the blow a bit.
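
For reference, the premium implied by those two share prices (my arithmetic from the figures in the comment above, as a rough sketch):

# Acquisition premium implied by the quoted share prices
friday_close = 116.68   # Red Hat close before the announcement, $/share
offer_price = 190.00    # IBM's per-share cash offer
print(f"Implied premium: {offer_price / friday_close - 1:.1%}")

That prints about 62.8%, consistent with the "more than a 60 percent premium" cited in the CNBC piece above.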

DougDot (966387) <dougr@parrot-farm.net>, Sunday October 28, 2018 @05:43PM (#57551189)
AIX Redux (Score: 5, Interesting)

Oh, good. Now IBM can turn RH into AIX while simultaneously suffocating whatever will be left of Redhat's staff with IBM's crushing, indifferent, incompetent bureaucracy.

This is what we call a lose - lose situation. Well, except for the president of Redhat, of course. Jim Whitehurst just got rich.

Tough Love (215404), Sunday October 28, 2018 @10:55PM (#57552583)
Re:AIX Redux (Score: 2)

Worse than Redhat's crushing, indifferent, incompetent bureaucracy? Maybe, but it's close.

ArchieBunker (132337), Sunday October 28, 2018 @06:01PM (#57551279)
Re:AIX Redux (Score: 5, Insightful)

Redhat is damn near AIX already. AIX had binary log files long before systemd.

Antique Geekmeister (740220), Monday October 29, 2018 @05:51AM (#57553587)
Re:Please God No (Score: 4, Informative)

The core CentOS leadership are now Red Hat employees. They're neither clear of, nor uninvolved in, this purchase.

alvinrod (889928), Sunday October 28, 2018 @03:46PM (#57550623)
Re:Please God No (Score: 5, Insightful)

Depends on the state. Non-compete clauses are unenforceable in some jurisdictions. IBM would want some of the people to stick around. You can't just take over a complex system from someone else and expect everything to run smoothly or know how to fix or extend it. Also, not everyone who works at Red Hat gets anything from the buyout unless they were regularly giving employees stock. A lot of people are going to want the stable paycheck of working for IBM instead of trying to start a new company.

However, some will inevitably get sick of working at IBM or end up being laid off at some point. If these people want to keep doing what they're doing, they can start a new company. If they're good at what they do, they probably won't have much trouble attracting some venture capital either.

wyattstorch516 ( 2624273 ) , Sunday October 28, 2018 @04:41PM ( #57550887 )
Re:Please God No ( Score: 2 )

Red Hat went public in 1999; they are far from being a start-up. They have acquired several companies themselves, so they are just as corporate as IBM, although significantly smaller.

Anonymous Coward , Sunday October 28, 2018 @05:10PM ( #57551035 )
Re:Please God No ( Score: 5 , Funny)

Look on the bright side: Poettering works for Red Hat. (Reposting because apparently Poettering has mod points.)

Anonymous Coward , Monday October 29, 2018 @09:24AM ( #57554491 )
Re: It all ( Score: 5 , Interesting)

My feelings exactly. As a former employee for both places, I see this as the death knell for Red Hat. Not immediately, not quickly, but eventually Red Hat's going to go the same way as every other company IBM has acquired.

Red Hat's doom (again, all IMO) started about 10 years ago or so when Matt Szulik left and Jim Whitehurst came on board. Nothing against Jim, but he NEVER seemed to grasp what F/OSS was about. Hell, when he came onboard he wouldn't (and never did) use Linux at all: instead he used a Mac, and so did the rest of the EMT (executive management team) over time. What company is run by people who refuse to use its own product, except one that doesn't have faith in it? The person on top of the BRAND AND PEOPLE team "needed" an iPad, she said, to do her work (quoting a friend in the IT dept who was asked to get it and set it up for her).

The moment they (the EMTs) wanted to move away from using F/OSS internally and began outsourcing huge aspects of our infrastructure (like no longer using F/OSS for email and instead contracting with GOOGLE to do our email, calendaring and document sharing) is when, again for me, the plane started to spiral. How can we sell to OUR CUSTOMERS the idea that "Red Hat and F/OSS will suit all of your corporate needs" when, again, the people running the ship didn't think it would work for OURS? We had no special email or calendar needs, and if we did, WE WERE THE LEADERS OF OPEN SOURCE, couldn't we make it do what we want? Hell, I was on an internal (but on our own time) team whose goal was to take needs like this and incubate them with an open source solution to meet that need.

But the EMTs just didn't want to do that. They were too interested in what was "the big thing" (at the time Open Shift was where all of our hiring and resources were being poured) to pay attention to the foundations that were crumbling.

And now, here we are. Red Hat is being subsumed by the largest closed-source company on the planet, one who does their job sub-optimally (to be nice). This is the end of Red Hat as we know it. Within 5-7 years Red Hat will go the way of Tivoli and Lotus: it will be a brand name that lacks any of what made the original company what it was when it was acquired.

[Oct 30, 2018] Why didn't IBM do the same as Oracle?

Notable quotes:
"... Just fork it, call it Blue Hat Linux, and rake in those sweet support dollars. ..."
Oct 30, 2018 | arstechnica.com

Twilight Sparkle , Ars Scholae Palatinae 4 hours ago

Why would you pay even 34 dollars for software worth $0?

Just fork it, call it Blue Hat Linux, and rake in those sweet support dollars.

[Oct 30, 2018] How do you say "Red Hat" in Hindi??

Oct 30, 2018 | theregister.co.uk


[Oct 30, 2018] IBM must be borrowing a lot of cash to fund the acquisition. At last count it had about $12B in the bank. Layoffs are imminent in such a situation, as elimination of headcount is one of the ways to justify the price paid

Oct 30, 2018 | theregister.co.uk

Anonymous Coward 1 day

Borrowing $ at low rates

IBM must be borrowing a lot of cash to fund the acquisition. At last count it had about $12B in the bank... https://www.marketwatch.com/investing/stock/ibm/financials/balance-sheet.

Unlike everyone else - https://www.thestreet.com/story/14513643/1/apple-microsoft-google-are-sitting-on-crazy-amounts-of-cash.html

Jove
Over-paid ...

Looking at the Red Hat numbers, I would not want to be an existing IBM share-holder this morning; both companies missing market expectations and in need of each other to get out of the rut.

It is going to take a lot of effort to make that 63% premium pay off. If it does not pay off pretty quickly, the existing RedHat leadership will be gone in 18 months.

P.S.

Apparently this is going to be financed by a mixture of cash and debt - increasing IBM's existing debt by nearly 50%. Possible credit rating downgrade on the way?

steviebuk
Goodbye...

...Red Hat.

No doubt IBM will scare off all the decent employees that make it what it is.

SecretSonOfHG
RH employees will start to jump ship

As soon as they have a minimum of experience with the terrible IBM change management processes, the many layers of bureaucracy and management involved, and the zero or negative value they add to anything at all.

IBM is a sinking ship, the only question being how long it will take to sink. Anyone thinking RH has any future other than to languish and disappear under IBM management is delusional. Or an IBM stock owner.

Jove
Product lines EoL ...

What gets the chop because it either does not fit in with the Hybrid-Cloud model, or does not generate sufficient margin?

cloth
But *Why* did they buy them?

I'm still trying to figure out "why" they bought Red Hat.

The only thing my not insignificant Google trawling can find me is that Red Hat sell to the likes of Microsoft and Google - now, that *is* interesting. IBM seem to be saying that they can't compete directly but they will sell upwards to their overlords - no?

Anonymous Coward

Re: But *Why* did they buy them?

As far as I can tell, it is to be part of IBM's cloud (or hybrid cloud) strategy. RH have become/are becoming increasingly successful in this arena.

If I was being cynical, I would also say that it will enable IBM to put the RH brand and appropriate open source soundbites on the table for deal-making and sales with or without the RH workforce and philosophy. Also, RH's subscription base must figure greatly here - a list of perfect customers ripe for "upselling".

bazza

Re: But *Why* did they buy them?

I'm fairly convinced that it's because of who uses RedHat. Certainly a lot of financial institutions do, they're in the market for commercial support (the OS cost itself is irrelevant). You can tell this by looking at the prices RedHat were charging for RedHat MRG - beloved by the high speed share traders. To say eye-watering, PER ANNUM too, is an understatement. You'd have to have got deep pockets before such prices became ignorable.

IBM is a business services company that just happens to make hardware and write OSes. RedHat has a lot of customers interested in business services. The ones I think who will be kicking themselves are Hewlett Packard (or whatever they're called these days).

tfb
Re: But *Why* did they buy them?

Because AIX and RHEL are the two remaining enterprise unixoid platforms (Solaris & HPUX are moribund and the other players are pretty small). Now both of those are owned by IBM: they now own the enterprise unixoid market.

theblackhand
Re: But *Why* did they buy them?

"I'm still trying to figure out "why" they bought Red hat."

What they say? It somehow helps them with cloud. Doesn't sound like much money there - certainly not enough to justify the significant increase in debt (~US$17B).

What could it be then? Well RedHat pushed up support prices and their customers didn't squeal much. A lot of those big enterprise customers moved from expensive hardware/expensive OS support over the last ten years to x86 with much cheaper OS support so there's plenty of scope for squeezing more.

[Oct 29, 2018] The D in Systemd stands for 'Dammmmit!'

Oct 29, 2018 | lxer.com

A security bug in Systemd can be exploited over the network to, at best, potentially crash a vulnerable Linux machine, or, at worst, execute malicious code on the box... Systemd creator Lennart Poettering has already published a security fix for the vulnerable component – this should be weaving its way into distros as we type.

[Oct 29, 2018] If I (hypothetically) worked for a company acquired by Big Blue

Oct 29, 2018 | arstechnica.com

MagicDot / Ars Praetorian reply 6 hours ago

If I (hypothetically) worked for a company acquired by Big Blue, I would offer the following:

  • Say hello to good salaries.
  • Say goodbye to perks, bonuses, and your company culture...oh, and you can't work from home anymore.

...but this is all hypothetical.

Belisarius , Ars Tribunus Angusticlavius et Subscriptor 5 hours ago

sviola wrote:
I can see what each company will get out of the deal and how they might potentially benefit. However, Red Hat's culture is integral to their success. Both company CEOs were asked today at an all-hands meeting about how they intend to keep the promise of remaining distinct and maintaining the RH culture without IBM suffocating it. Nothing is supposed to change (for now), but IBM has a track record of driving successful companies and open dynamic cultures into the ground. Many, many eyes will be watching this.

Hopefully IBM's current top brass will be smart, give some autonomy to Red Hat, and leave it to its own management style. Of course, that will only happen if they deliver IBM's goals (and that will probably mean high double-digit y2y growth) on a regular basis...

One thing is sure: they'll probably kill any overlapping products in the medium term (which of JBoss and WebSphere survives is an open bet).

(On a dream side note, maybe, just maybe they'll move some of their software development to Red Hat)

Good luck. Every CEO thinks they're the latest incarnation of Adam Smith, and they're all dying to be seen as "doing something." Doing nothing, while sometimes a really smart thing and oftentimes the right thing to do, isn't looked upon favorably these days in American business. IBM will definitely do something with Red Hat; it's just a matter of what.

[Oct 29, 2018] IBM to acquire software company Red Hat for $34 billion

Oct 29, 2018 | finance.yahoo.com

BIG BLUE

IBM was founded in 1911 and is known in the technology industry as Big Blue, a reference to its once ubiquitous blue computers. It has faced years of revenue declines, as it transitions its legacy computer maker business into new technology products and services. Its recent initiatives have included artificial intelligence and business lines around Watson, named after the supercomputer it developed.

To be sure, IBM is no stranger to acquisitions. It acquired cloud infrastructure provider Softlayer in 2013 for $2 billion, and the Weather Channel's data assets for more than $2 billion in 2015. It also acquired Canadian business software maker Cognos in 2008 for $5 billion.

Other big technology companies have also recently sought to reinvent themselves through acquisitions. Microsoft this year acquired open source software platform GitHub for $7.5 billion; chip maker Broadcom Inc agreed to acquire software maker CA Inc for nearly $19 billion; and Adobe Inc agreed to acquire marketing software maker Marketo for $5 billion.

One of IBM's main competitors, Dell Technologies Inc, made a big bet on software and cloud computing two years ago, when it acquired data storage company EMC for $67 billion. As part of that deal, Dell inherited an 82 percent stake in virtualization software company VMware Inc.

The deal between IBM and Red Hat is expected to close in the second half of 2019. IBM said it planned to suspend its share repurchase program in 2020 and 2021 to help pay for the deal.

IBM said Red Hat would continue to be led by Red Hat CEO Jim Whitehurst and Red Hat's current management team. It intends to maintain Red Hat's headquarters, facilities, brands and practices.

[Oct 28, 2018] In Desperation Move, IBM Buys Red Hat For $34 Billion In Largest Ever Acquisition

Oct 28, 2018 | www.zerohedge.com

In what can only be described as a desperation move, IBM announced that it would acquire Linux distributor Red Hat for a whopping $33.4 billion, its biggest purchase ever, as the company scrambles to catch up to the competition and to boost its flagging cloud sales. Still hurting from its Q3 earnings , which sent its stock tumbling to the lowest level since 2010 after Wall Street was disappointed by yet another quarter of declining revenue...

... IBM will pay $190 per share for the Raleigh, NC-based Red Hat, a 63% premium to the company's stock price, which closed at $116.68 on Friday, and is down 3% on the year.

In the statement, IBM CEO Ginni Rometty said that "the acquisition of Red Hat is a game-changer. It changes everything about the cloud market," but what the acquisition really means is that the company has thrown in the towel after years of accounting gimmicks and attempts to paint lipstick on a pig with the help of ever lower tax rates and pro forma addbacks, and instead will now "kitchen sink" its endless income statement troubles and non-GAAP adjustments in the form of massive purchase accounting tricks for the next several years.

While Rometty has been pushing hard to transition the 107-year-old company into modern businesses such as the cloud, AI and security software, the company's recent improvements have come largely from IBM's legacy mainframe business, rather than its so-called strategic imperatives. Meanwhile, revenues have continued to shrink: after a brief rebound, sales dipped once again this quarter, following an unprecedented run of 22 consecutive quarterly declines starting in 2012, when Rometty took over as CEO.

[Oct 26, 2018] RHCSA Rapid Track course with exam - RH200

The cost is $3,895 USD (Plus all applicable taxes) or 13 Training Units
Oct 08, 2018 | www.redhat.com
Course overview

On completion of the course materials, students should be prepared to take the Red Hat Certified System Administrator (RHCSA) exam. This version of the course includes the exam.

Note: This course builds on a student's existing understanding of command-line based Linux system administration. Students should be able to execute common commands using the shell, work with common command options, and access man pages for help. Students lacking this knowledge are strongly encouraged to take Red Hat System Administration I (RH124) and II (RH134) instead.

Course content summary

Outline for this course
Accessing the command line
Log in to a Linux system and run simple commands using the shell.
Managing files from the command line
Work with files from the bash shell prompt.
Managing local Linux users and groups
Manage Linux users and groups and administer local password policies.
Controlling access to files with Linux file system permissions
Set access permissions on files and interpret the security effects of different permission settings.
Managing SELinux security
Use SELinux to manage access to files and interpret and troubleshoot SELinux security effects.
Monitoring and managing Linux processes
Monitor and control processes running on the system.
Installing and updating software packages
Download, install, update, and manage software packages from Red Hat and yum package repositories.
Controlling services and daemons
Control and monitor network services and system daemons using systemd.
Managing Red Hat Enterprise Linux networking
Configure basic IPv4 networking on Red Hat Enterprise Linux systems.
Analyzing and storing logs
Locate and interpret relevant system log files for troubleshooting purposes.
Managing storage and file systems
Create and use disk partitions, logical volumes, file systems, and swap spaces.
Scheduling system tasks
Schedule recurring system tasks using cron and systemd timer units.
Mounting network file systems
Mount network file system (NFS) exports and server message block (SMB) shares from network file servers.
Limiting network communication with firewalld
Configure a basic local firewall.
Virtualization and kickstart
Manage KVMs and install them with Red Hat Enterprise Linux using Kickstart.

[Oct 17, 2018] How to upgrade Red Hat Linux 6.9 to 7.4 Experiences Sharing by Comnet

Oct 17, 2018 | mycomnet.info

Red Hat is one of many Linux distributions, alongside Ubuntu, CentOS, Fedora, and others. Many servers around the world run Red Hat.

I recently had to do an upgrade for one of our clients from Red Hat Linux 6.8 to 7.4, and I would like to show you how I set up a lab to test the upgrade tools. I recommend duplicating the production server's environment for lab testing.

Please note the following:

· This guide aims to show you the tools Red Hat has given us to upgrade from version 6 to version 7

· The environment is only a VM, which does not reflect the actual environment of our client; therefore, the outcome could differ when upgrading a production server!!!

· I assume you can install Red Hat on your VM, which could be VirtualBox, VMware or any other VM application you are familiar with (mine is VirtualBox).

Precaution: please back up your system before running the upgrade, in case anything happens and you need to fresh-install Red Hat 7.4 or roll back to Red Hat 6.9.

1. First things first: check your current Red Hat version. Mine was Red Hat 6.9

2. Be sure to update your Red Hat 6 to the latest version before attempting the preupgrade tools. So, run 'yum update' to update to the latest Red Hat 6.

If an error shows up as below, remember that these steps require an Internet connection to reach the Red Hat servers.

Be sure to register your subscription carefully again. I have found that for systems that have been running for a long time, like my client's, I had to unregister and register again for 'yum update' to run properly.

Refer to thread on https://access.redhat.com/discussions/3066851?tour=8

3. On some systems, the error might say something like 'It is registered but cannot get updates'. The method is the same: run these commands in sequence. Sometimes you don't have to unregister; just refreshing and running attach --auto might do the trick.

sudo subscription-manager remove --all

sudo subscription-manager unregister

sudo subscription-manager clean

Now you can re-register the system, attach the subscriptions

sudo subscription-manager register

sudo subscription-manager refresh

sudo subscription-manager attach --auto

Note that unregistering does not take your server down; it only removes the Red Hat subscription, meaning you cannot get any updates from them, but your server keeps running.

After that, you should be able to run 'yum update' and download the updates. At the end, you should see a screen which means you can now proceed to the upgrade procedure.

4. Then enable the subscription repositories that provide the Preupgrade Assistant.

Then install the Preupgrade Tools
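A minimal sketch of this step, assuming the repository and package names Red Hat documented for RHEL 6 at the time (verify both against your own subscription):

subscription-manager repos --enable rhel-6-server-extras-rpms \
                           --enable rhel-6-server-optional-rpms    # repos shipping the tool
yum -y install preupgrade-assistant preupgrade-assistant-el6toel7  # assistant plus RHEL 6-to-7 content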

5. Once you have installed everything, run the preupgrade tool; it takes a while. This tool examines every package in the system and determines whether there are any errors you need to fix before an upgrade. In my experience, I found solutions to most errors by googling, but that may not always work for your environment.
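For reference, the assessment itself is a single command; listing the available content modules first is optional:

[root@localhost ~]# preupg -l      # list available upgrade content sets
[root@localhost ~]# preupg         # run the full assessment; results land in /root/preupgrade/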

After preupg finishes running, check the file '/root/preupgrade/result.html', which can be viewed in any browser. You can transfer the file to a computer that has a browser.

The file result.html will show all necessary information about your system before an upgrade. Basically, if you see information like 5.1 on the screen, you are good to go.

See 5.2 for all the results after running the Preupgrade tool; be sure to check them all. I found that some items are just informational, but please check the 'needs_action' section carefully.

Scroll down and you will see specific information about each result. Check the remediation description for any 'needs_action' item and perform the suggested instructions.

6. So, you have checked everything from the Preupgrade tool. Now it's time to start an upgrade.

6.1 Install the upgrade tool

[root@localhost ~]# yum -y install redhat-upgrade-tool

6.2 Disable all active repository

[root@localhost ~]# yum -y install yum-utils

[root@localhost ~]# yum-config-manager --disable \*

Now start an upgrade. I recommend saving the ISO file of Red Hat 7.4 to the server and then issuing the command as shown below; it's easier. Alternatively, you could use other options like

--device [DEV]

Device or mount point of mounted install media. If DEV is omitted,

redhat-upgrade-tool will scan all currently-mounted removable devices

(for example USB disks and optical media).

--network RELEASEVER

Online repos. RELEASEVER will be used to replace $releasever variable

if it occurs in some repo URL.

[root@localhost /]# cd /root/

[root@localhost ~]# ls
anaconda-ks.cfg  install.log.syslog  preupgrade          rhel-server-7.4-x86_64-dvd.iso

install.log      playground          preupgrade-results

[root@localhost ~]# redhat-upgrade-tool --iso rhel-server-7.4-x86_64-dvd.iso

Then reboot

[root@localhost ~]# reboot

7. The upgrade is now complete. Check your version after the upgrade!!!! Then don't forget to check that other software and functionality run correctly.

[root@localhost ~]# cat /etc/redhat-release

Red Hat Enterprise Linux Server release 7.4 (Maipo)

---------------------------------------------------------------------------------------------------------------------------

I hope this information more or less guides you through how to upgrade Red Hat Linux 6.9 to 7.4. I'm new to Red Hat myself and still have a lot to learn.

Please let me know your experience upgrading your own Red Hat, or if you have any questions; I will try my best to help. Thanks for reading!

Reference Sites

1. https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/migration_planning_guide/chap-red_hat_enterprise_linux-migration_planning_guide-upgrading

2. https://access.redhat.com/discussions/3066851?tour=8

[Oct 16, 2018] How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command by Prakash Subramanian

Oct 15, 2018 | www.2daygeek.com
It's an important topic for Linux admins (such a wonderful topic), so everyone should be aware of it and practice using it efficiently.

In Linux, whenever we install a package that ships services or daemons, its init or systemd scripts are added by default, but the services are not enabled.

Hence, we need to enable or disable the services manually as required. There are three major init systems in Linux which are widely known and still in use.

What is init System?

In Linux/Unix based operating systems, init (short for initialization) is the first process started by the kernel during system boot.

It holds process ID (PID) 1 and runs continuously in the background until the system is shut down.

Init looks at the /etc/inittab file to decide the Linux run level, then starts all other processes and applications in the background according to that run level.

The BIOS, MBR, GRUB and kernel stages run before the init process as part of the Linux boot process.

Below are the available run levels for Linux (there are seven runlevels, from zero to six):
• 0 - halt
• 1 - single-user mode
• 2 - multi-user mode without NFS
• 3 - full multi-user mode
• 4 - unused
• 5 - multi-user mode with graphical login (X11)
• 6 - reboot

The three init systems described below are widely used in Linux.

What is System V (Sys V)?

System V (Sys V) is the first and traditional init system for Unix-like operating systems. init is the first process started by the kernel during system boot, and it is the parent process of everything.

Most Linux distributions started out using the traditional init system called System V (Sys V). Over the years, several replacement init systems were released to address design limitations in it, such as launchd, the Service Management Facility, systemd and Upstart.

But systemd has been adopted by several major Linux distributions over the traditional SysV init systems.

What is Upstart?

Upstart is an event-based replacement for the /sbin/init daemon which handles starting of tasks and services during boot, stopping them during shutdown and supervising them while the system is running.

It was originally developed for the Ubuntu distribution, but is intended to be suitable for deployment in all Linux distributions as a replacement for the venerable System-V init.

It was used in Ubuntu from 9.10 to 14.10 and in RHEL 6 based systems; after that it was replaced by systemd.

What is systemd?

Systemd is a new init system and system manager which has been adopted by all the major Linux distributions in place of the traditional SysV init systems.

systemd is compatible with SysV and LSB init scripts. It can work as a drop-in replacement for sysvinit. systemd is the first process started by the kernel and holds PID 1.

It is the parent process of everything, and Fedora 15 was the first distribution to adopt systemd instead of Upstart. systemctl is the command-line utility and primary tool to manage systemd daemons/services (start, restart, stop, enable, disable, reload & status).

systemd uses .service files instead of the bash scripts SysVinit uses. systemd sorts all daemons into their own Linux cgroups, and you can see the system hierarchy by exploring the /sys/fs/cgroup/systemd directory.
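As a quick illustration of systemctl as that primary tool (sshd is used here purely as an example service):

# systemctl status sshd.service     # show current state and recent log lines
# systemctl enable sshd.service     # start the service at boot
# systemctl disable sshd.service    # do not start the service at boot
# systemctl restart sshd.service    # restart the running service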

How to Enable or Disable Services on Boot Using the chkconfig Command?

The chkconfig utility is a command-line tool that allows you to specify in which runlevel to start a selected service, as well as to list all available services along with their current settings.

Also, it allows us to enable or disable a service at boot (see the example below). Make sure you have superuser privileges (either root or sudo) to use this command.

All the service scripts are located in /etc/rc.d/init.d.
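For example, enabling or disabling a service at boot looks like this (auditd is used only as an example service):

# chkconfig auditd on                # enable auditd in its default runlevels (2,3,4,5)
# chkconfig auditd off               # disable it in those runlevels
# chkconfig --level 35 auditd on     # enable it only in runlevels 3 and 5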

How to list All Services in run-level

The --list parameter displays all the services along with their current status (in which run-levels each service is enabled or disabled).

# chkconfig --list
NetworkManager     0:off    1:off    2:on    3:on    4:on    5:on    6:off
abrt-ccpp          0:off    1:off    2:off    3:on    4:off    5:on    6:off
abrtd              0:off    1:off    2:off    3:on    4:off    5:on    6:off
acpid              0:off    1:off    2:on    3:on    4:on    5:on    6:off
atd                0:off    1:off    2:off    3:on    4:on    5:on    6:off
auditd             0:off    1:off    2:on    3:on    4:on    5:on    6:off
.
.

How to check the Status of Specific Service

If you would like to see a particular service's status per run-level, use the following format and grep for the required service.

In this case, we are going to check the auditd service status per run-level:
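A minimal example, consistent with the listing above:

# chkconfig --list | grep auditd
auditd             0:off    1:off    2:on    3:on    4:on    5:on    6:off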

[Oct 15, 2018] Systemd as doord interface for cars ;-) by Nico Schottelius

Highly recommended!
Notable quotes:
"... Let's say every car manufacturer recently discovered a new technology named "doord", which lets you open up car doors much faster than before. It only takes 0.05 seconds, instead of 1.2 seconds on average. So every time you open a door, you are much, much faster! ..."
"... Unfortunately though, sometimes doord does not stop the engine. Or if it is cold outside, it stops the ignition process, because it takes too long. Doord also changes the way how your navigation system works, because that is totally related to opening doors ..."
Oct 15, 2018 | blog.ungleich.ch

Let's say every car manufacturer recently discovered a new technology named "doord", which lets you open up car doors much faster than before. It only takes 0.05 seconds, instead of 1.2 seconds on average. So every time you open a door, you are much, much faster!

Many of the manufacturers decide to implement doord, because the company providing doord makes it clear that it is beneficial for everyone. And in addition to opening doors faster, it also standardises things. How do you turn on your car? It is the same now everywhere; it is not necessary to look for the keyhole anymore.

Unfortunately though, sometimes doord does not stop the engine. Or if it is cold outside, it stops the ignition process, because it takes too long. Doord also changes the way how your navigation system works, because that is totally related to opening doors, but leads to some users being unable to navigate, which is accepted as collateral damage. In the end, you at least have faster door opening and a standard way to turn on the car. Oh, and if you are in a traffic jam and have to restart the engine often, it will stop restarting it after several times, because that's not what you are supposed to do. You can open the engine hood and tune that setting though, but it will be reset once you buy a new car.

[Oct 15, 2018] Future History of Init Systems

Oct 15, 2018 | linux.slashdot.org

AntiSol ( 1329733 ) , Saturday August 29, 2015 @03:52PM ( #50417111 )

Re:Approaching the Singularity ( Score: 4 , Funny)

Future History of Init Systems

Future History of Init Systems
  • 2015: systemd becomes default boot manager in debian.
  • 2017: "complete, from-scratch rewrite" [jwz.org]. In order to not have to maintain backwards compatibility, project is renamed to system-e.
• 2019: debut of systemf, absorption of other projects including alsa, pulseaudio, xorg, GTK, and opengl.
  • 2021: systemg maintainers make the controversial decision to absorb The Internet Archive. Systemh created as a fork without Internet Archive.
  • 2022: systemi, a fork of systemf focusing on reliability and minimalism becomes default debian init system.
  • 2028: systemj, a complete, from-scratch rewrite is controversial for trying to reintroduce binary logging. Consensus is against the systemj devs as sysadmins remember the great systemd logging bug of 2017 unkindly. Systemj project is eventually abandoned.
  • 2029: systemk codebase used as basis for a military project to create a strong AI, known as "project skynet". Software behaves paradoxically and project is terminated.
  • 2033: systeml - "system lean" - a "back to basics", from-scratch rewrite, takes off on several server platforms, boasting increased reliability. systemm, "system mean", a fork, used in security-focused distros.
  • 2117: critical bug discovered in the long-abandoned but critical and ubiquitous system-r project. A new project, system-s, is announced to address shortcomings in the hundred-year-old codebase. A from-scratch rewrite begins.
  • 2142: systemu project, based on a derivative of systemk, introduces "Artificially intelligent init system which will shave 0.25 seconds off your boot time and absolutely definitely will not subjugate humanity". Millions die. The survivors declare "thou shalt not make an init system in the likeness of the human mind" as their highest law.
  • 2147: systemv - a collection of shell scripts written around a very simple and reliable PID 1 introduced, based on the brand new religious doctrines of "keep it simple, stupid" and "do one thing, and do it well". People's computers start working properly again, something few living people can remember. Wyld Stallyns release their 94th album. Everybody lives in peace and harmony.

[Oct 15, 2018] I honestly, seriously sometimes wonder if systemd is Skynet... or, a way for Skynet to 'waken'.

Notable quotes:
"... Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. At 2:15am it crashes. No one knows why. The binary log file was corrupted in the process and is unrecoverable. ..."
Oct 15, 2018 | linux.slashdot.org

thegarbz ( 1787294 ) , Sunday August 30, 2015 @04:08AM ( #50419549 )

Re:Hang on a minute... ( Score: 5 , Funny)
I honestly, seriously sometimes wonder if systemd is Skynet... or, a way for Skynet to 'waken'.

Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. At 2:15am it crashes. No one knows why. The binary log file was corrupted in the process and is unrecoverable.

All anyone could remember is a bug listed in the systemd bug tracker talking about su which was classified as WON'T FIX as the developer thought it was a broken concept.

[Oct 15, 2018] Oh look, another Powershell

Notable quotes:
"... Upcoming systemd re-implementations of standard utilities: ls to be replaced by filectl directory contents [pathname] grep to be replaced by datactl file contents search [plaintext] (note: regexp no longer supported as it's ambiguous) gimp to be replaced by imagectl open file filename draw box [x1,y1,x2,y2] draw line [x1,y1,x2,y2] ... ..."
Oct 15, 2018 | linux.slashdot.org

Anonymous Coward , Saturday August 29, 2015 @11:37AM ( #50415825 )

Cryptic command names ( Score: 5 , Funny)

Great to see that systemd is finally doing something about all of those cryptic command names that plague the unix ecosystem.

Upcoming systemd re-implementations of standard utilities:
• ls to be replaced by: filectl directory contents [pathname]
• grep to be replaced by: datactl file contents search [plaintext] (note: regexp no longer supported as it's ambiguous)
• gimp to be replaced by: imagectl open file filename draw box [x1,y1,x2,y2] draw line [x1,y1,x2,y2] ...

Anonymous Coward , Saturday August 29, 2015 @11:58AM ( #50415939 )
Re: Cryptic command names ( Score: 3 , Funny)

Oh look, another Powershell

[Oct 15, 2018] They should have just rename the machinectl into command.com.

Oct 15, 2018 | linux.slashdot.org

RabidReindeer ( 2625839 ) , Saturday August 29, 2015 @11:38AM ( #50415833 )

What's with all the awkward systemd command names? ( Score: 5 , Insightful)

I know systemd sneers at the old Unix convention of keeping it simple, keeping it separate, but that's not the only convention they spit on. God intended Unix (Linux) commands to be cryptic things 2-4 letters long (like "su", for example). Not "systemctl", "machinectl", "journalctl", etc. Might as well just give everything a 47-character long multi-word command like the old Apple commando shell did.

Seriously, though, when you're banging through system commands all day long, it gets old and their choices aren't especially friendly to tab completion. On top of which why is "machinectl" a shell and not some sort of hardware function? They should have just named the bloody thing command.com.

[Oct 15, 2018] Breaking News! SUSE Linux Sold for $2.5 Billion It's FOSS by Abhishek Prakash

Acquisition by a private equity shark is never good news for a software vendor...
Jul 03, 2018 | itsfoss.com

British software company Micro Focus International has agreed to sell SUSE Linux and its associated software business to Swedish private equity group EQT Partners for $2.535 billion. Read the details. (rm, 3 months ago)


Novell acquired SUSE in 2003 for $210 million. (asoc, 4 months ago)


"It has over 1400 employees all over the globe "
They should be updating their CVs.

[Oct 08, 2018] RHCSA Exam Training by Infinite Skills

For $29.99 they provide a course with 6.5 hours of on-demand video
Oct 08, 2018 | www.udemy.com

Description

This Red Hat Certified Systems Administrator Exam EX200 training course from Infinite Skills will teach you everything you need to know to become a Red Hat Certified System Administrator (RHCSA) and pass the EX200 Exam. This course is designed for users that are familiar with Red Hat Enterprise Linux environments.

You will start by learning the fundamentals, such as basic shell commands, creating and modifying users, and changing passwords. The course will then teach you about the shell, explaining how to manage files, use the stream editor, and locate files. This video tutorial will also cover system management, including booting and rebooting, network services, and installing packages. Other topics that are covered include storage management, server management, virtual machines, and security.

Once you have completed this computer based training course, you will be fully capable of taking the RHCSA EX200 exam and becoming a Red Hat Certified System Administrator.

*Infinite Skills has no affiliation with Red Hat, Inc. The Red Hat trademark is used for identification purposes only and is not intended to indicate affiliation with or approval by Red Hat, Inc

[Sep 04, 2018] Unifying custom scripts system-wide with rpm on Red Hat-CentOS

Highly recommended!
Aug 24, 2018 | linuxconfig.org
Objective

Our goal is to build rpm packages with custom content, unifying scripts across any number of systems, including versioning, deployment and undeployment.

Requirements

Privileged access to the system for install, normal access for build.

Difficulty: MEDIUM

Introduction

One of the core features of any Linux system is that they are built for automation. If a task may need to be executed more than once - even with some part of it changing on the next run - a sysadmin has countless tools to automate it, from simple shell scripts run by hand on demand (thus eliminating typos, or just saving a few keystrokes) to complex scripted systems, where tasks run from cron at specified times, interacting with each other, working with the results of other scripts, perhaps controlled by a central management system, and so on.

While this freedom and rich toolset indeed adds to productivity, there is a catch: as a sysadmin, you write a useful script on a system, which proves to be useful on another, so you copy the script over. On a third system the script is useful too, but with minor modification - maybe a new feature useful only that system, reachable with a new parameter. Generalization in mind, you extend the script to provide the new feature, and complete the task it was written for as well. Now you have two versions of the script, the first is on the first two system, the second in on the third system.

You have 1024 computers running in the datacenter, and 256 of them will need some of the functionality provided by that script. In time you will have 64 versions of the script all over, every version doing its job. On the next system deployment you need a feature you recall you coded at some version, but which? And on which systems are they?

On RPM based systems, such as Red Hat flavors, a sysadmin can take advantage of the package manager to create order in the custom content, including simple shell scripts that may not provide else but the tools the admin wrote for convenience.

In this tutorial we will build a custom rpm for Red Hat Enterprise Linux 7.5 containing two bash scripts, parselogs.sh and pullnews.sh to provide a way that all systems have the latest version of these scripts in the /usr/local/sbin directory, and thus on the path of any user who logs in to the system.




Distributions, major and minor versions

In general, the major and minor version of the build machine should be the same as on the systems the package is to be deployed to, as should the distribution, to ensure compatibility. If there are various versions of a given distribution, or even different distributions with many versions in your environment (oh, joy!), you should set up build machines for each. To cut the work short, you can set up a build environment for each distribution and each major version, and keep them on the lowest minor version existing in your environment for that major version. Of course they don't need to be physical machines, and they only need to be running at build time, so you can use virtual machines or containers.

In this tutorial our work is much easier: we only deploy two scripts that have no dependencies at all (except bash), so we will build noarch packages, which stands for "not architecture dependent", and we will not specify the distribution the package is built for. This way we can install and upgrade them on any distribution that uses rpm, and on any version - we only need to ensure that the build machine's rpm-build package is on the oldest version in the environment.

Setting up the build environment

To build custom rpm packages, we need to install the rpm-build package:

# yum install rpm-build
From now on, we do not use the root user, and for a good reason: building packages does not require root privileges, and you don't want to break your build machine.

Building the first version of the package

Let's create the directory structure needed for building:

$ mkdir -p rpmbuild/SPECS
Our package is called admin-scripts, version 1.0. We create a specfile that specifies the metadata, contents and tasks performed by the package. This is a simple text file we can create with our favorite text editor, such as vi . The previously installed rpmbuild package will fill your empty specfile with template data if you use vi to create an empty one, but for this tutorial consider the specification below called admin-scripts-1.0.spec :



Name:           admin-scripts
Version:        1
Release:        0
Summary:        FooBar Inc. IT dept. admin scripts
Packager:       John Doe 
Group:          Application/Other
License:        GPL
URL:            www.foobar.com/admin-scripts
Source0:        %{name}-%{version}.tar.gz
BuildArch:      noarch

%description
Package installing latest version the admin scripts used by the IT dept.

%prep
%setup -q


%build

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/local/sbin
cp scripts/* $RPM_BUILD_ROOT/usr/local/sbin/

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root,-)
%dir /usr/local/sbin
/usr/local/sbin/parselogs.sh
/usr/local/sbin/pullnews.sh

%doc

%changelog
* Wed Aug 1 2018 John Doe 
- release 1.0 - initial release
Place the specfile in the rpmbuild/SPECS directory we created earlier.

We need the sources referenced in the specfile - in this case the two shell scripts. Let's create the directory for the sources (named after the package, with the major version appended):

$ mkdir -p rpmbuild/SOURCES/admin-scripts-1/scripts
And copy/move the scripts into it:
$ ls rpmbuild/SOURCES/admin-scripts-1/scripts/
parselogs.sh  pullnews.sh



As this tutorial is not about shell scripting, the contents of these scripts are irrelevant. As we will create a new version of the package, and pullnews.sh is the script we will demonstrate with, its source in the first version is as below:
#!/bin/bash
echo "news pulled"
exit 0
Do not forget to add the appropriate rights to the files in the source - in our case, execution right:
chmod +x rpmbuild/SOURCES/admin-scripts-1/scripts/*.sh
Now we create a tar.gz archive from the source in the same directory:
cd rpmbuild/SOURCES/ && tar -czf admin-scripts-1.tar.gz admin-scripts-1
We are ready to build the package:
rpmbuild --bb rpmbuild/SPECS/admin-scripts-1.0.spec
We'll get some output about the build, and if anything goes wrong, errors will be shown (for example, missing file or path). If all goes well, our new package will appear in the RPMS directory generated by default under the rpmbuild directory (sorted into subdirectories by architecture):
$ ls rpmbuild/RPMS/noarch/
admin-scripts-1-0.noarch.rpm
We have created a simple yet fully functional rpm package. We can query it for all the metadata we supplied earlier:
$ rpm -qpi rpmbuild/RPMS/noarch/admin-scripts-1-0.noarch.rpm 
Name        : admin-scripts
Version     : 1
Release     : 0
Architecture: noarch
Install Date: (not installed)
Group       : Application/Other
Size        : 78
License     : GPL
Signature   : (none)
Source RPM  : admin-scripts-1-0.src.rpm
Build Date  : 2018. aug.  1., Wed, 13.27.34 CEST
Build Host  : build01.foobar.com
Relocations : (not relocatable)
Packager    : John Doe 
URL         : www.foobar.com/admin-scripts
Summary     : FooBar Inc. IT dept. admin scripts
Description :
Package installing latest version the admin scripts used by the IT dept.
And of course we can install it (with root privileges):
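The original showed this step as a screenshot; it boils down to a plain rpm install, something like:

# rpm -ivh rpmbuild/RPMS/noarch/admin-scripts-1-0.noarch.rpm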



As we installed the scripts into a directory that is on every user's $PATH , you can run them as any user in the system, from any directory:
$ pullnews.sh 
news pulled
The package can be distributed as it is, and can be pushed into repositories available to any number of systems. Doing so is out of the scope of this tutorial - however, building another version of the package is certainly not.

Building another version of the package

Our package and the extremely useful scripts in it become popular in no time, considering they are reachable anywhere with a simple yum install admin-scripts within the environment. There will soon be many requests for improvements - in this example, many votes come from happy users asking that pullnews.sh print another line on execution; this feature would save the whole company. We need to build another version of the package: we don't want to install another script, but a new version of it with the same name and path, as the sysadmins in our organization already rely on it heavily.

First we change the source of the pullnews.sh in the SOURCES to something even more complex:

#!/bin/bash
echo "news pulled"
echo "another line printed"
exit 0
We need to recreate the tar.gz with the new source content - we can use the same filename as the first time, as we don't change the version, only the release (so the Source0 reference will still be valid). Note that we delete the previous archive first:
cd rpmbuild/SOURCES/ && rm -f admin-scripts-1.tar.gz && tar -czf admin-scripts-1.tar.gz admin-scripts-1
Now we create another specfile with a higher release number:
cp rpmbuild/SPECS/admin-scripts-1.0.spec rpmbuild/SPECS/admin-scripts-1.1.spec
We don't change much in the package itself, so we simply record the new release as shown below:
Name:           admin-scripts
Version:        1
Release:        1
Summary:        FooBar Inc. IT dept. admin scripts
Packager:       John Doe 
Group:          Application/Other
License:        GPL
URL:            www.foobar.com/admin-scripts
Source0:        %{name}-%{version}.tar.gz
BuildArch:      noarch

%description
Package installing latest version the admin scripts used by the IT dept.

%prep
%setup -q


%build

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/local/sbin
cp scripts/* $RPM_BUILD_ROOT/usr/local/sbin/

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root,-)
%dir /usr/local/sbin
/usr/local/sbin/parselogs.sh
/usr/local/sbin/pullnews.sh

%doc

%changelog
* Wed Aug 22 2018 John Doe 
- release 1.1 - pullnews.sh v1.1 prints another line
* Wed Aug 1 2018 John Doe 
- release 1.0 - initial release



All done, we can build another version of our package containing the updated script. Note that we reference the specfile with the higher release number as the source of the build:
rpmbuild --bb rpmbuild/SPECS/admin-scripts-1.1.spec
If the build is successful, we now have two versions of the package under our RPMS directory:
ls rpmbuild/RPMS/noarch/
admin-scripts-1-0.noarch.rpm  admin-scripts-1-1.noarch.rpm
And now we can install the "advanced" script, or upgrade it if it is already installed.
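Again, the original showed this as a screenshot; the upgrade itself is simply an rpm -U of the new release, e.g.:

# rpm -Uvh rpmbuild/RPMS/noarch/admin-scripts-1-1.noarch.rpm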

And our sysadmins can see that the feature request has landed in this version:

rpm -q --changelog admin-scripts
* sze aug 22 2018 John Doe 
- release 1.1 - pullnews.sh v1.1 prints another line

* sze aug 01 2018 John Doe 
- release 1.0 - initial release
Conclusion

We wrapped our custom content into versioned rpm packages. This means no old versions are left scattered across systems; everything is in its place, at the version we installed or upgraded to. RPM gives us the ability to replace old stuff needed only in previous versions, and it can add custom dependencies or provide tools or services our other packages rely on. With effort, we can pack nearly any of our custom content into rpm packages and distribute it across our environment, not only with ease, but with consistency.

[Aug 07, 2018] May I sort the /etc/group and /etc/passwd files

Aug 07, 2018 | unix.stackexchange.com

Ned64 ,Feb 18 at 13:52

My /etc/group has grown by adding new users as well as installing programs that have added their own user and/or group. The same is true for /etc/passwd . Editing has now become a little cumbersome due to the lack of structure.

May I sort these files (e.g. numerically by id or alphabetically by name) without negative effects on the system and/or package managers?

I would guess that it does not matter, but just to be sure I would like to get a second opinion. Maybe root needs to be the first line or within the first 1k lines or something?

The same goes for /etc/*shadow .

Kevin ,Feb 19 at 23:50

"Editing has now become a little cumbersome due to the lack of structure" Why are you editing those files by hand? – Kevin Feb 19 at 23:50

Barmar ,Feb 21 at 20:51

How does sorting the file help with editing? Is it because you want to group related accounts together, and then do similar changes in a range of rows? But will related account be adjacent if you sort by uid or name? – Barmar Feb 21 at 20:51

Ned64 ,Mar 13 at 23:15

@Barmar It has helped mainly because user accounts are grouped by ranges and separate from system accounts (when sorting by UID). Therefore it is easier e.g. to spot the correct line to examine or change when editing with vi . – Ned64 Mar 13 at 23:15

ErikF ,Feb 18 at 14:12

You should be OK doing this: in fact, according to the article and the documentation, you can sort /etc/passwd and /etc/group by UID/GID with pwck -s and grpck -s, respectively.
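A short illustration (the -s switch rewrites the files in place, so keep copies first; the backup path is just an example):

# mkdir -p /root/backup && cp -a /etc/passwd /etc/shadow /etc/group /etc/gshadow /root/backup/
# pwck -s      # sort /etc/passwd and /etc/shadow by UID
# grpck -s     # sort /etc/group and /etc/gshadow by GID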

hvd ,Feb 18 at 22:59

@Menasheh This site's colours don't make them stand out as much as on other sites, but "OK doing this" in this answer is a hyperlink. – hvd Feb 18 at 22:59

mickeyf ,Feb 19 at 14:05

OK, fine, but... In general, are there valid reasons to manually edit /etc/passwd and similar files? Isn't it considered better to access these via the tools that are designed to create and modify them? – mickeyf Feb 19 at 14:05

ErikF ,Feb 20 at 21:21

@mickeyf I've seen people manually edit /etc/passwd when they're making batch changes, like changing the GECOS field for all users due to moving/restructuring (global room or phone number changes, etc.) It's not common anymore, but there are specific reasons that crop up from time to time. – ErikF Feb 20 at 21:21

hvd ,Feb 18 at 17:28

Although ErikF is correct that this should generally be okay, I do want to point out one potential issue:

You're allowed to map different usernames to the same UID. If you make use of this, tools that map a UID back to a username will generally pick the first username they find for that UID in /etc/passwd . Sorting may cause a different username to appear first. For display purposes (e.g. ls -l output), either username should work, but it's possible that you've configured some program to accept requests from username A, where it will deny those requests if it sees them coming from username B, even if A and B are the same user.
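A hypothetical /etc/passwd fragment showing the hazard - both names map to UID 1000, and a tool resolving 1000 back to a name will report whichever line it finds first:

alice:x:1000:1000:Alice:/home/alice:/bin/bash
deploy:x:1000:1000:deploy alias for alice:/home/alice:/bin/bash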

Rui F Ribeiro ,Feb 19 at 17:53

Having root on the first line has long been the de facto "standard" and is very convenient if you ever have to fix its shell or delete the password when dealing with problems or recovering systems.

Likewise I prefer to have daemons/utils users in the middle and standard users at the end of both passwd and shadow .

hvd's answer also makes a good point about disturbing the user order, especially on systems with many users maintained by hand.

If you manage to sort the files only partially, for instance only the standard users, that would be more sensible than changing the order of all users, imo.

Barmar ,Feb 21 at 20:13

If you sort numerically by UID, you should get your preferred order. Root is always 0 , and daemons conventionally have UIDs under 100. – Barmar Feb 21 at 20:13

Rui F Ribeiro ,Feb 21 at 20:16

@Barmar If sorting by UID and not by name, indeed, thanks for remembering. – Rui F Ribeiro Feb 21 at 20:16

[Aug 07, 2018] Consistency checking of /etc/passwd and /etc/shadow

Aug 07, 2018 | linux-audit.com

Linux distributions usually provide a pwck utility. This small utility checks the consistency of both files and reports any specific issues. By specifying -r, it runs in read-only mode.

Example of running pwck on the /etc/passwd and /etc/shadow files:
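An illustrative invocation and output (the exact warnings depend on the system; these messages are only examples of the format):

# pwck -r /etc/passwd /etc/shadow
user 'ftp': directory '/var/ftp' does not exist
user 'avahi-autoipd': directory '/var/lib/avahi-autoipd' does not exist
pwck: no changes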

[Aug 07, 2018] passwd - Copying Linux users and passwords to a new server

Aug 07, 2018 | serverfault.com
I am migrating a server over to new hardware. Part of the system will be rebuilt. Which files and directories need to be copied so that usernames, passwords, groups, file ownership and file permissions stay intact?

Ubuntu 12.04 LTS.

Mikko Ohtamaa, Mar 20 '14 at 7:54

/etc/passwd - user account information less the encrypted passwords 
/etc/shadow - contains encrypted passwords 
/etc/group - user group information 
/etc/gshadow - group encrypted passwords

Be sure to ensure that the permissions on the files are correct too.

Iain


I did this with Gentoo Linux already and copied the same files listed above - that's it.

If the files on the other machine have different owner IDs, you might change them to the ones in /etc/group and /etc/passwd, and then you have the effective permissions restored.

vanthome

Be careful that you don't delete or renumber system accounts when copying over the files mentioned in the other answers. System services don't usually have fixed user ids, and if you've installed the packages in a different order to the original machine (which is very likely if it was long-lived), then they'll end up in a different order. I tend to copy those files to somewhere like /root/saved-from-old-system and hand-edit them in order to just copy the non-system accounts. (There's probably a tool for this, but I don't tend to copy systems like this often enough to warrant investigating one.)
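If you do hand-edit, a small filter can pre-select the regular accounts for review - a sketch assuming regular users start at UID 1000 (500 on older Red Hat releases) and using the hypothetical /root/saved-from-old-system path from above:

# list accounts with UID >= 1000, skipping the 'nobody' placeholder, into a file for manual review
awk -F: '$3 >= 1000 && $1 != "nobody"' /root/saved-from-old-system/passwd > /root/users-to-merge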

[Jul 30, 2018] Configuring sudo Access

Jul 30, 2018 | access.redhat.com

Note: A Red Hat training course is available: the RHCSA Rapid Track Course.

The sudo command offers a mechanism for providing trusted users with administrative access to a system without sharing the password of the root user. When users given access via this mechanism precede an administrative command with sudo, they are prompted to enter their own password. Once authenticated, and assuming the command is permitted, the administrative command is executed as if run by the root user. Follow this procedure to create a normal user account and give it sudo access. You will then be able to use the sudo command from this user account to execute administrative commands without logging in to the account of the root user.

Procedure 2.2. Configuring sudo Access

  1. Log in to the system as the root user.
  2. Create a normal user account using the useradd command. Replace USERNAME with the user name that you wish to create.
    # useradd USERNAME
  3. Set a password for the new user using the passwd command.
    # passwd USERNAME
    Changing password for user USERNAME.
    New password: 
    Retype new password: 
    passwd: all authentication tokens updated successfully.
    
  4. Run visudo to edit the /etc/sudoers file. This file defines the policies applied by the sudo command.
    # visudo
    
  5. Find the lines in the file that grant sudo access to users in the group wheel when enabled.
    ## Allows people in group wheel to run all commands
    # %wheel        ALL=(ALL)       ALL
    
  6. Remove the comment character ( # ) at the start of the second line. This enables the configuration option.
  7. Save your changes and exit the editor.
  8. Add the user you created to the wheel group using the usermod command.
    # usermod -aG wheel USERNAME
    
  9. Test that the updated configuration allows the user you created to run commands using sudo .
    1. Use the su command to switch to the new user account that you created.
      # su - USERNAME
      
    2. Use the groups command to verify that the user is in the wheel group.
      $ groups
      USERNAME wheel
      
    3. Use the sudo command to run the whoami command. As this is the first time you have run a command using sudo from this user account, the banner message will be displayed. You will also be prompted to enter the password for the user account.
      $ sudo whoami
      We trust you have received the usual lecture from the local System
      Administrator. It usually boils down to these three things:
      
          #1) Respect the privacy of others.
          #2) Think before you type.
          #3) With great power comes great responsibility.
      
      [sudo] password for USERNAME:
      root
      
      The last line of the output is the user name returned by the whoami command. If sudo is configured correctly, this value will be root.
You have successfully configured a user with sudo access. You can now log in to this user account and use sudo to run commands as if you were logged in to the account of the root user.
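On recent Red Hat releases /etc/sudoers includes the /etc/sudoers.d directory, so an alternative to steps 4-7 above is a drop-in fragment; a minimal sketch (the file name wheel-admins is arbitrary, and sudo ignores file names containing a dot or ending in '~'):

# echo '%wheel ALL=(ALL) ALL' > /etc/sudoers.d/wheel-admins
# chmod 0440 /etc/sudoers.d/wheel-admins
# visudo -cf /etc/sudoers.d/wheel-admins

The visudo -cf syntax check matters: a syntax error in any sudoers fragment can lock everyone out of sudo.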

[Jul 30, 2018] 10 Useful Sudoers Configurations for Setting 'sudo' in Linux

Jul 30, 2018 | www.tecmint.com

Below are ten /etc/sudoers file configurations to modify the behavior of the sudo command using Defaults entries.

$ sudo cat /etc/sudoers
#
# This file MUST be edited with the 'visudo' command as root.
#
# Please consider adding local content in /etc/sudoers.d/ instead of
# directly modifying this file.
#
# See the man page for details on how to write a sudoers file.
#
Defaults        env_reset
Defaults        mail_badpass
Defaults        secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
Defaults        logfile="/var/log/sudo.log"
Defaults        lecture="always"
Defaults        badpass_message="Password is wrong, please try again"
Defaults        passwd_tries=5
Defaults        insults
Defaults        log_input,log_output
Types of Defaults Entries
Defaults                parameter,   parameter_list     #affects all users on any host
Defaults@Host_List      parameter,   parameter_list     #affects all users on a specific host
Defaults:User_List      parameter,   parameter_list     #affects a specific user
Defaults!Cmnd_List      parameter,   parameter_list     #affects a specific command
Defaults>Runas_List     parameter,   parameter_list     #affects commands being run as a specific user

For the scope of this guide, we will focus on the first type of Defaults, in the forms shown below. Parameters may be flags, integer values, strings, or lists.

Note that flags are implicitly boolean and can be turned off using the '!' operator, and that lists have two additional assignment operators: += (add to list) and -= (remove from list).

Defaults     parameter
OR
Defaults     parameter=value
OR
Defaults     parameter -=value   
Defaults     parameter +=value  
OR
Defaults     !parameter
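Putting these operators together, a hypothetical set of entries might look like this (the user name deploy is invented for illustration):

Defaults        env_reset                     # flag: start from a clean environment
Defaults        !insults                      # flag turned off with '!'
Defaults        env_keep += "SSH_AUTH_SOCK"   # append to a list parameter
Defaults:deploy passwd_tries=3, !lecture      # overrides for one user only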

[Jul 30, 2018] Configuring sudo and adding users to Wheel group

Here you can find an additional example of granting access via sudo to all commands in a particular directory...
Formatting changed and some errors corrected...
Nov 28, 2014 | linuxnlenux.wordpress.com
If a server needs to be administered by a number of people it is normally not a good idea for them all to use the root account. This is because it becomes difficult to determine exactly who did what, when and where if everyone logs in with the same credentials. The sudo utility was designed to overcome this difficulty.

With sudo (which stands for "superuser do"), you can delegate a limited set of administrative responsibilities to other users, who are strictly limited to the commands you allow them. sudo creates a thorough audit trail, so everything users do gets logged; if users somehow manage to do something they shouldn't have, you'll be able to detect it and apply the needed fixes. You can even configure sudo centrally, so its permissions apply to several hosts.

A privileged command is run by prefixing it with the word sudo, followed by the command's regular syntax. When running a command with the sudo prefix, you will be prompted for your regular password before it is executed. You may then run other privileged commands using sudo within a five-minute period without being re-prompted for a password. All commands run via sudo are logged through syslog; on Red Hat systems they typically land in /var/log/secure.

The sudo configuration file is /etc/sudoers. You should never edit this file directly. Instead, use the visudo command: # visudo

This protects against conflicts (when two admins edit the file at the same time), checks that the syntax is correct before saving, and keeps the file's permission bits correct. By default the program uses the vi text editor.
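If you prefer a different editor, the editors acceptable to visudo can be set in sudoers itself via the editor parameter; a minimal sketch (the paths are assumptions, check where the editors live on your system):

Defaults editor="/usr/bin/vim:/usr/bin/nano"

visudo picks the first editor in the list that exists and is executable (or the one matching your EDITOR variable, if the env_editor flag is enabled).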

All Access to Specific Users

You can grant users user1 and user2 full access to all privileged commands with this sudoers entry.

user1, user2 ALL=(ALL) ALL

This is generally not a good idea, because it allows user1 and user2 to use the su command to grant themselves permanent root privileges, thereby bypassing the command logging features of sudo.

Access for Specific Users to Specific Commands

This entry allows user1 and all the members of the group operator to gain access to all the program files in the /sbin and /usr/sbin directories, plus the privilege of running the command /usr/apps/check.pl.

user1, %operator ALL= /sbin/, /usr/sbin/, /usr/apps/check.pl

Access to Specific Commands as Another User

This entry allows user1 to run the kill and pkill commands as the user accounts (see the invocation sketched after the entry):

user1 ALL=(accounts) /bin/kill, /usr/bin/kill, /usr/bin/pkill
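With a rule like this the target user is selected with sudo's -u option; a hypothetical invocation (the process name report-batch is made up):

$ sudo -u accounts /usr/bin/pkill -u accounts report-batch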

Access Without Needing Passwords

This example allows all users in the group operator to execute all the commands in the /sbin directory without the need for entering a password.

%operator ALL= NOPASSWD: /sbin/
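With that entry in place, a member of operator can run any command under /sbin with no password prompt, for example (interface name hypothetical):

$ sudo /sbin/ifconfig eth0 down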

Adding users to the wheel group

The wheel group is a legacy from UNIX. When a server had to be maintained at a higher level than the day-to-day system administrator, root rights were often required. The 'wheel' group was used to create a pool of user accounts that were allowed to get that level of access to the server. If you weren't in the 'wheel' group, you were denied access to root.

Edit the configuration file (/etc/sudoers) with visudo and change these lines:

# Uncomment to allow people in group wheel to run all commands
# %wheel ALL=(ALL) ALL

To this (as recommended):

# Uncomment to allow people in group wheel to run all commands
%wheel ALL=(ALL) ALL

This will allow anyone in the wheel group to execute commands using sudo (rather than having to add each person one by one).

Now, finally, use the following command to add a user (e.g., user1) to the wheel group. Note the -a flag below: without it, usermod would replace the user's existing supplementary groups instead of appending wheel to them.

# usermod -aG wheel user1
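To confirm that the membership took effect (the user may need to log out and back in for it to apply to existing sessions):

# id -nG user1
user1 wheel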

[Jul 30, 2018] Non-root user getting root access after running sudo vi -etc-hosts

Notable quotes:
"... as the original user ..."
Jul 30, 2018 | unix.stackexchange.com

Gilles, Mar 10, 2018 at 10:24

If sudo vi /etc/hosts is successful, it means that the system administrator has allowed the user to run vi /etc/hosts as root. That's the whole point of sudo: it lets the system administrator authorize certain users to run certain commands with extra privileges.

Giving a user the permission to run vi gives them the permission to run any vi command, including :sh to run a shell and :w to overwrite any file on the system. A rule allowing only vi /etc/hosts therefore does not make any sense, since it still allows the user to run arbitrary commands.

There is no "hacking" involved. The breach of security comes from a misconfiguration, not from a hole in the security model. Sudo does not particularly try to protect against misconfiguration. Its documentation is well known to be difficult to understand; if in doubt, ask around and don't try to do things that are too complicated.

It is in general a hard problem to give a user a specific privilege without giving them more than intended. A bulldozer approach like giving them the right to run an interactive program such as vi is bound to fail. A general piece of advice is to give the minimum privileges necessary to accomplish the task. If you want to allow a user to modify one file, don't give them the permission to run an editor. Instead, either:

  • Give them the permission to write to the file. This is the simplest method with the least risk of doing something you didn't intend.
    setfacl -m u:bob:rw /etc/hosts
    
  • Give them permission to edit the file via sudo. To do that, don't give them the permission to run an editor. As explained in the sudo documentation, give them the permission to run sudoedit , which invokes an editor as the original user and then uses the extra privileges only to modify the file.
    bob ALL = sudoedit /etc/hosts

    The sudo method is more complicated to set up, and is less transparent for the user because they have to invoke sudoedit instead of just opening the file in their editor, but has the advantage that all accesses are logged.
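To use such a rule, bob runs sudoedit instead of sudo vi; the editor it spawns (as bob, not as root) can be selected with the SUDO_EDITOR, VISUAL or EDITOR environment variables:

$ SUDO_EDITOR=vim sudoedit /etc/hosts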

Note that allowing a user to edit /etc/hosts may have an impact on your security infrastructure: if there's any place where you rely on a host name corresponding to a specific machine, then that user will be able to point it to a different machine. Consider that it is probably unnecessary anyway.

[Jun 21, 2018] Create a Sudo Log File by Aaron Kili

Jun 21, 2018 | www.tecmint.com

By default, sudo logs through syslog(3). However, to specify a custom log file, use the logfile parameter like so:

Defaults  logfile="/var/log/sudo.log"

To log the hostname and the four-digit year in the custom log file, use the log_host and log_year parameters respectively, as follows:

Defaults  log_host, log_year, logfile="/var/log/sudo.log"
Log Sudo Command Input/Output

The log_input and log_output parameters enable sudo to run a command in a pseudo-tty and to log all user input and all output sent to the screen, respectively.

The default I/O log directory is /var/log/sudo-io, and each session is stored there under its own session sequence number. You can specify a custom directory through the iolog_dir parameter.

Defaults   log_input, log_output
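Sessions recorded this way can be listed and replayed with the sudoreplay utility that ships with sudo (the session ID below is hypothetical):

# sudoreplay -l
# sudoreplay 000001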

Some escape sequences are supported, such as %{seq}, which expands to a monotonically increasing base-36 sequence number, such as 000001, where every two digits are used to form a new directory, e.g. 00/00/01, as in the example below: